May 27, 2016

Great website to convert files.
This website comes with a great set of conversion tools.
You can pay or use it for free, as a VISITOR or a MEMBER; if you register, your user account will have a 1 GB maximum file size.
The price ranges from free up to the 1m Unlim PASS at $29.99. Any type of file can be converted: archives, audio, documents, eBooks, images, presentations, and videos.
You can also use tools for PDF, such as unlock, OCR, and compress.
One simple example: I converted a 9.34 MB .avi file to a 14.51 KB .swf file, and it works well on the free VISITOR account.

For documents, the following formats are supported:

Format  Description
CSV     Comma-Separated Values
DOC     Microsoft Word Document
DOCX    Microsoft Office Open XML
HTML    HyperText Markup Language
ODT     ODF Text Document
PDF     Portable Document Format
RTF     Rich Text Format
TXT     Plain Text File
XLS     Microsoft Excel Worksheet (97-2003)
XLSX    Office Open XML Worksheet

This website doesn't convert 3D objects and meshes.
All systems go
New status good: Everything seems to be working. for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
OpenShift and SSSD Part 3: Extended LDAP Attributes


This is the third post in a series on setting up advanced authentication mechanisms with OpenShift Origin. This entry will build upon the foundation created earlier, so if you haven’t already gone through that tutorial, start here and continue here.

Configuring Extended LDAP Attributes

Prerequisites

  • SSSD 1.12.0 or later. This is available on Red Hat Enterprise Linux 7.0 and later.
  • mod_lookup_identity 0.9.4 or later. This is not yet available on any version of Red Hat Enterprise Linux 7, but RPMs for this platform are available upstream at this COPR repository until they arrive in Red Hat Enterprise Linux.

Configuring SSSD

First, we need to ask SSSD to look up attributes in LDAP that it normally doesn’t care about for simple system-login use-cases. In the OpenShift case, there’s really only one such attribute: email. So we need to modify the [domain/DOMAINNAME] section of /etc/sssd/sssd.conf on the authenticating proxy and add this attribute:

ldap_user_extra_attrs = mail

Next, we also have to tell SSSD that it’s acceptable for this attribute to be retrieved by apache, so we need to add the following two lines to the [ifp] section of /etc/sssd/sssd.conf as well:

user_attributes = +mail
allowed_uids = apache, root
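
If you want to script these two edits rather than make them by hand, here is a minimal sketch using Python's configparser. The domain name and helper function are hypothetical; in practice you would run something like this as root against /etc/sssd/sssd.conf:

```python
import configparser
import io

def enable_ifp_mail(conf_text: str, domain: str) -> str:
    """Add the mail attribute to the domain section and open it up to the
    [ifp] responder, mirroring the manual sssd.conf edits described above."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    # Ask SSSD to fetch the extra LDAP attribute for this domain
    cp[f"domain/{domain}"]["ldap_user_extra_attrs"] = "mail"
    # Allow apache (and root) to read it over the InfoPipe
    if not cp.has_section("ifp"):
        cp.add_section("ifp")
    cp["ifp"]["user_attributes"] = "+mail"
    cp["ifp"]["allowed_uids"] = "apache, root"
    buf = io.StringIO()
    cp.write(buf)
    return buf.getvalue()
```

Note that configparser rewrites the whole file, so comments are lost; for a real sssd.conf you may prefer to edit in place.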

Now we should be able to restart SSSD and test this configuration.

# systemctl restart sssd.service

# getent passwd <username>
username:*:12345:12345:Example User:/home/username:/usr/bin/bash

# gdbus call \
        --system \
        --dest org.freedesktop.sssd.infopipe \
        --object-path /org/freedesktop/sssd/infopipe/Users/example_2ecom/12345 \
        --method org.freedesktop.DBus.Properties.Get \
        "org.freedesktop.sssd.infopipe.Users.User" "extraAttributes"
(<{'mail': ['']}>,)
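
If you want to automate this check, a small (hypothetical) helper can pull the mail values out of the gdbus output shown above; an empty list means the attribute is not being served yet:

```python
import re

def extract_mail(gdbus_output: str) -> list:
    """Extract the 'mail' values from gdbus extraAttributes output (sketch)."""
    m = re.search(r"'mail':\s*\[([^\]]*)\]", gdbus_output)
    if not m:
        return []
    # Split the list body and strip the surrounding quotes; drop empties
    return [v.strip().strip("'") for v in m.group(1).split(",")
            if v.strip().strip("'")]
```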

Configuring Apache

Now that SSSD is set up and successfully serving extended attributes, we need to configure the web server to ask for them and to insert them in the correct places.

First, we need to install and enable the mod_lookup_identity module for Apache (see the note in the “Prerequisites” section for installing on RHEL 7):

# yum -y install mod_lookup_identity

Second, we need to enable the module so that Apache will load it. We need to modify /etc/httpd/conf.modules.d/55-lookup_identity.conf and uncomment the line:

LoadModule lookup_identity_module modules/

Next, we need to let SELinux know that it’s acceptable for Apache to connect to SSSD over D-BUS, so we’ll set an SELinux boolean:

# setsebool -P httpd_dbus_sssd on

Then we’ll edit /etc/httpd/conf.d/openshift-proxy.conf and add the following lines inside the <ProxyMatch /oauth/authorize> section:

  <ProxyMatch /oauth/authorize>
    AuthName openshift

    LookupOutput Headers
    LookupUserAttr mail X-Remote-User-Email
    LookupUserGECOS X-Remote-User-Display-Name

    RequestHeader set X-Remote-User %{REMOTE_USER}s env=REMOTE_USER

Then restart Apache to pick up the changes.

# systemctl restart httpd.service

Configuring OpenShift

The proxy is now all set, so it’s time to tell OpenShift where to find these new attributes during login. Edit the /etc/origin/master/master-config.yaml file and add the following lines to the identityProviders section:

  - name: sssd
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: RequestHeaderIdentityProvider
      challengeURL: "${query}"
      loginURL: "${query}"
      clientCA: /home/sgallagh/workspace/openshift/configs/
      headers:
      - X-Remote-User
      emailHeaders:
      - X-Remote-User-Email
      nameHeaders:
      - X-Remote-User-Display-Name

Go ahead and launch OpenShift with this updated configuration and log in to the web console as a new user. You should see the user’s full name appear in the upper-right of the screen. You can also verify with oc get identities -o yaml that both email addresses and full names are available.

Debugging Notes

OpenShift currently only saves these attributes to the user at the time of the first login and doesn’t update them again after that. So while you are testing (and only while testing), it’s advisable to run oc delete users,identities --all to clear the identities out so you can log in again.

OpenShift and SSSD Part 2: LDAP Form Authentication


This is the second post in a series on setting up advanced authentication mechanisms with OpenShift Origin. This entry will build upon the foundation created earlier, so if you haven’t already gone through that tutorial, start here. Note that some of the content on that page has changed since it was first published to ensure that this second part is easier to set up, so make sure to double-check your configuration.

Configuring Form-based Authentication

In this tutorial, I’m going to describe how to set up form-based authentication to use when signing into the OpenShift Origin web console. The first step is to prepare a login page. The OpenShift upstream repositories have a handy template for forms, so we will copy that down to our authenticating proxy:

# curl -o /var/www/html/login.html \

You may edit this login HTML however you prefer, but if you change the form field names, you will need to update those in the configuration below as well.

Next, we need to install another Apache module, this time for intercepting form-based authentication.

# yum -y install mod_intercept_form_submit

Then we need to modify /etc/httpd/conf.modules.d/55-intercept_form_submit.conf and uncomment the LoadModule line.

Next, we’ll add a new section to our openshift-proxy.conf inside the <VirtualHost *:443> block.

  <Location /login-proxy/oauth/authorize>
    # Insert your backend server name/ip here.

    InterceptFormPAMService openshift-proxy-pam
    InterceptFormLogin httpd_username
    InterceptFormPassword httpd_password

    RewriteCond %{REQUEST_METHOD} GET
    RewriteRule ^.*$ /login.html [L]

This tells Apache to listen for POST requests on the /login-proxy/oauth/authorize endpoint and pass the username and password over to the openshift-proxy-pam PAM service, just like in the challenging-proxy example in the first entry of this series. This is all we need to do on the Apache side of things, so restart the service and move back over to the OpenShift configuration.
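
If you want to exercise the endpoint from a test script, the POST that mod_intercept_form_submit intercepts can be sketched like this. The hostname is a placeholder, and the field names must match the InterceptFormLogin/InterceptFormPassword directives above:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_login_request(base_url: str, username: str, password: str) -> Request:
    """Build the form POST that the intercept module expects (sketch)."""
    # Field names must match InterceptFormLogin / InterceptFormPassword
    body = urlencode({"httpd_username": username,
                      "httpd_password": password})
    return Request(
        base_url + "/login-proxy/oauth/authorize",
        data=body.encode("utf-8"),
        method="POST",
    )
```

Sending it with urllib.request.urlopen (against your real proxy, with its CA trusted) should return the OAuth response on success.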

In the master-config.yaml, update the identityProviders section as follows:

  - name: any_provider_name
    challenge: true
    login: false
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: RequestHeaderIdentityProvider
      challengeURL: "${query}"
      loginURL: "${query}"
      clientCA: /etc/origin/master/proxy/proxyca.crt
      headers:
      - X-Remote-User

Now restart OpenShift with the updated configuration. You should be able to browse to the login form and sign in with your LDAP credentials.

My Fedora Badges intern
For the past two weeks I was lucky to have an intern, who worked on Fedora Badges. Badges are a great way to start as a Fedora design contributor, as they have a low barrier to entry. Templates are ready, graphics are available to download, and all the resources are available here. All that’s left is to read through the guidelines, check out the already existing badges, install Inkscape – and create! The best thing is: you get badges for making badges :)
I hope Simon had a lot of fun working with us. He even created his own avatar for the badges profile – check it out! Here are some of the badges he’s worked on:
A couple of those are part of a series. The backgrounds might be a little far from the guide but, as he explained, it just said ‘green’, so he naturally went with whatever green he fancied at the moment. I’m sure we can fix these little issues in no time. Simon also worked on other tasks, such as helping me out with a logo concept for Fedora Development Portal, as well as some graphics for the Brno Newsletter. All in all, it was amazing to have an intern and a great experience for both of us.
I wish Simon good luck in his future endeavors and hope he stays a contributor!

State of syslog-ng 3.8 rpm packaging
While syslog-ng 3.8 does not yet have an alpha release, it already has many interesting features. As it is still under heavy development, we can’t recommend it for production use. On the other hand, any feedback is very welcome. There are many new features to try, here are just some highlights, without aiming for completeness: […]
Font improvements in Fedora 24 Workstation

Cantarell is the default font in Fedora Workstation. It comes courtesy of the GNOME desktop community, which designed and chose Cantarell. Recently the maintainers of Cantarell have done a great deal of work on the typeface to improve readability and appearance. There are now two maintainers, Jakub Steiner and Nikolaus Waxweiler, who both contribute to the GNOME desktop environment as well as Cantarell. Here’s a sample of the typeface:

A little sample.

These improvements will be shipped with Fedora 24, which is scheduled for final release in June.

What happened to Cantarell?

We talked a bit with the maintainers to get some more information. According to Steiner, maintenance of the Cantarell font had become quite stagnant. This neglect was especially apparent in the font hinting.

Cantarell hinting work.

Hinting is a process that helps make a font more readable. Hinting requires precision when modifying a typeface. The developer must make use of zones that affect how the font is adjusted at different sizes. When a font is correctly designed, these zones match your font type and make it look clear and readable. To achieve this result, the design must be consistent and regular.

For instance, the horizontal zones defined in a font (see the image above) are called “blue zones.” If the lines of your typeface go outside of the blue zones, the hinting algorithm simply ignores them. This results in odd or inconsistent type appearance at different resolution or letter sizes.

Waxweiler found that Cantarell was in a somewhat poor state, with inconsistent diacritics and clunky appearance at some common resolutions.

Many fonts, one system.

Waxweiler set about cleaning up the Cantarell font, fixing for example the blue zones to achieve a more harmonious look. He also addressed a number of issues concerning Cyrillic. In addition, Cantarell now has all its diacritics and accented glyphs fixed in the font. Central European users who make use of these marks will have an improved experience and see characters correctly now.

Will I see the difference?

The new version of Cantarell is already available in Fedora 24. The adjustments in the new release provide a more pleasant default experience for Fedora users.

What if you’ve changed your font settings, for instance with the GNOME Tweak Tool? If you upgrade to Fedora 24, the Cantarell font will still be upgraded. But its appearance is subject to your font settings, which are correctly retained when you upgrade.

You can restore these settings to their defaults using the Tweak Tool, or you can use these commands in a terminal:

gsettings reset org.gnome.desktop.interface font-name
gsettings reset org.gnome.settings-daemon.plugins.xsettings antialiasing
gsettings reset org.gnome.settings-daemon.plugins.xsettings hinting
gsettings reset org.gnome.settings-daemon.plugins.xsettings rgba-order

If you have any issues with this font, please file a bug in the GNOME bug tracker.

Image courtesy Marcus DePaula – originally posted to Unsplash as Untitled

PHP version 5.5.36, 5.6.22 and 7.0.7

RPMs of PHP version 7.0.7 are available in the remi-php70 repository for Fedora and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 5.6.22 are available in the remi repository for Fedora ≥ 21 and in the remi-php56 repository for Fedora and Enterprise Linux.

RPMs of PHP version 5.5.36 are available in the remi repository for Fedora 20 and in the remi-php55 repository for Enterprise Linux.

PHP version 5.4 has reached its end of life and is no longer maintained by the project. Given the very large number of downloads by users of my repository, this version is still available in the remi repository for Enterprise Linux (RHEL, CentOS...) and includes security fixes (from version 5.5.35). Upgrading to a maintained version is strongly recommended.

These versions are also available as Software Collections.

These versions fix some security bugs, so updating is strongly recommended.

Version announcements:

The 5.5.27 release was the last planned release containing regular bug fixes. All subsequent releases contain only security-relevant fixes, for a term of one year (until July 2016).

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.0 installation (simplest):

yum-config-manager --enable remi-php70
yum update

Parallel installation of version 7.0 as Software Collection (x86_64 only):

yum install php70

Replacement of default PHP by version 5.6 installation (simplest):

yum-config-manager --enable remi-php56
yum update

Parallel installation of version 5.6 as Software Collection (x86_64 only):

yum install php56

Replacement of default PHP by version 5.5 installation (simplest):

yum-config-manager --enable remi-php55
yum update

Parallel installation of version 5.5 as Software Collection (x86_64 only):

yum --enablerepo=remi install php55

And soon in the official updates:

To be noted:

  • EL7 RPMs are built using RHEL 7.2
  • EL6 RPMs are built using RHEL 6.7
  • A lot of new extensions are also available; see the PECL extension RPM status page

For more information, read:

Base packages (php)

Software Collections (php55 / php56 / php70)

Running a Hackerspace

I wrote parts of this post after our last monthly assembly at Athens Hackerspace. Most of the hackerspace operators deal with this monthly meeting routinely, and we often forget what we have achieved during the last 5 years and how many great things this physical space enabled. But this post is not about our hackerspace. It's an effort to distance myself and try to write about the experience of running a hackerspace.


Yes, it's a community

The kind of people a space attracts is the kind of people it "wants" to attract. That sounds kind of odd, right? How can a physical space want anything? At some point (the sooner the better) the people planning to open and run a hackerspace should realize that they shape the form of the community that will occupy and utilize the space. They are already a community even before they start paying the rent. But a community is not a random group of people who just happen to be in the same place. They are driven by the same vision, common goals, similar means, etc. Physical spaces don't have a vision. A community does. And that's a common struggle and misconception that I came across so many times. You can't build a hackerspace with a random group of people. You need to build a community first. And to do so you need to define that common vision beforehand. We did that. Our community is not just the space operators. It's everyone who embraces our vision and occupies the space.

Yes, it's political

There is guilt behind every attempt to go political. Beyond the dominant apolitical newspeak that surrounds us and the attempt to find affiliations in anything political, there is still space to define yourself. It's not necessarily disruptive. After all, it's just a drop in the ocean. But this drop is an autonomous zone where a collective group deliberately constructs a continuous situation in which we challenge the status quo. Being not-for-profit is political. Choosing to change the world one bit at a time, instead of running another seed round, is political. Going open source and re-shaping the way we code, we manufacture, we share, we produce and, in the end, the way we build our own means of production, is political. Don't hurry to label it. Let it be for now. But it's a choice. Many spaces have chosen otherwise, operating as tech shops or as event hosts for marketing presentations around new commercial technologies and products, or even running as for-profit companies, declaring no political intention. These choices are also political. Acceptance comes after denial.

Rules vs Principles

You'll be tempted to define many ground rules for how you want things to operate. Well, I have two pieces of advice. Never establish a rule for a problem that has not yet emerged. You'll create more friction than whatever problem you are trying to solve. Always prefer principles over rules. You don't need to over-specify things. Given the trust between the people of a hackerspace, there is always common sense about how a principle applies.

Consensus vs Voting

All hackerspaces should have an assembly of some form to make decisions. Try to reach consensus through discussion and arguments. There will be cases where it is hard to reach a unanimous decision on a controversial matter. Objections should be backed with arguments, otherwise they should be disregarded. Voting should always be the last resort. Remember, the prospect of a vote at the end of a discussion kills many good arguments in the process. Consensus doesn't mean unanimity.


Some call it lazy consensus. If you have an idea for a project, you don't need permission. Don't wait for someone else to organize things for you. Just reach out to the people who are interested in your idea and start hacking.

Code of conduct

You'll find many approaches here. We decided to keep it simple and, most importantly, to stick to positive language. Describe what is accepted behavior inside your community, instead of stating all the behaviors you find wrong (you'll miss something). Emphasize excellence over Wheaton's Law. "Be polite to everyone. Respect all people that you meet at the space, as well as the space itself." is what we wrote on our frontpage. It may not be stated explicitly, but any form of discrimination is not accepted behavior. Being excellent to everyone means that you accept the fact that all people are equal. Regardless of nationality (whatever that means) or sexual orientation, you should be polite to all people.


Hackability

This is my favorite word when it comes to hackerspaces. I'm sure most people reading this are familiar with Free Software and its four-freedoms definition. Let me remind you of one of the freedoms:

The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.

Something that usually escapes the attention of many people is that the availability of source code is not the important thing here. The important thing is the freedom to study and change. Source code availability is a prerequisite for achieving that freedom.

Same happens with hackability. Remember the Hackerspace definition as it stands on the wiki:

Hackerspaces are community-operated physical places, where people share their interest in tinkering with technology, meet and work on their projects, and learn from each other.

So again, the important thing here is that you tinker with/hack things. Many people have misinterpreted this to mean that, since there is no mention of Open Source or Free Software in that definition, these things are not important. Again, these are the requirements. In order to hack something, you must be granted the freedom to study and change it. Access to the source code is a prerequisite for this. For those who prefer graphical representations:


Mind the "principles" next to Free Software, since we are not just talking about software here. This also applies to hardware (hack beaglebones, not makey makey), data (hack OpenStreetMap, not Google Maps), content (hack Wikipedia, not Facebook) and of course software again (teach Inkscape, not Illustrator).

Sharing your knowledge around a specific technology or tool freely is not enough. Actually this notion is often used and more often abused to justify teaching things that nobody can hack. You are a hackerspace, act like one. All the things taking place in a hackerspace, from the tiniest piece of code to the most emblematic piece of art, should by definition and by default be hackable.

Remember, do-ocracy

I hope it's obvious after this post that building and running a hackerspace is a collective effort. Find the people who share the same vision as you and build a hackerspace. Don't wait for someone else to do it. Or, if you are lucky, join an existing one that is already run by that vision. It's not that hard. After all, the only thing we want to do is change the world. How hard can it be?

Comments and reactions on Diaspora or Twitter

May 26, 2016

New tutorial about Blender 3D.
Today I made a new tutorial about Blender 3D.
You can read this tutorial here.
I also have a Facebook page with many tutorials about Linux, Windows, Blender 3D, and more, here.
Thank you for your support. I hope this helps you. Regards.
Java is Fair Game!

A jury found that using the declaring lines of code and their structure, sequence, and organization from Java constitutes fair use. That is a great outcome of the terrible lawsuit Oracle filed some years ago trying to destroy free Java. They started by trying to be a patent troll, but when that failed they tried to pervert copyright law to make it illegal to reimplement APIs. Oracle’s behavior is unethical and greedy. Luckily this jury stopped them for now.

OpenShift and SSSD Part 1: Basic LDAP Authentication


OpenShift provides a fairly simple and straightforward authentication provider for use with LDAP setups. It has one major limitation, however: it can only connect to a single LDAP server. This can be problematic if that LDAP server becomes unavailable for any reason. When this happens, end-users get very unhappy.

Enter SSSD. Originally designed to manage local and remote authentication to the host OS, it can now be configured to provide identity, authentication and authorization services to web services like OpenShift as well. It provides a multitude of advantages over the built-in LDAP provider; in particular it has the ability to connect to any number of failover LDAP servers as well as to cache authentication attempts in case it can no longer reach any of those servers.

These advantages don’t come without a cost, of course: the setup of this configuration is somewhat more advanced, so I’m writing up this guide to help you get it set up. Rather than adding a few lines to the master-config.yaml in OpenShift and calling it a day, we are going to need to set up a separate authentication server that OpenShift will talk to. This guide will describe how to do it on a dedicated physical or virtual machine, but the concepts should also be applicable to loading up such a setup in a container as well. (And in the future, I will be looking into whether we could build such a static container right into OpenShift, but for now this document will have to suffice.) For this guide, I will use the term VM to refer to either type of machine, simply because it’s shorter to type and read.

This separate authentication server will be called the “authenticating proxy” from here on out and describes a solution that will provide a specialized httpd server that will handle the authentication challenge and return the results to the OpenShift Server. See the OpenShift documentation for security considerations around the use of an authenticating proxy.

Formatting Notes

  • If you see something in italics within a source-code block below, you should replace it with the appropriate value for your environment.
  • Source-code blocks with a leading ‘#’ character indicate a command that must be executed as the “root” user, either by logging in as root or using the sudo command.
  • Source-code blocks with a leading ‘$’ character indicate a command that may be executed by any user (privileged or otherwise). These commands are generally for testing purposes.


You will need to know the following information about your LDAP server to follow the directions below:

  • Is the directory server powered by FreeIPA, Active Directory or another LDAP solution?
  • What is the URI for the LDAP server?
  • Where is the CA certificate for the LDAP server?
  • Does the LDAP server use RFC 2307 or RFC 2307bis for user groups?
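
Once you have the LDAP URI, a quick sketch to pull out the host and port for later connectivity checks (assuming the standard defaults of 389 for ldap and 636 for ldaps):

```python
from urllib.parse import urlparse

def ldap_host_port(uri: str):
    """Split an LDAP URI into (host, port), applying scheme defaults."""
    parsed = urlparse(uri)
    default = 636 if parsed.scheme == "ldaps" else 389
    return parsed.hostname, parsed.port or default
```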

Prepare VMs:

  • A VM to use as the authenticating proxy. This machine must have at least SSSD 1.12.0 available, which means a fairly recent operating system. In these examples, I will be using a clean install of Red Hat Enterprise Linux 7.2 Server.
  • A VM to use to run OpenShift

(These machines *can* be configured to run on the same system, but for the purposes of this tutorial, I am keeping them separate.)

Phase 1: Certificate Generation

In order to ensure that communication between the authenticating proxy and OpenShift is trustworthy, we need to create a set of TLS certificates that we will use during the other phases of this setup. For the purposes of this demo, we will start by using the auto-generated certificates created as part of running

# openshift start \
    --public-master= \

Among other things, this will generate /etc/origin/master/ca.{crt|key}. We will use this signing certificate to generate keys to use on the authenticating proxy.

# mkdir -p /etc/origin/proxy/
# oadm ca create-server-cert \
    --cert='/etc/origin/proxy/' \
    --key='/etc/origin/proxy/' \, \
    --signer-cert=/etc/origin/master/ca.crt \
    --signer-key='/etc/origin/master/ca.key' \

For the hostnames, ensure that any hostnames and interface IP addresses that might need to access the proxy are listed, otherwise the HTTPS connection will fail.

Next, we will generate the API client certificate that the authenticating proxy will use to prove its identity to OpenShift (this is necessary so that malicious users cannot impersonate the proxy and send fake identities). First, we will create a new CA to sign this client certificate.

# oadm ca create-signer-cert \
  --cert='/etc/origin/proxy/proxyca.crt' \
  --key='/etc/origin/proxy/proxyca.key' \
  --name="openshift-proxy-signer@$(date +%s)" \

(The date +%s in that block is used to make the signer name unique. You can use any name you prefer, however.)

# oadm create-api-client-config \
    --certificate-authority='/etc/origin/proxy/proxyca.crt' \
    --client-dir='/etc/origin/proxy' \
    --signer-cert='/etc/origin/proxy/proxyca.crt' \
    --signer-key='/etc/origin/proxy/proxyca.key' \
    --signer-serial='/etc/origin/proxy/proxyca.serial.txt' \
# cat /etc/origin/proxy/system\:proxy.crt \
      /etc/origin/proxy/system\:proxy.key \
      > /etc/origin/proxy/authproxy.pem
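
Since a malformed authproxy.pem is a common source of TLS errors later on, a small (hypothetical) sanity check can confirm the bundle contains both a certificate and a private key before you copy it anywhere:

```python
def pem_bundle_ok(pem_text: str) -> bool:
    """Check that a PEM bundle holds at least one certificate and one key."""
    has_cert = "-----BEGIN CERTIFICATE-----" in pem_text
    has_key = any(
        marker in pem_text
        for marker in ("-----BEGIN PRIVATE KEY-----",
                       "-----BEGIN RSA PRIVATE KEY-----")
    )
    return has_cert and has_key
```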

Phase 2: Authenticating Proxy Setup

Step 1: Copy certificates

From the OpenShift master, securely copy the necessary certificates to the proxy machine:

# scp /etc/origin/master/ca.crt \

# scp /etc/origin/proxy/ \
      /etc/origin/proxy/authproxy.pem \

# scp /etc/origin/proxy/ \

Step 2: SSSD Configuration

Install a new VM with a recent operating system (in order to use the mod_lookup_identity module later, it will need to be running SSSD 1.12.0 or later). In these examples, I will be using a clean install of Red Hat Enterprise Linux 7.2 Server.

First thing is to install all of the necessary dependencies:

# yum install -y sssd \
                 sssd-dbus \
                 realmd \
                 httpd \
                 mod_session \
                 mod_ssl \

This will give us the SSSD and the web server components we will need. The first step here will be to set up SSSD to authenticate this VM against the LDAP server. If the LDAP server in question is a FreeIPA or Active Directory environment, then realmd can be used to join this machine to the domain. This is the easiest way to get up and running.

realm join

If you aren’t running a domain, then your best option is to use the authconfig tool (or follow the many other tutorials on the internet for configuring SSSD for identity and authentication).

# authconfig --update --enablesssd --enablesssdauth \
             --enableldaptls \

This should create /etc/sssd/sssd.conf with most of the appropriate settings. (Note: RHEL 7 appears to have a bug wherein authconfig does not create the /etc/openldap/cacerts directory, so you may need to create it manually before running the above command.)

If you are interested in using SSSD to manage failover situations for LDAP, this can be configured simply by adding additional entries in /etc/sssd/sssd.conf on the ldap_uri line. Systems enrolled with FreeIPA will automatically handle failover using DNS SRV records.
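
A sketch of what such a failover configuration looks like in /etc/sssd/sssd.conf (the domain and hostnames here are placeholders):

```ini
[domain/example.com]
# The first server is primary; the rest are tried in order on failure.
ldap_uri = ldap://ldap1.example.com, ldap://ldap2.example.com
```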

Finally, restart SSSD to make sure that all of the changes are applied properly:

# systemctl restart sssd.service

Now, test that the user information can be retrieved properly:

$ getent passwd <username>
username:*:12345:12345:Example User:/home/username:/usr/bin/bash

At this point, it is wise to attempt to log into the VM as an LDAP user and confirm that the authentication is properly set up. This can be done via the local console or a remote service such as SSH. (Later, you can modify your /etc/pam.d files to disallow this access if you prefer.) If this fails, consult the SSSD troubleshooting guide.

Step 3: Apache Configuration

Now that we have the authentication pieces in place, we need to set up Apache to talk to SSSD. First, we will create a PAM stack file for use with Apache. Create the /etc/pam.d/openshift file and add the following contents:

auth required
account required

This will tell PAM (the pluggable authentication modules system) that when an authentication request is issued for the “openshift” stack, it should use SSSD to determine authentication and access control.

Next we will configure the Apache httpd.conf. (Taken from the OpenShift documentation and modified for SSSD.) For this tutorial, we’re only going to set up the challenge authentication (useful for logging in with oc login and similar automated tools). A future entry in this series will describe setup to use the web console.

First, create the new file openshift-proxy.conf in /etc/httpd/conf.d (substituting the correct hostnames where indicated):

LoadModule request_module modules/
LoadModule lookup_identity_module modules/
# Nothing needs to be served over HTTP.  This virtual host simply redirects
# all requests to HTTPS.
<VirtualHost *:80>
  DocumentRoot /var/www/html
  RewriteEngine              On
  RewriteRule     ^(.*)$     https://%{HTTP_HOST}$1 [R,L]
</VirtualHost>

<VirtualHost *:443>
  # This needs to match the certificates you generated.  See the CN and X509v3
  # Subject Alternative Name in the output of:
  # openssl x509 -text -in /etc/pki/tls/certs/

  DocumentRoot /var/www/html
  SSLEngine on
  SSLCertificateFile /etc/pki/tls/certs/
  SSLCertificateKeyFile /etc/pki/tls/private/
  SSLCACertificateFile /etc/pki/CA/certs/ca.crt

  # Send logs to a specific location to make them easier to find
  ErrorLog logs/proxy_error_log
  TransferLog logs/proxy_access_log
  LogLevel warn
  SSLProxyEngine on
  SSLProxyCACertificateFile /etc/pki/CA/certs/ca.crt
  # It's critical to enforce client certificates on the Master.  Otherwise
  # requests could spoof the X-Remote-User header by accessing the Master's
  # /oauth/authorize endpoint directly.
  SSLProxyMachineCertificateFile /etc/pki/tls/certs/authproxy.pem

  # Send all requests to the console
  RewriteEngine              On
  RewriteRule     ^/console(.*)$     https://%{HTTP_HOST}:8443/console$1 [R,L]

  # In order to use the challenging-proxy, an X-Csrf-Token must be present.
  RewriteCond %{REQUEST_URI} ^/challenging-proxy
  RewriteCond %{HTTP:X-Csrf-Token} ^$ [NC]
  RewriteRule ^.* - [F,L]

  <Location /challenging-proxy/oauth/authorize>
    # Insert your backend server name/ip here.
    AuthType Basic
    AuthBasicProvider PAM
    AuthPAMService openshift
    Require valid-user
  </Location>

  <ProxyMatch /oauth/authorize>
    AuthName openshift
    RequestHeader set X-Remote-User %{REMOTE_USER}s
  </ProxyMatch>
</VirtualHost>

RequestHeader unset X-Remote-User

Then we need to tell SELinux that it’s acceptable for Apache to contact the PAM subsystem, so we set a boolean:

# setsebool -P allow_httpd_mod_auth_pam on

At this point, we can start up Apache.

# systemctl start httpd.service

Phase 3: OpenShift Configuration

This describes how to set up an OpenShift server from scratch in an “all in one” configuration. For more complicated (and interesting) setups, consult the official OpenShift documentation.

First, we need to modify the default configuration to use the new identity provider we just created. We’ll start by modifying the /etc/origin/master/master-config.yaml file. Scan through it and locate the identityProviders section and replace it with:

  identityProviders:
  - name: any_provider_name
    challenge: true
    login: false
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: RequestHeaderIdentityProvider
      challengeURL: "${query}"
      clientCA: /etc/origin/master/proxy/proxyca.crt
      headers:
      - X-Remote-User

Now we can start openshift with the updated configuration:

# openshift start \
    --public-master= \
    --master-config=/etc/origin/master/master-config.yaml \

Now you can test logins with

oc login

It should now be possible to log in with only valid LDAP credentials. Stay tuned for further entries in this series where I will teach you how to set up a “login” provider for authenticating the web console, how to retrieve extended user attributes like email address and full name from LDAP, and also how to set up automatic single-sign-on for users in a FreeIPA or Active Directory domain.

 Updates 2016-05-27: There were some mistakes in the httpd.conf as originally written that made it difficult to set up Part 2. They have been retroactively corrected. Additionally, I’ve moved the incomplete configuration of extended attributes out of this entry and will reintroduce them in a further entry in this series.

Event Report: OSCAL 2016

Last weekend, I attended OSCAL 2016, a conference about open source in Tirana, Albania. I was looking forward to the conference very much because the Fedora community in Albania had been very active recently. I’d met some of the Albanian community members at other conferences, but I was curious to meet others.

The conference really surprised me with its hospitality, which was second to none. The organizers provided us with a lot of useful information and arranged transportation from the airport to the hotel. A really nice touch was the welcome package waiting for every speaker in his/her hotel room. I haven’t seen anything like this at any conference before, and it must have been a real effort because the speakers were spread among several hotels in the city.

The activity of the Fedora community in Albania is showing real results. The user base of Fedora among open source enthusiasts in Albania seems to be growing really fast. Fedora was by far the most popular distribution among OSCAL visitors and the only one visible there. We had a booth, many Fedora-related talks, and several ambassadors around.


Fedora booth

I had two presentations. One was supposed to be a 30-minute workshop, “Best Practises in Translating Software”. 30 minutes is too short for a proper workshop, so it was more of a practical talk. It was targeted at beginning translators, because I know there are quite a few people starting with that in Albania, but when I asked the audience who translates software, just two hands rose; the others were simply interested in the area. My second presentation was about Fedora Workstation (who it is for, what we have achieved, what we’re brewing). The room was pretty full and there were quite a lot of questions, which is a sign that it was interesting for the audience.

At the end of the second day, there was a Fedora community meetup. There were experienced ambassadors from abroad (me, Giannis, Robert Scheck, Ardian,…), local ambassadors (Jona), other local contributors (Elio, Boris,…), and other people who were interested in joining the Fedora Project. We discussed what the Fedora Project can do for the local community to keep growing. We also talked about translations of Fedora and GNOME to Albanian. There are many new translators, but the coordinators who approve new translations are either inactive or reluctant to accept new contributions. Six years ago, I helped with a similar situation in the Slovak translation team, so I gave the local contributors advice on how to start the process of resolving it.

A couple of community members were interested in becoming ambassadors. There were three ambassador mentors present (me, Robert, and Giannis), and we shared with them what our expectations are and that there is no limit on ambassadors per country. If there are enough active people, there can be even 10 ambassadors in Albania. We as mentors just have to make sure that the candidates are ready and willing to contribute in the longer term.

What was also very special about OSCAL was the number of women at the conference. Over 50% of attendees and 70% of organizers were women. That’s something you don’t see anywhere else: they’ve naturally achieved a gender diversity that communities anywhere else in the world are struggling to achieve. When I asked why that is, I was told that it’s because there are many women studying computer science. One girl told me that in her study program there are 190 women and only 10 men. Why are there so many women studying computer science? I was told it’s because girls are encouraged to pursue this career path, and IT is considered one of the few industries where you can get a job and earn good money. But I was also told that there are many women in other technical fields such as math and civil engineering, so it’s not just that IT is the only attractive field there.

I’d like to thank the Fedora Project for sponsoring my flight ticket, and I hope Fedora will be even more visible at the next OSCAL.


Mozilla booth

All systems go
New status good: Everything seems to be working. for services: Fedora Infrastructure Cloud, COPR Build System
There are scheduled downtimes in progress
New status scheduled: Scheduled maintenance in progress for services: Fedora Infrastructure Cloud, COPR Build System
Take a test drive of Fedora 24 Cloud on Openstack

There is no need to wait for the Fedora 24 release next month to take a look at the upcoming Fedora 24 Cloud images. Over on his blog, Major Hayden walks through the simple steps to get the Fedora 24 Cloud Beta running on OpenStack.

Testing out a Beta release of Fedora is a good way to take an early look at the new features in the release, and to prepare for migrating your systems over when Fedora 24 is released next month.


May 25, 2016

Fedora 24 alpha - VirtualBox and FreeCAD software.

Today I tested FreeCAD under Fedora 24 alpha in VirtualBox.
First, I updated my system and installed FreeCAD:

sudo dnf update --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf install freecad freecad-data

About this software, the official webpage tells us: FreeCAD is a parametric 3D modeler made primarily to design real-life objects of any size. Parametric modeling allows you to easily modify your design by going back into your model history and changing its parameters. FreeCAD is open-source and highly customizable, scriptable and extensible.

It works well, and you can read more here.
I had some problems with sound under VirtualBox.

I think something blocks the memory when I try to take screenshots.

Evolution: forcing plain text display
Please also note the remarks on the HowTos!

If, like me, you have a certain aversion to HTML messages, you can force Evolution to always prefer the plain text part of a mail, provided one is present.

To achieve this, open Evolution’s preferences and, under “Mail Preferences” in the “HTML Messages” tab, set “HTML Mode” to “Always show plain text”.

From now on, for mails that contain a plain text part, Evolution will display it instead of the HTML part. For pure HTML mails without a plain text part, however, this setting has no effect, so you will not end up with an empty mail that has the HTML part as an attachment.

[GSoC '16] Let the Coding Begin!

The coding period of GSoC has finally started. It started on the 23rd of May, but for me it only started today, as I had taken a 2-day excuse (exams, sigh). As I mentioned in my earlier post, I will be working with the Fedora Project to build metrics tools in Python and will also be helping the CommOps team refine the Fedora onboarding process.

My internship this summer will be mostly Python-centric and will involve a lot of scraping, automation, analytics, and data crunching. I am currently working on a gsoc-statistics tool for Fedora which will auto-magically generate weekly reports given a Fedora FAS username. Instead of pushing code to GitHub, which I do quite often, I have decided to work with Pagure, Fedora's own repository tracker. There's a GSoC project to improve Pagure as well ;)
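As a rough illustration of the kind of data crunching involved, here is a small sketch (the function name and message layout are my assumptions, loosely modeled on the JSON that Fedora's datagrepper API returns) that tallies a user's activity per message topic:

```python
from collections import Counter

def activity_by_topic(messages):
    """Count messages per fedmsg topic.

    `messages` is a list of dicts shaped like datagrepper's /raw
    output; only the "topic" key is used here.
    """
    return Counter(msg["topic"] for msg in messages)

# Hypothetical sample of fetched messages for one FAS user
sample = [
    {"topic": "org.fedoraproject.prod.git.receive"},
    {"topic": "org.fedoraproject.prod.wiki.article.edit"},
    {"topic": "org.fedoraproject.prod.git.receive"},
]

print(activity_by_topic(sample))
```

A weekly report generator could run a tally like this over each week's worth of fetched messages.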

To start off with the work, I decided to draw a rough skeleton of what I'd like the stats-tool to look like. I usually plan everything on paper, but since I'm home, I had the luxury of using the whiteboard. Here's what my whiteboard looks like now:


This is just the beginning, and I'm excited already. Hoping to have a great summer this year :)

Oh, and if you are a FOSS enthusiast and would like to start contributing to Fedora, do take a look at WhatCanIDoForFedora. If you have any other Fedora related questions, feel free to ping me on IRC / e-mail. I go by the nick skamath on Freenode. You can find me in the #fedora-commops channel.

Blog backlog, Post 3, DisplayLink-based USB3 graphics support for Fedora
Last year, after DisplayLink released the first version of the supporting tools for their USB3 chipsets, I tried it out on my Dell S2340T.

As I wanted a clean way to test new versions, I took Eric Nothen's RPMs and updated them with newer versions, automating the creation of 32- and 64-bit x86 packages.

The RPM contains 3 parts: evdi, a GPLv2 kernel module that creates a virtual display; the LGPL library to access it; and a proprietary service which comes with "firmware" files.

Eric's initial RPMs used the precompiled, proprietary bits, compiling only the kernel module with dkms when needed. I changed this, compiling the library from the upstream repository and using the minimal amount of pre-compiled binaries.

This package supports quite a few OEM devices, but does not work correctly with Wayland, so you'll need to disable Wayland support in /etc/gdm/custom.conf if you want it to work at the login screen and to avoid having to restart the displaylink.service systemd service after logging in.

 Plugged in via DisplayPort and USB (but I can only see one at a time)

The source for the RPM are on GitHub. Simply clone and run make in the repository to create 32-bit and 64-bit RPMs. The proprietary parts are redistributable, so if somebody wants to host and maintain those RPMs, I'd be glad to pass this on.
Here I am casually using GDB with Infinity


The Answer is always the same: Layers of Security

There is a common misconception that now that containers support seccomp we no longer need SELinux to help protect our systems. WRONG. The big weakness in containers is that the container can interact with the host kernel and the host file systems. Securing container processes is all about shrinking the attack surface on the host OS, and more specifically on the host kernel.

seccomp does a great job of shrinking the attack surface on the kernel. The idea is to limit the number of syscalls that container processes can use. It is an awesome feature. For example, on an x86_64 machine there are around 650 system calls. If the Linux kernel has a bug in any one of these syscalls, a process could get the kernel to turn off security features and take over the system, i.e. it would break out of confinement. If your container does not run 32-bit code, you can turn on seccomp and eliminate all 32-bit x86 syscalls, basically cutting the number of available syscalls in half. This means that if the kernel had a bug in a 32-bit syscall that allowed a process to take over the system, that syscall would not be available to the processes in your container, and the container would not be able to break out. We also eliminate a lot of other syscalls that we do not expect processes inside of a container to call.
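For illustration, a docker seccomp profile is just a JSON whitelist of syscalls. A deliberately tiny, incomplete sketch of the format (field names as in the Docker 1.10/1.11 era; a real profile whitelists a few hundred calls) looks like this:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {"name": "read",  "action": "SCMP_ACT_ALLOW", "args": []},
    {"name": "write", "action": "SCMP_ACT_ALLOW", "args": []},
    {"name": "open",  "action": "SCMP_ACT_ALLOW", "args": []}
  ]
}
```

Any syscall not listed falls through to defaultAction and is denied with an error.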

But seccomp is not enough

This still means that if a bug remains in the kernel that can be triggered through the roughly 300 remaining syscalls, a container process can still take over the system and/or create havoc. Just having open/read/write/ioctl on things like files and devices could give a container process the ability to break out. And if it breaks out, it can write all over the system.

You could continue to shrink the seccomp syscall table to such a degree that processes cannot escape, but at some point it will also prevent the container processes from getting any real work done.

Defense in Depth

As usual, any single security mechanism by itself will not fully protect your containers. You need lots of security mechanisms to control what a process can do inside and outside a container.

  • Read-Only file systems. Prevent open/write on kernel file systems. Container processes need read access to kernel file systems like /proc, /sys, /sys/fs ... But they seldom need write access.

  • Dropping privileged process capabilities. This can prevent things like setting up the network or mounting file systems, (seccomp can also block some of these, but not as comprehensively as capabilities).

  • SELinux. Controls which file system objects (files, devices, sockets, and directories) a container process can read/write/execute. Since processes in a container need to use the open/read/write/exec syscalls, SELinux controls which file system
    objects they can interact with. I have heard a great analogy: SELinux tells people which people they can talk to; seccomp tells them what they can say.

  • prctl(NO_NEW_PRIVS). Prevents privilege escalation through the use of setuid applications. Running your container
    processes without privileges is always a good idea, and this keeps the processes non-privileged.

  • PID Namespace. Makes it harder to see other processes on the system that are not in your container.

  • Network Namespace. Controls which networks your container processes are able to see.

  • Mount Namespace. Hides large parts of the system from the processes inside of the container.

  • User Namespace. Helps remove remaining system capabilities. It can allow you to have privileges inside of your containers namespaces, but not outside of the container.

  • kvm. If you can find some way to run containers in a kvm/virtualization wrapper, this would be a lot more secure. (ClearLinux and others are working on this).
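To give a feel for how these layers stack, here is an illustrative docker run invocation combining several of them (flag spellings as of roughly Docker 1.10+; my-seccomp.json is a hypothetical custom profile, not a file docker ships):

```shell
# --read-only: read-only root file system
# --cap-drop=ALL: drop all process capabilities
# seccomp=my-seccomp.json: hypothetical custom syscall whitelist
# no-new-privileges: applies prctl(NO_NEW_PRIVS) to the container
docker run --read-only \
           --cap-drop=ALL \
           --security-opt seccomp=my-seccomp.json \
           --security-opt no-new-privileges \
           fedora cat /etc/os-release
```

SELinux labeling and the namespaces come for free from the container runtime itself.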

The more Linux security services that you can wrap around your container processes the more secure your system will be.

Bottom Line

It is the combination of all of these kernel services, along with administrators continuing to maintain good security practices, that keeps your container processes contained.


Non Deterministic docker Networking and Source Based IP Routing


In the open source docker engine, a new networking model was introduced in docker 1.9 that enables the creation of separate "networks" for containers to be attached to. This, however, can lead to a nasty little problem where a port that is supposed to be exposed on the host isn't accessible from the outside. There are a few bug reports related to this issue.


This problem happens because docker wires up all of these containers to each other and the various "networks" using port forwarding/NAT via iptables. Let's take a popular example application which exhibits the problem, the Docker 3rd Birthday Application, and show what the problem is and why it happens.

We'll clone the git repo first and then check out the latest commit as of 2016-05-25:

# git clone
# cd docker-birthday-3/
# git checkout 'master@{2016-05-25}'
HEAD is now at 4f2f1c9... Update Dockerfile

Next we'll bring up the application:

# cd example-voting-app/
# docker-compose up -d 
Creating network "examplevotingapp_front-tier" with the default driver
Creating network "examplevotingapp_back-tier" with the default driver
Creating db
Creating redis
Creating examplevotingapp_voting-app_1
Creating examplevotingapp_worker_1
Creating examplevotingapp_result-app_1

So this created two networks and brought up several containers to host our application. Let's poke around to see what's there:

# docker network ls
NETWORK ID          NAME                          DRIVER
23c96b2e1fe7        bridge                        bridge              
cd8ecb4c0556        examplevotingapp_front-tier   bridge              
5760e64b9176        examplevotingapp_back-tier    bridge              
bce0f814fab1        none                          null                
1b7e62bcc37d        host                          host
# docker ps -a --format "table {{.Names}}\t{{.Image}}\t{{.Ports}}"
NAMES                           IMAGE                         PORTS
examplevotingapp_result-app_1   examplevotingapp_result-app>80/tcp
examplevotingapp_voting-app_1   examplevotingapp_voting-app>80/tcp
redis                           redis:alpine        >6379/tcp
db                              postgres:9.4                  5432/tcp
examplevotingapp_worker_1       manomarks/worker              

So two networks were created and the containers running the application were brought up. Looks like we should be able to connect to the examplevotingapp_voting-app_1 application on the host port 5000 that is bound to all interfaces. Does it work?:

# ip -4 -o a
1: lo    inet scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet brd scope global dynamic eth0\       valid_lft 2921sec preferred_lft 2921sec
3: docker0    inet scope global docker0\       valid_lft forever preferred_lft forever
106: br-cd8ecb4c0556    inet scope global br-cd8ecb4c0556\       valid_lft forever preferred_lft forever
107: br-5760e64b9176    inet scope global br-5760e64b9176\       valid_lft forever preferred_lft forever
# curl --connect-timeout 5 &>/dev/null && echo success || echo failure
# curl --connect-timeout 5 &>/dev/null && echo success || echo failure

Yes and no.

That's right. There is something complicated going on with the networking here. I can connect from localhost but can't connect to the public IP of the host. Docker wires things up in iptables so that things can go into and out of containers following a strict set of rules; see the iptables output if you are interested. This works fine if you only have one network interface per container but can break down when you have multiple interfaces attached to a container.

Let's jump in to the examplevotingapp_voting-app_1 container and check out some of the networking:

# docker exec -it examplevotingapp_voting-app_1 /bin/sh
/app # ip -4 -o a
1: lo    inet scope host lo\       valid_lft forever preferred_lft forever
112: eth1    inet scope global eth1\       valid_lft forever preferred_lft forever
114: eth0    inet scope global eth0\       valid_lft forever preferred_lft forever
/app # 
/app # ip route show
default via dev eth0 dev eth1  src dev eth0  src

So there is a clue: we have two interfaces, but our default route goes out via eth0. It just so happens that our iptables rules (see the linked iptables output above) performed DNAT for tcp dpt:5000 to: So traffic from the outside comes into this container on the eth1 interface but leaves via the eth0 interface, which doesn't play nicely with the iptables rules docker has set up.

We can prove that here by asking what route we will take when a packet leaves the machine:

/app # ip route get from from via dev eth0

Which basically means it will leave via eth0 even though it came in on eth1. The Docker documentation was updated in this git commit to try to explain the behavior when multiple interfaces are attached to a container.

Test Out Theory Using Source Based IP Routing

To test the theory, we can use source-based IP routing (some reading on that here). Basically, the idea is to create policy rules that make IP traffic leave on the same interface it came in on.

To perform the test we'll need our container to be privileged so we can add routes. Modify the docker-compose.yml to add privileged: true to the voting-app:

  voting-app:
    build: ./voting-app/.
    volumes:
      - ./voting-app:/app
    ports:
      - "5000:80"
    networks:
      - front-tier
      - back-tier
    privileged: true

Take down and bring up the application:

# docker-compose down
# docker-compose up -d

Exec into the container and create a new policy rule for packets originating from the network. Tell packets matching this rule to look up routing table 200:

# docker exec -it examplevotingapp_voting-app_1 /bin/sh
/app # ip rule add from table 200

Now add a default route for to routing table 200. Show the routing table after that and the rules as well:

/app # ip route add default via dev eth1 table 200
/app # ip route show table 200
default via dev eth1
/app # ip rule show
0:      from all lookup local 
32765:  from lookup 200 
32766:  from all lookup main 
32767:  from all lookup default

Now ask the kernel where a packet originating from our address will get sent:

/app # ip route get from from via dev eth1

And finally, go back to the host and check to see if everything works now:

# curl --connect-timeout 5 &>/dev/null && echo success || echo failure
# curl --connect-timeout 5 &>/dev/null && echo success || echo failure


I don't know if source based routing can be incorporated into docker to fix this problem or if there is a better solution. I guess we'll have to wait and find out.



NOTE I used the following versions of software for this blog post:

# rpm -q docker docker-compose kernel-core
DEVit Conf 2016

It's been another busy week after DEVit conf took place in Thessaloniki. Here are my impressions.

DEVit 2016


TechMinistry is Thessaloniki's hacker space, hosted at a central location near major shopping streets. I attended an Open Source Wednesday meeting. From the event description, I thought there was going to be a discussion about getting involved with Firefox; however, that was not the case. Once people started coming in, they formed organic groups and started discussing various topics on their own.

I was also shown their 3D printer, which IMO is the most precise 3D printer I've seen so far. Imagine what it would be like to click Print, sometime in the future, and have your online orders appear on your desk overnight. That would be quite cool!

I've met with Christos Bacharakis, a Mozilla representative for Greece, who gave me some goodies for my students at HackBulgaria!

On Thursday I spent the day merging pull requests for MrSenko/pelican-octopress-theme and attended the DEVit Speakers dinner at Massalia. Food and drinks were very good and I even found a new recipe for mushrooms with ouzo, of which I think I had a bit too many :).

I was also told that "a full stack developer is a developer who can introduce a bug to every layer of the software stack". I can't agree more!


The conference day started with a huge delay due to long queues for registration. The first talk I attended, and the best one IMO, was Need It Robust? Make It Fragile! by Yegor Bugayenko (watch the video). There he talked about two different approaches to writing software: fail safe vs. fail fast.

He argues that when software is designed to fail fast, bugs are discovered earlier in the development cycle/software lifetime and are thus easier to fix, making the whole system more robust and more stable. On the other hand, when software is designed to hide failures and tries to recover auto-magically, the same problems remain hidden for longer, and when they are finally discovered they are harder to fix. This is mostly because the original error condition is hidden and manifests in a different way, which makes it harder to debug.

Yegor gave several examples, all of them valid code, which he considers bad practice. For example, imagine we have a function that accepts a filename as a parameter:

def read_file_fail_safe(fname):
    if not os.path.exists(fname):
        return -1

    # read the file, do something else
    return bytes_read

def read_file_fail_fast(fname):
    if not os.path.exists(fname):
        raise Exception('File does not exist')

    # read the file, do something else
    return bytes_read

In the first example, read_file_fail_safe returns -1 on error. The trouble is that whoever calls this method may not check for errors, thus corrupting the flow of the program further down the line. You may also want to collect metrics and update your database with the number of bytes processed: an unchecked -1 will totally skew your metrics. C programmers out there will quickly remember at least one case where they didn't check a return code for errors!

The second example, read_file_fail_fast, will raise an exception the moment it encounters a problem. It's not its fault that the file doesn't exist and there's nothing it can do about it, nor is it its job to do anything about it. The exception will surface back to the caller, who will be notified about the problem and can take appropriate action to resolve it.

Yegor was also unhappy that many books teach fail safe and that even IDEs (for Java) generate fail-safe boilerplate code (need to check this)! Indeed, it was me who asked the first question, Are there any tools to detect fail-safe code patterns?, and it turns out there aren't (for the majority of cases, that is). If you happen to know of such a tool, please post a link in the comments below.

I was a bit disappointed by the rest of the talks. They were all high-level overviews IMO and didn't go deeply technical. Last year was better. I also wanted to attend the GitHub Patchwork workshop, but looking at the agenda it appeared to be aimed at users who are just starting with git and GitHub (which I'm not).

The closing session of the day was "Real time front-end alchemy, or: capturing, playing, altering and encoding video and audio streams, without servers or plugins!" by Soledad Penades from Mozilla. There she gave a demo of the latest and greatest in terms of audio and video capturing, recording, and mixing natively in the browser. This is definitely very cool for apps in the audio/video space, but I can also imagine an application for us software testers.

Depending on computational and memory requirements you should be able to record everything the user does in their browser (while on your website) and send it back home when they want to report an error or contact support. Definitely better than screenshots and having to go back and forth until the exact steps to reproduce are established.

Fedora is on diaspora*

diaspora* is a distributed social networking platform composed of nodes, called pods. These pods are linked together to allow users to connect seamlessly. This idea differs from the traditional social network, where user data is centralized and controlled by a single entity. diaspora* is also free as in speech, so you can use it however you like. diaspora* also values your privacy: you don’t have to use your real identity, and you have complete control over who sees your content using Aspects.

Read on to learn more about how to get started on diaspora* and get the latest updates from Fedora once you’re there.

Getting Started

The first step in getting started with diaspora* is to select a pod. Your pod is where your data is stored. You can select a pod based on any criteria, such as geographic location, uptime, or which domain you like best. Fedora is hosted on, and this is a great place to start. But you can find a list of available pods, along with their locations, uptime, and supported services, at If you’re feeling adventurous, you can even create your own pod. You can find pod installation instructions on the diaspora* project wiki.

diaspora* sign up page

Setting up an account

  1. Go to your chosen pod and click the Sign up button in the top right corner of the page.
  2. Enter your information in the form provided and click Sign up.
  3. Enter some information about yourself, including your name (this can be whatever you like, unlike some other services), a profile photo, and some hashtags for your interests.
  4. Introduce yourself to the community. An introductory message is prepared for you the first time you view your stream.
  5. (Optional) Enter your diaspora* id, desired URL, and email to get a shortened URL for your account.

Connect With Fedora

Fedora on diaspora*

Once you're set up on diaspora*, you can find Fedora by searching for the account or by visiting its page directly. On the Fedora page, click Add Contact in the top right-hand corner of the window, then select the Aspect you'd like to add Fedora to. Once you do this, any new updates from Fedora appear in your stream.

Test Fedora 24 Beta in an OpenStack cloud

Although there are a few weeks remaining before Fedora 24 is released, you can test out the Fedora 24 Beta release today! This is a great way to get a sneak peek at new features and help find bugs that still need a fix.
Fedora Infinity Logo
The Fedora Cloud image is available for download from your favorite local mirror or directly from Fedora’s servers. In this post, I’ll show you how to import this image into an OpenStack environment and begin testing Fedora 24 Beta.

One last thing: this is beta software. It has been reliable for me so far, but your experience may vary. I would recommend waiting for the final release before deploying any mission critical applications on it.

Importing the image

The older glance client (version 1) allows you to import an image from a URL that is reachable from your OpenStack environment. This is helpful, since my OpenStack cloud has a much faster connection to the internet (1 Gbps) than my home does (~20 Mbps upload). However, the ability to import from a URL was removed in version 2 of the glance client, and the OpenStackClient doesn't offer the feature either.

There are two options here:

  • Install an older version of the glance client
  • Use Horizon (the web dashboard)

Getting an older version of the glance client installed is challenging. The OpenStack requirements file for the Liberty release leaves the glance client without a maximum version cap, and it's difficult to get all of the dependencies in order to make the older client work.
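
For reference, the removed capability looked roughly like this through the version 1 Python bindings. This is a hedged sketch, not something run against a live cloud: the URL is a placeholder, and the glanceclient call itself is left in a comment because it needs a real endpoint and token.

```python
# Parameters for a v1 image-create with "copy_from", the URL-import
# feature that was dropped in v2 (the URL below is a placeholder).
create_kwargs = {
    "name": "Fedora 24 Cloud Beta",
    "disk_format": "qcow2",
    "container_format": "bare",
    "copy_from": "https://example.org/Fedora-Cloud-Base-24_Beta.qcow2",
}

# With python-glanceclient pinned to a 1.x-era release, the call would be:
#   from glanceclient import Client
#   glance = Client("1", endpoint, token=token)
#   glance.images.create(**create_kwargs)
print(sorted(create_kwargs))
# ['container_format', 'copy_from', 'disk_format', 'name']
```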

Let’s use Horizon instead so we can get back to the reason for the post.

Adding an image in Horizon

Log into the Horizon panel and click Compute > Images. Click + Create Image at the top right of the page and a new window should appear. Add this information in the window:

  • Name: Fedora 24 Cloud Beta
  • Image Source: Image Location
  • Image Location:
  • Format: QCOW2 – QEMU Emulator
  • Copy Data: ensure the box is checked

When you’re finished, the window should look like this:

Adding Fedora 24 Beta image in Horizon

Click Create Image and the images listing should show Saving for a short period of time. Once it switches to Active, you’re ready to build an instance.

Building the instance

Since we’re already in Horizon, we can finish out the build process there.

On the image listing page, find the row with the image we just uploaded and click Launch Instance on the right side. A new window will appear. The Image Name drop-down should already have the Fedora 24 Beta image selected. From here, just choose an instance name, select a security group and keypair (on the Access & Security tab), and a network (on the Networking tab). Be sure to choose a flavor that has some available storage as well (m1.tiny is not enough).

Click Launch and wait for the instance to boot.

Once the instance build has finished, you can connect to the instance over ssh as the fedora user. If your security group allows the connection and your keypair was configured correctly, you should be inside your new Fedora 24 Beta instance!

Not sure what to do next? Here are some suggestions:

  • Update all packages and reboot (to ensure that you are testing the latest updates)
  • Install some familiar applications and verify that they work properly
  • Test out your existing automation or configuration management tools
  • Open bug tickets!

May 24, 2016

All systems go
New status good: Everything seems to be working. for services: The Koji Buildsystem, Koschei Continuous Integration, Package maintainers git repositories, Package Updates Manager
Fedora-Hubs: Google Summer of Code 2016


A warm hello to the Summer Coding World.

Devyani is a CS undergraduate who will be working on Fedora-Hubs as her Google Summer of Code 2016 project.

The Google Summer of Code 2016 results were announced on 22 April 2016. The Community Bonding Period has been amazing: meeting like-minded people, making new friends, and being a part of this awesome community!

In early May, I started to hack on the Feed widget of the project, Fedora-Hubs.
The project is available on Pagure.

I had to go through the documentation and the hubs/widgets code in detail, so that week was more of a reading week than a coding week. Hubs is implemented using Flask, with Flask-SQLAlchemy for the database layer. Flask-SQLAlchemy wasn't exactly my strength, especially the models and the relationships between them. Hubs are related to widgets by a many-to-many relationship. I went thoroughly through the Flask-SQLAlchemy documentation and read up on establishing relationships between tables. I also tried writing code to establish the relations between the tables, to understand the connections between the various entities: Hubs, Users, Widgets. 😛
This helped me a lot in understanding the basic structure of Hubs.
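
As an illustration of what that relationship boils down to at the SQL level (this is a generic sketch using the stdlib sqlite3 module, not the actual Hubs schema), a many-to-many link between hubs and widgets is stored in an association table:

```python
import sqlite3

# Minimal sketch of a many-to-many relationship, using the
# association-table pattern that Flask-SQLAlchemy models map onto.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hubs (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE hubs_widgets (          -- association table
    hub_id INTEGER REFERENCES hubs(id),
    widget_id INTEGER REFERENCES widgets(id),
    PRIMARY KEY (hub_id, widget_id)
);
""")
conn.executemany("INSERT INTO hubs VALUES (?, ?)", [(1, "infra"), (2, "design")])
conn.executemany("INSERT INTO widgets VALUES (?, ?)", [(1, "feed"), (2, "badges")])
# Both hubs share the feed widget; only 'infra' also shows badges.
conn.executemany("INSERT INTO hubs_widgets VALUES (?, ?)", [(1, 1), (1, 2), (2, 1)])

widgets_for_infra = [row[0] for row in conn.execute(
    "SELECT w.name FROM widgets w JOIN hubs_widgets hw ON w.id = hw.widget_id "
    "WHERE hw.hub_id = 1 ORDER BY w.id")]
print(widgets_for_infra)  # ['feed', 'badges']
```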

I also went through the Feed widget to work on grouping similar threads of mailing-list posts together, based on Mizmo's mockups. That's still a work in progress, though.
Broadly, the summer goal is to integrate Hubs successfully, with all the widgets working efficiently. By the end of the summer, we plan to give each user an individual hubs page featuring the widgets; a bookmarks bar for switching tabs at will; and information about the unread notifications from the hubs they follow or are subscribed to.
We also plan to work on the badges widget before Flock 2016, which displays a path of the badges that can be earned. For the more interested folks, my project proposal is up to date on the wiki. :)

For more updates on Hubs, follow me at @devyanikota. I go by the nick devyani7 on Freenode; you can find me lurking in #dgplug, #fedora-apps, and #fedora-hubs. My GitHub profile and Pagure profile.

Until next time, happy hacking!

There are scheduled downtimes in progress
New status scheduled: Scheduled reboots in progress for services: Package maintainers git repositories, Koschei Continuous Integration, The Koji Buildsystem, Package Updates Manager
LVFS Technical White Paper

I spent a good chunk of today writing a technical white paper titled Introducing the Linux Vendor Firmware Service. I'd really appreciate any comments, either from people who have seen all the progress from the start or from people who don't know anything about it at all.

Typos and more general comments are all welcome, and once I've got something a bit more polished I'll be sending it to some important suits at a few well-known companies. Thanks for any help!

Fedora 24: Let’s have a party!

Fedora 24 entered Beta status just a couple of weeks ago. With the next Fedora release not so far away, it's time for the Ambassadors to plan their activities around the release. The most common such activity is the release party. A release party is also a great way for other contributors in the community to get involved with advocacy in their local regions.

Organizing a release party

How do you organize a release party? There is a wiki page with the full details. You will find there some hints about what you can start doing now and how to do it. You can also check the Ambassadors schedule for the time frame in which a release party should be held; with the current release schedule, this is between June 6th and July 15th.

First steps for a release party

If you decide to host a party for the Fedora 24 release, you should first create a wiki page for it. Please follow the naming convention for wiki pages.

You should then link that page to the overview page of all release events, which you can find here.

Check with your regional Ambassadors whether you can get media in time for your event. If not, you can offer to create USB flash drives with Fedora 24. Also check whether you need swag (t-shirts, stickers, pins, etc.). For posters, banners, or other similar items, check whether they already exist somewhere; if they don't, you can create a ticket for the Design Team.

Have fun and earn a badge

Then you can have your party! Just make sure you write a report about it and take some nice pictures with happy faces. Then you will surely earn the badge for release party organizers.

The post Fedora 24: Let’s have a party! appeared first on Fedora Community Blog.

May 23, 2016

dgplug summer training student Tosin Damilare James Animashaun

Your name (blog/twitter) and what do you do

My name is Tosin Damilare James Animashaun (IRC: acetakwas; Twitter: @acetakwas; Github:, Blog:

I am currently a part-time Software Engineering student at NIIT, Lagos. I am also a co-founder at a startup called Krohx. We are in the final stages of deploying a job/hiring web platform that targets seekers of short-term jobs while also easing the process of getting service delivery for the hirers.

Through my involvement with the burgeoning Python community in Nigeria, I recently got elected as secretary of the organization.

How did you learn about the training? (++ My Experience)

I fail to recall exactly how I learnt about the community now, but I remember joining the mailing list and going on from there. This was in fact my first time joining a mailing list. Upon learning about the #dgplug IRC channel, I promptly added the channel to my IRC client's favourites.

I was so enthusiastic about this, as I had no prior experience of an online tutoring session. The feeling was truly exhilarating.

Having come from a Java background, I was really interested in learning the Python programming language as soon as I caught wind of it. At the time, I had volunteered to write some scripts to help automate some of the manual processing of NIIT's annual scholarship exam records/documents at my school for that year. I accomplished that using Java, but in hindsight, I feel I could have saved a lot of time if I had used Python.

This was a blog post I wrote as part of the tasks given to us. I have failed to maintain that blog, though; I now write here.

How has this training changed (if at all) your life?

Python is surely the more beginner-friendly language, and it's making its rounds these days. Every programmer should have it in their arsenal. One of the things I love most about the language is its versatility in application.

No doubt, I have continued to improve with Python every day, but more so in my programming journey as a whole. I have become more and more familiar with various mainstream concepts and tools. The idea of contributing to open source can be overwhelming at first, but following the path laid out in this course, one would be surprised to learn how easy it can be.

I volunteered to mentor attendees at a Django Girls workshop held in Lagos, Nigeria. I picked up Django in the week prior to the event because, frankly, there wasn't much of a learning curve, since I had already used Flask, a different Python-based web framework.

Flask was a framework I first got introduced to at the summer training, but I did not revisit it until months later. As a backend web-developer, I now do most of my work in Flask.

In my experience, there is no better or faster way to grow than to collaborate, and that is what open source preaches. Note that growth here is not only about the project; it applies to each contributing individual. Heck, even if you do not contribute, you will grow by reading good open-source code – this I can attest to. And joining the dgplug community has surely made open source a more approachable endeavour.

Have you contributed to any upstream project(s)? If yes, then details.

While (a bit disappointingly) I have yet to contribute to an upstream project, I do have quite a number of them lined up. Most probably, the first would be the official Slack client library written in Python. Having recently developed a bot, which was used to conduct the election of committee members of the Python Users Nigeria Group, I have a number of changes I plan to push to the aforementioned API library.

With my fork of this project, I have also tried my hand at revamping the architecture of the official dgplug bot, ekanora, which is used to run the summer training sessions.

Some other projects I’d like to contribute to include:

Any tips for the next batch of participants. (++Final words)

I am not sure I am in the best position to answer this. However, I would advise that you endeavour to remain focused throughout; simply keep at it. Even if your experience at the summer camp doesn't go as you expect, you should definitely not count it a loss. Instead, make it a reason to put in more effort and grow as you go.

Depending on your personal schedule, you might miss one or two classes, but that shouldn't deter you from continuing the program. The logs are available for covering such lapses. You could also consult the summer training docs to get the hang of the agenda, so you can anticipate classes beforehand and be better prepared.

While I participated in the training, there were times when there was no power supply, and I had to use my phone. Thankfully, there are IRC clients for mobile phones (I recommend AndChat).

The class sessions have a few rules, and the community would love it if you played by them. Not to worry, they are not stringent; they are just there to keep everyone well behaved. For instance, the use of SMS-speak is discouraged.

You should keep notes beside you when attending the sessions. Writing is a good way to memorize things in my opinion. Also, although most clients would have this by default, you should ensure your IRC client is configured to log conversations. Going back over what you’ve learnt would do you great good as you mightn’t be able to keep up with the speed of the class at times.

If you have some time before the program begins, two things I'd advise you to become familiar with are: your OS (Linux, preferably Fedora or Ubuntu) and an IRC client (I suggest HexChat).

I learned quite a lot from the summer training. I was really obsessed with attending whenever I could, although, coming from a different country, the timing wasn't favourable, often catching me in the middle of the day's activities – school especially. I made efforts to invite a few friends to join – one or two did, but the determination to keep on was obviously lacking, which reminds me of a statement I heard a while back that reads something like this:

"If 40 people begin a course like this together, only about 5 get to

That in my experience is very often the case. Be aware that a lot of people are on this bandwagon. The question to ask yourself, is “Do I want to be among the surviving commutants at the end of this journey?” Unless you plan to experiment with this experience, if your answer is yes, then I congratulate you as you begin the journey that could potentially kick-start your software engineering career/journey.

Know this, free and open source software is for the good of all. The open source community is wide and continues to grow. It welcomes all and sundry, but it poses a few hurdles to sieve the wheat from the chaff. In the end, the gains are worth it.

PostBooks, PostgreSQL and talk

PostBooks 4.9.5 was recently released and the packages for Debian (including jessie-backports), Ubuntu and Fedora have been updated.

The PostBooks talk in Rapperswil, Switzerland is coming up on Friday, 24 June. It is at the HSR Hochschule für Technik Rapperswil, at the eastern end of Lake Zurich.

I'll be making a presentation about PostBooks in the business track at 11:00.

Getting started with accounting using free, open source software

If you are not currently using a double-entry accounting system or if you are looking to move to a system that is based on completely free, open source software, please see my comparison of free, open source accounting software.

Free and open source solutions offer significant advantages: flexibility (businesses can choose any programmer to modify the code), and SQL back-ends, multi-user support, and multi-currency support as standard. These are all things that proprietary vendors charge extra money for.

Since accounting software is the lowest common denominator in the world of business software, people keen on the success of free and open source software may find that encouraging businesses to use one of these solutions is a great way to lay a foundation on which other free software solutions can thrive.

PostBooks new web and mobile front end

xTuple, the team behind PostBooks, has been busy developing a new web and mobile front end for their ERP, CRM, and accounting suite, powered by the same PostgreSQL backend as the Linux desktop client.

More help is needed to create official packages of the JavaScript dependencies before the Web and Mobile solution itself can be packaged.

Flisol 2016 Managua

Flisol is usually held in April, but for various reasons it was postponed this year to 14 May 2016. At this event, Fedora had the opportunity to be present in three spaces: installations, the tables, and the talks.

Porfirio and Samuel helped out in the installation area. As always, the support there is general, but it is important that contributors with Fedora experience are available, since they give users confidence. They were also copying ISO images of the operating system.

Several of us took turns at the table. William, Naima, Fernando, Eduardo and I welcomed visitors, told them about Fedora, and answered their questions. Thanks to the Fedora Project's sponsorship we had stickers to hand out, and as a community we had collected money to have some discs available.

Something that drew a lot of attention was the presence of robots from the icaro project: robots made 100% with Fedora, both hardware and software. We had a spider with an NP-05 board and an obstacle-avoiding car with an NP-07 board running. We also have a robot arm under construction; EezyBotArm is a design meant to be replicated on 3D printers. With an Icaro NP-06 board we were controlling 4 servos using potentiometers. We still have adjustments to make, but it gives an idea of the variety of Icaro applications and of how the boards have evolved. We also had a laptop to show the use of Icaro bloques, although most of the time it was used for Fedora questions.

We received the general public, but in particular we can highlight the visit of cyber-journalism students who did their interviewing practice with the various exhibitors. One of the interviews was published in El Nuevo Diario. Two Scout troops also visited us.

Omar gave a talk on virtualizing Android systems. For my part, I spoke about educational robotics. (Download the presentation here.)

Aura Lila supported the event and took many photos; thanks to her I can illustrate my report on this event.

Special thanks to the Mozilla Nicaragua community for organizing this event, where we could all share different facets of free software.

Make a dump of your MySQL DB on OpenShift

If you want to make a backup of your database, or just want to create a dump to migrate your database to another provider, follow these steps:

1. Connect to your running application through SSH. To do that, click the link on OpenShift and run the displayed command.

2. Create the dump and compress it. The mysqldump invocation below assumes the environment variables set by the standard OpenShift MySQL cartridge:

mysqldump -h "$OPENSHIFT_MYSQL_DB_HOST" -P "$OPENSHIFT_MYSQL_DB_PORT" \
    -u "$OPENSHIFT_MYSQL_DB_USERNAME" --password="$OPENSHIFT_MYSQL_DB_PASSWORD" \
    "$OPENSHIFT_APP_NAME" > dump.sql
gzip dump.sql

3. After that you can scp the dump to any location you want.

External Plugins in GNOME Software (6)

This is my last post about the gnome-software plugin structure. If you want more, join the mailing list and ask a question. If you’re not sure how something works then I’ve done a poor job on the docs, and I’m happy to explain as much as required.

GNOME Software used to provide a per-process plugin cache, automatically de-duplicating applications and trying to be smarter than the plugins themselves. This involved merging applications created by different plugins and really didn’t work very well. For 3.20 and later we moved to a per-plugin cache which allows the plugin to control getting and adding applications to the cache and invalidating it when it made sense. This seems to work a lot better and is an order of magnitude less complicated. Plugins can trivially be ported to using the cache using something like this:

   /* create new object */
   id = gs_plugin_flatpak_build_id (inst, xref);
-  app = gs_app_new (id);
+  app = gs_plugin_cache_lookup (plugin, id);
+  if (app == NULL) {
+     app = gs_app_new (id);
+     gs_plugin_cache_add (plugin, id, app);
+  }

Using the cache has two main benefits for plugins. The first is that we avoid creating duplicate GsApp objects for the same logical thing. This means we can query the installed list, start installing an application, then query it again before the install has finished. The GsApp returned from the second add_installed() request will be the same GObject, and thus all the signals connecting up to the UI will still be correct. This means we don’t have to care about migrating the UI widgets as the object changes and things like progress bars just magically work.

The other benefit is more obvious. If we know the application state from a previous request we don’t have to query a daemon or do another blocking library call to get it. This does of course imply that the plugin is properly invalidating the cache using gs_plugin_cache_invalidate() which it should do whenever a change is detected. Whether a plugin uses the cache for this reason is up to the plugin, but if it does it is up to the plugin to make sure the cache doesn’t get out of sync.

And one last thing: if you're thinking of building an out-of-tree plugin for production use, ask yourself if it actually belongs upstream. Upstream plugins get ported as the API evolves, and I'm already happily carrying Ubuntu- and Fedora-specific plugins that either self-disable at runtime or are protected by an --enable-foo configure argument.

External Plugins in GNOME Software (5)

This is my penultimate post about the gnome-software plugin structure. If you’ve followed everything so far, well done.

There’s a lot of flexibility in the gnome-software plugin structure; a plugin can add custom applications and handle things like search and icon loading in a totally custom way. Most of the time you don’t care about how search is implemented or how icons are going to be loaded, and you can re-use a lot of the existing code in the appstream plugin. To do this you just save an AppStream-format XML file in either /usr/share/app-info/xmls/, /var/cache/app-info/xmls/ or ~/.local/share/app-info/xmls/. GNOME Software will immediately notice any new files, or changes to existing files as it has set up the various inotify watches.
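
To give a feel for what such a file contains, here is a minimal sketch built with Python's stdlib ElementTree. The component fields are illustrative only, not a complete or validated AppStream component:

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative AppStream-format file like the ones a plugin
# could drop into ~/.local/share/app-info/xmls/ (fields are examples).
components = ET.Element("components", version="0.9", origin="example-plugin")
component = ET.SubElement(components, "component", type="desktop")
ET.SubElement(component, "id").text = "example:foo.desktop"
ET.SubElement(component, "name").text = "Foo"
ET.SubElement(component, "summary").text = "An example application"

xml_bytes = ET.tostring(components, encoding="utf-8")
print(xml_bytes.decode())
```

A real plugin would write this out to one of the watched directories above, and GNOME Software would pick it up via its inotify watches.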

This allows plugins to care a lot less about how applications are going to be shown. For example, the steam plugin downloads and parses the descriptions from a remote service during gs_plugin_refresh(), and also finds the best icon types and downloads them too. Then it exports the data to an AppStream XML file, saving it to your home directory. This allows all the applications to be easily created (and then refined) using something as simple as gs_app_new("steam:foo.desktop"). All the search tokenisation and matching is done automatically, so it makes the plugin much simpler and faster.

The only extra step the steam plugin needs to do is implement the gs_plugin_adopt_app() function. This is called when an application does not have a management plugin set, and allows the plugin to claim the application for itself so it can handle installation, removal and updating. In the case of steam it could check the ID has a prefix of steam: or could check some other plugin-specific metadata using gs_app_get_metadata_item().

Another good example is the fwupd plugin, which wants to handle any firmware we've discovered in the AppStream XML. This might be shipped by the vendor in a package using Satellite, or downloaded from the LVFS. It wouldn't be kind to set a management plugin explicitly, in case XFCE or KDE want to handle this in a different way. The adoption function in this case is trivial:

void
gs_plugin_adopt_app (GsPlugin *plugin, GsApp *app)
{
  if (gs_app_get_kind (app) == AS_APP_KIND_FIRMWARE)
    gs_app_set_management_plugin (app, "fwupd");
}

The next (and last!) blog post I'm going to write is about the per-plugin cache that's available to plugins to help speed up some operations. In related news, we now have a mailing list, so if you're interested in this stuff I'd encourage you to join and ask questions there. I also released gnome-software 3.21.2 this morning, so if you want to try all this plugin stuff yourself, your distro is probably going to be updating packages soon.

Write for the Fedora Magazine

Know an awesome piece of Fedora news? Have a good idea for a Fedora how-to? Do you or someone you know use Fedora in an interesting way? We’re always looking for new contributors to write awesome, relevant content. The Magazine is run by the Fedora community — and that’s all of us. You can help too!


What content does the Magazine need?

Glad you asked. We often feature material for desktop users, since there are many of them out there! But that’s not all we publish. We want the Magazine to feature lots of different content for the general public.

Sysadmins and power users

We love to publish articles for system administrators and power users who dive under the hood. Here are some recent examples:


We don’t forget about developers, either. We want to help people use Fedora to build and make incredible things. Here are some recent articles focusing on developers:

Interviews, projects, and links

We also feature interviews with people using Fedora in interesting ways. We even link to other useful content about Fedora. We’ve run interviews recently with people using Fedora to educate, power lasers, or create better photography. If you find content about Fedora you think would be interesting to our readers, let us know.

How do I get started?

It’s easy to start writing for Fedora Magazine! You just need to have decent skill in written English, since that’s the language in which we publish. Our editors can help polish your work for maximum impact.

The writers and editors use the Fedora Marketing mailing list to plan future articles. Create a new thread to that list and introduce yourself. If you have some ideas for posts, add them to your message as well. The Magazine team will then guide you through getting started. The team also hangs out on #fedora-mktg on Freenode. Drop by, and we can help you get started.

Image courtesy Dustin Lee – originally posted to Unsplash as Untitled




Creating High Altitude Balloon Tracking Systems Using Haskell

When I first started attending The University of Akron after high school back in 2010, I got involved in the ham radio club on campus, W8UPD. In addition to quickly redoing their website (because it badly needed it) and reimplementing their infrastructure using Puppet and VMware, I also became their general-purpose "code-writing guy."

One project they took on towards the end of my involvement (before transferring to Youngstown State University) was a high altitude balloon flight. The project was being led by several students and alumni. The balloon would send telemetry down via a ham radio RF link, using RTTY (for reasons that aren't important here), and on the launch day, we would go chase it and try to find it based on the information it was sending down to us.

Immediately I saw an opportunity to have some fun writing code for this. Jimmy Carter/KG4SGP had already begun work on the embedded code to put on the balloon payload itself. But nobody had started on a system for working with the data on the ground, so I decided to get to work. Thus MapView was born.

I'll skip over the first two versions of MapView and talk about what MapView looks like nowadays.

MapView is a Haskell library for creating high-altitude balloon tracking systems. Actually, it's a collection of libraries, but I'll get to that.

You see, since the first version of MapView, this same group of people has launched other balloon projects, and each of them had different specifications. Without going into much detail, earlier versions of MapView required changing much of the "backend" code for each flight. It was not set up to be a reusable project, and certainly would not have been easy for another group to start using if they wanted to. It was open source, but barely worth the hassle for anyone except us. And barely even for us.

So I began thinking about what a more reusable, composable MapView might look like. By MapView 2, I was already using Haskell and there was no reason to change that. But I clearly needed to restructure everything and rewrite most of the code from scratch -- something that worked out well because the code was pretty bad to start with.

I had a couple of goals for the rewrite. Compositionality was key, of course, but so were ease of use, taking advantage of the type system as much as possible, and making use of high-quality libraries where I could. Robustness is critical as well.

I'll talk more about the design choices in another post (along with some of the current things I hate about it that should be fixed in the future). For now, I want to talk about using it to create a "mission control" for a balloon flight.

A sample flight telemetry specification

Let us assume that we'll be launching a flight with a RTTY downlink that transmits telemetry in the following way:

  1. A short training sequence to allow a demodulator to lock onto the RTTY signal
  2. A newline
  3. The callsign
  4. A colon
  5. The latitude
  6. A colon
  7. The longitude
  8. A colon
  9. The altitude in meters
  10. A newline

For example: R1R1R1\nW1ABC:39.040445:-94.619104:4193\n

In this simple example, no CRC hash is used.
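
MapView itself is written in Haskell, but the parsing job is easy to see in any language. Here is a minimal Python sketch (the function is mine, not part of MapView) that handles one colon-delimited sentence:

```python
def parse_telemetry(sentence):
    """Parse one telemetry sentence of the form CALLSIGN:LAT:LON:ALT_M.

    The training sequence (e.g. "R1R1R1") occupies its own line, so by
    the time we see a colon-delimited line it should be real telemetry.
    """
    fields = sentence.strip().split(":")
    if len(fields) != 4:
        raise ValueError(f"malformed sentence: {sentence!r}")
    callsign, lat, lon, alt = fields
    return callsign, float(lat), float(lon), int(alt)

# The downlink example from above, minus the training sequence:
print(parse_telemetry("W1ABC:39.040445:-94.619104:4193"))
# ('W1ABC', 39.040445, -94.619104, 4193)
```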

Pick a modem, any modem

We will need a software modem to decode the RTTY transmission. The minimodem package works well for this, but you can use anything that outputs the received data to standard output. It doesn't matter whether it outputs each character or each newline-terminated line – either will work fine with MapView.

Once you've decided on and installed a terminal-based modem, it's time to write your configuration file. In the same spirit as the xmonad window manager, your configuration file is ultimately what becomes the program you run. This is why MapView 3 is a library for creating balloon-tracking mission control programs.

Okay, so let's get started.

Thoughts on our security bubble
Last week I spent time with a lot of normal people. Well, they were all computer folks, but not the sort one would find in a typical security circle. It really got me thinking about the bubble we live in as security people.

There are a lot of things we take for granted. I can reference Dunning Kruger and "turtles all the way down" and not have to explain myself. If I talk about a buffer overflow, or most any security term I never have to explain what's going on. Even some of the more obscure technologies like container scanners and SCAP don't need but a few words to explain what happens. It's easy to talk to security people, at least it's easy for security people to talk to other security people.

Sometimes it's good to get out of your comfort zone though. Last week I spent much of my time well outside groups I was comfortable with, and I think that's something more of us should do. I really do think this is a big problem the security universe suffers from. There are a lot of us who don't really get out there and see what it's really like. I know I always assume everyone else knows a lot about security. They don't. They usually don't even know a little about security. This puts us in a place where we think everyone else is dumb, and they think we're idiots. Do you listen to someone who appears to be a smug jerk? Of course not, nobody does. This is one of the reasons it can be hard to get our messages across.

If we want people to listen to us, they have to trust us. If we want people to trust us, we have to make them understand us. If we want people to understand us, we have to understand them first. That bit of circular Yoda logic sounds insane, but it really is true. There's nothing worse than trying to help someone only to have them ignore you, or worse, do the opposite because they can.

So here's what I want to do. I have some homework for you, assuming you made it this far, which you probably did if you're reading this. Go talk to some non-security people. Don't try to educate them on anything, just listen to what they have to say. Even if they're wrong, especially if they're wrong, don't correct them. Just listen. Listen and see what you can learn. I bet it will be something amazing.

Let me know what you learn: @joshbressers
Copr CLI - Building from Tito, Mock and PyPI

Copr has supported building directly from a Git repository managed by Tito, and more generally building via Mock SCM, for a while now. Since the last release it is also possible to generate a .spec file and submit a build from a PyPI package. You can read more about it here. What was missing from these features was command-line support, which is finally done now!

Since copr-cli-1.50 and python-copr-1.68 you can use the following commands.


copr-cli buildtito <username>/<project> --git-url=<url>


--git-url URL
    Url to a project managed by Tito, required.

--git-dir DIRECTORY
    Relative path from Git root to directory containing .spec file.

--git-branch BRANCH
    Check out a specific branch of the repository.

    To build from the last commit instead of the last release tag.


copr-cli buildmock <username>/<project> --scm-url=<url> --spec=<path/to/foo.spec>


--scm-type TYPE
    Specify versioning tool, default is git.

--scm-url URL
    Url to a project versioned by Git or SVN, required.

--scm-branch BRANCH
    Check out a specific branch of the repository.

--spec FILE
    Relative path from SCM root to .spec file, required.


copr-cli buildpypi frostyx/foo --packagename=<pypi-pkgname>


--pythonversions [VERSION [VERSION ...]]
    For what Python versions to build (by default: 3 2)

--packageversion PYPIVERSION
    Version of the PyPI package to be built (by default latest)

--packagename PYPINAME
    Name of the PyPI package to be built, required.

Of course there are a lot of other options for these commands. Please see the manual page by running man copr-cli for more info.
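If you are scripting builds rather than typing them, the buildpypi invocation above can be assembled programmatically. The option names come from this post; the helper itself is illustrative, not part of copr-cli or python-copr:

```python
# Sketch: assemble a `copr-cli buildpypi` command line from the options
# documented above. The helper is illustrative, not part of copr-cli.

def buildpypi_argv(project, packagename, packageversion=None, pythonversions=None):
    argv = ["copr-cli", "buildpypi", project, "--packagename", packagename]
    if packageversion is not None:
        argv += ["--packageversion", packageversion]
    if pythonversions:
        argv += ["--pythonversions"] + [str(v) for v in pythonversions]
    return argv

argv = buildpypi_argv("frostyx/foo", "foo", pythonversions=[3, 2])
# the list could then be executed with subprocess.run(argv)
```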

Happy building everyone ;-)

May 22, 2016

External Plugins in GNOME Software (4)

After my last post, I wanted to talk more about the refine functionality in gnome-software. As previous examples have shown it’s very easy to add a new application to the search results, updates list or installed list. Some plugins don’t want to add more applications, but want to modify existing applications to add more information depending on what is required by the UI code. The reason we don’t just add everything at once is that for search-as-you-type to work effectively we need to return results in less than about 50ms and querying some data can take a long time. For example, it might take a few hundred ms to work out the download size for an application when a plugin has to also look at what dependencies are already installed. We only need this information once the user has clicked the search results and when the user is in the details panel, so we can save a ton of time not working out properties that are not useful.

Let's look at another example.

gboolean
gs_plugin_refine_app (GsPlugin *plugin,
                      GsApp *app,
                      GsPluginRefineFlags flags,
                      GCancellable *cancellable,
                      GError **error)
{
  /* not required */
  if ((flags & GS_PLUGIN_REFINE_FLAGS_REQUIRE_LICENSE) == 0)
    return TRUE;

  /* already set */
  if (gs_app_get_license (app) != NULL)
    return TRUE;

  /* FIXME, not just hardcoded! */
  if (g_strcmp0 (gs_app_get_id (app), "chiron.desktop") == 0)
    gs_app_set_license (app, "GPL-2.0 and LGPL-2.0+");

  return TRUE;
}

This is a simple example, but shows what a plugin needs to do. It first checks if the action is required, in this case GS_PLUGIN_REFINE_FLAGS_REQUIRE_LICENSE. This request is more common than you might expect as even the search results shows a non-free label if the license is unspecified or non-free. It then checks if the license is already set, returning with success if so. If not, it checks the application ID and hardcodes a license; in the real world this would be querying a database or parsing an additional config file. As mentioned before, if the license value is freely available without any extra work then it’s best just to set this at the same time as when adding the app with gs_app_list_add(). Think of refine as adding things that cost time to calculate only when really required.

The UI in gnome-software is quite forgiving for missing data, hiding sections or labels as required. Some things are required however, and forgetting to assign an icon or short description will get the application vetoed so that it’s not displayed at all. Helpfully, running gnome-software --verbose on the command line will tell you why an application isn’t shown along with any extra data.

As a last point, a few people have worried that these blogs are perhaps asking for trouble; external plugins have a chequered history in a number of projects and I’m sure gnome-software would be in an even worse position given that the core maintainer team is still so small. Being honest, if we break your external plugin due to an API change in the core you probably should have pushed your changes upstream sooner. There’s a reason you have to build with -DI_KNOW_THE_GNOME_SOFTWARE_API_IS_SUBJECT_TO_CHANGE.

spice OpenGL/virgl acceleration on Fedora 24
New in Fedora 24 virt is 3D accelerated SPICE graphics, via Virgl. This is kinda-sorta OpenGL passthrough from the VM up to the host machine. Much of the initial support has been around since qemu 2.5, but it's more generally accessible now that SPICE is in the mix, since that's the default display type used by virt-manager and gnome-boxes.

I'll explain below how you can test things on Fedora 24, but first let's cover the hurdles and caveats. This is far from being something that can be turned on by default and there's still serious integration issues to iron out. All of this is regarding usage with libvirt tools.

Caveats and known issues

Because of these issues, we haven't exposed this as a UI knob in any of the tools yet, to save us some redundant bug reports for the above issues from users who are just clicking a cool sounding check box :) Instead you'll need to explicitly opt in via the command line.

Testing it out

You'll need the following packages (or later) to test this:
  • qemu-2.6.0-2.fc24
  • libvirt-
  • virt-manager-1.3.2-4.20160520git2204de62d9.fc24
  • At least F24 beta on the host
  • Fedora 24 beta in the guest. Anything earlier is not going to actually enable the 3D acceleration. I have no idea about the state of other distributions. And to make it abundantly clear: this is Linux-only and likely will be for a long time at least; I don't know if Windows driver support is even on the radar.
Once you install a Fedora 24 VM through the standard methods, you can enable spice GL for your VM with these two commands:

$ virt-xml --connect $URI $VM_NAME --confirm --edit --video clearxml=yes,model=virtio,accel3d=yes
$ virt-xml --connect $URI $VM_NAME --confirm --edit --graphics clearxml=yes,type=spice,gl=on,listen=none

The first command will switch the graphics device to 'virtio' and enable the 3D acceleration setting. The second command will set up spice to listen locally only, and enable GL. Make sure to fully poweroff the VM afterwards for the settings to take effect. If you want to make the changes manually with 'virsh edit', the XML specifics are described in the spice GL documentation.

Once your VM has started up, you can verify that everything is working correctly by checking glxinfo output in the VM, 'virgl' should appear in the renderer string:

$ glxinfo | grep virgl
Device: virgl (0x1010)
OpenGL renderer string: Gallium 0.4 on virgl
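That check is easy to script if you want to verify a fleet of guests. A small sketch (the helper is hypothetical; in practice you would capture the output of glxinfo with subprocess, a sample is inlined here):

```python
# Sketch: verify virgl is active by scanning glxinfo output for the
# renderer string. A captured sample is inlined for illustration.

def using_virgl(glxinfo_output):
    for line in glxinfo_output.splitlines():
        if line.startswith("OpenGL renderer string:"):
            return "virgl" in line
    return False

sample = "Device: virgl (0x1010)\nOpenGL renderer string: Gallium 0.4 on virgl"
ok = using_virgl(sample)
```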

And of course the more fun test of giving supertuxkart a spin :)

Credit to Dave Airlie, Gerd Hoffman, and Marc-André Lureau for all the great work that got us to this point!
Amazon Web Services Command Line Interface (AWS CLI) - Cheat Sheet
I have been standing up quite a bit of infrastructure in AWS lately using the AWS CLI.  Here are some commands that I found helpful in a cheat sheet format. I'll show you how to create resources, query resources for information and how to update resources. Hopefully this will get you started quickly. The cheat sheet covers the following topics:

  • Setting up your environment.
  • Working with Virtual Private Clouds (VPC).
  • Working with Identity and Access Management (IAM).
  • Working with Route53.
  • Working with Elastic Load Balancers (ELB).
  • Working with SSH.
  • Working with DHCP.
  • Working with Elastic Compute Cloud (EC2).
  • Utilizing queries to gather information.

You can preview the AWS CLI cheat sheet (hover your mouse over the upper right corner to download):

You can test all these commands with Fedora images which can be launched here:

If you have any questions about any of the commands in particular, please drop a comment below and I'll try to help.  Much credit goes to Ryan Cook for frontloading a lot of this.
Running .NET Core RC2 on Fedora 23

If you want to try the latest and greatest of .NET Core on your Fedora machine you will be sad to hear that there isn’t a package available at the official website yet (Fedora support is coming):

Luckily with some strace magic and a package from CentOS repository it is very easy to set it up.

First let’s download the CentOS package from the official storage and unpack it:

mkdir ~/dotnet
cd ~/dotnet
tar xf dotnet-dev-centos-x64.1.0.0-preview1-002702.tar.gz
rm dotnet-dev-centos-x64.1.0.0-preview1-002702.tar.gz

And you’re done! There is a dotnet executable file which you can try to run but sadly it won’t work just yet…

[fedora@localhost dotnet]$ ./dotnet --version
Failed to initialize CoreCLR, HRESULT: 0x80131500

I tried to look for solutions on the GitHub issues page but haven’t found any, so I used strace to see what was wrong. I suspected there was a library missing for dotnet to work, and I was right: a lot of missing libraries for globalization. It turns out that the CentOS dotnet package expects libicu version 50, where Fedora already ships version 54. It’s an easy fix though: just download the CentOS libicu package and extract it; it won’t break the other version.
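The same strace trick generalizes: look for open calls that fail with ENOENT on shared libraries. A sketch of that filtering (illustrative helper, fed with lines from something like `strace -f -e trace=openat ./dotnet --version`):

```python
# Sketch: find shared libraries a binary tried and failed to open, the same
# kind of diagnosis strace revealed here. The helper is illustrative.
import re

def missing_libraries(strace_lines):
    missing = set()
    for line in strace_lines:
        m = re.search(r'open(?:at)?\(.*?"([^"]+\.so[^"]*)".*ENOENT', line)
        if m:
            missing.add(m.group(1))
    return sorted(missing)

sample = [
    'openat(AT_FDCWD, "/lib64/libicuuc.so.50", O_RDONLY) = -1 ENOENT (No such file or directory)',
    'openat(AT_FDCWD, "/lib64/libc.so.6", O_RDONLY) = 3',
]
libs = missing_libraries(sample)
```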

cd ~/dotnet
cd /
sudo rpm2cpio ~/dotnet/libicu-50.1.2-15.el7.x86_64.rpm | sudo cpio -idmv
rm ~/dotnet/libicu-50.1.2-15.el7.x86_64.rpm
cd ~/dotnet

At this point everything should be working, only thing you should do is make a symbolic link to the executable with:

sudo ln -s ~/dotnet/dotnet /usr/local/bin/dotnet

And then you are ready!

mkdir ~/App1 
cd ~/App1
dotnet new
dotnet restore
dotnet run

If you want to try ASP.NET there is a nice guide here:

Thanks for reading!
Have fun!

External Plugins in GNOME Software (3)

Lots of nice feedback from my last post, so here’s some new stuff. Up now is downloading new metadata and updates in plugins.

The plugin loader supports a gs_plugin_refresh() vfunc that is called in various situations. To ensure plugins have the minimum required metadata on disk it is called at startup, but with a cache age of infinite. This basically means the plugin must just ensure that any data exists no matter what the age.

Usually once per day, we’ll call gs_plugin_refresh() but with the correct cache age set (typically a little over 24 hours) which allows the plugin to download new metadata or payload files from remote servers. The gs_utils_get_file_age() utility helper can help you work out the cache age of a file, or the plugin can handle it some other way.
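The cache-age decision is simple enough to sketch in a few lines of Python. This mirrors what gs_utils_get_file_age() is used for; the helper itself is illustrative:

```python
# Sketch: the cache-age decision described above. A missing file must always
# trigger a download; otherwise compare the file's age to the cache age.
import os
import time

def needs_refresh(path, cache_age):
    if not os.path.exists(path):
        return True                     # no data at all: always download
    return time.time() - os.path.getmtime(path) > cache_age

# At startup the loader passes an effectively infinite cache age, so only a
# missing file triggers a download; the daily call passes roughly 24 hours.
```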

For the Flatpak plugin we just make sure the AppStream metadata exists at startup, which allows us to show search results in the UI. If the metadata did not exist (e.g. if the user had added a remote using the command line without gnome-software running) then we would show a loading screen with a progress bar before showing the main UI. On fast connections we should only show that for a couple of seconds, but it’s a good idea to try to avoid that if at all possible in the plugin.

Once per day the gs_plugin_refresh() method is called again, but this time with GS_PLUGIN_REFRESH_FLAGS_PAYLOAD set. This is where the Flatpak plugin would download any OSTree trees (but not do the deploy step) so that the applications can be updated live in the details panel without having to wait for the download to complete. In a similar way, the fwupd plugin downloads the tiny LVFS metadata with GS_PLUGIN_REFRESH_FLAGS_METADATA and then downloads the large firmware files themselves only when the GS_PLUGIN_REFRESH_FLAGS_PAYLOAD flag is set.

If the @app parameter is set for gs_plugin_download_file() then the progress of the download is automatically proxied to the UI elements associated with the application, for instance the install button would show a progress bar in the various different places in the UI. For a refresh there’s no relevant GsApp to use, so we’ll leave it NULL which means something is happening globally which the UI can handle how it wants, for instance showing a loading page at startup.

gboolean
gs_plugin_refresh (GsPlugin *plugin,
                   guint cache_age,
                   GsPluginRefreshFlags flags,
                   GCancellable *cancellable,
                   GError **error)
{
  const gchar *metadata_fn = "/var/cache/example/metadata.xml";
  const gchar *metadata_url = "";

  /* this is called at startup and once per day */
  if ((flags & GS_PLUGIN_REFRESH_FLAGS_METADATA) > 0) {
    g_autoptr(GFile) file = g_file_new_for_path (metadata_fn);

    /* is the metadata missing or too old */
    if (gs_utils_get_file_age (file) > cache_age) {
      if (!gs_plugin_download_file (plugin,
                                    NULL, /* app */
                                    metadata_url,
                                    metadata_fn,
                                    cancellable,
                                    error)) {
        /* it's okay to fail here */
        return FALSE;
      }
      g_debug ("successfully downloaded new metadata");
    }
  }

  /* this is called when the session is idle */
  if ((flags & GS_PLUGIN_REFRESH_FLAGS_PAYLOAD) > 0) {
    /* FIXME: download any required updates now */
  }

  return TRUE;
}

Note, if the downloading fails it’s okay to return FALSE; the plugin loader continues to run all plugins and just logs an error to the console. We’ll be calling into gs_plugin_refresh() again in only a few hours, so there’s no need to bother the user. For actions like gs_plugin_app_install() we do the same thing, but we also save the error on the GsApp itself so that the UI is free to handle that how it wants, for instance by showing a GtkDialog window.

Step 2: Configuring Jenkins

Jenkins is one of the major parts of setting up Poor Man’s CI. Let’s look into how Jenkins can be configured and how we can make it automate tasks.

Jenkins can be downloaded for the OS you are using from their website. After downloading, the mistake I made was using Jenkins with Global Credentials; as pointed out by lsedlar on the channel, because of this I was not able to get the “Trigger by URL” option in the project.

Initial configuration is covered by lsedlar in his blog. I will be covering the extra configuration needed to have it working for local development. First and foremost is authentication; this can be done via Manage Jenkins –> Configure Global Security.


Give Read, View and Discover permissions to anonymous, then add another user and give all the permissions to that user. You need to restart the Jenkins service:

sudo systemctl restart jenkins.service

On the web UI, Jenkins will ask you to sign in; create a user with the username you gave all the permissions to and log in as that user. Now add a New Item and create a Freestyle Project. Configure the project: click on “This build is parameterized” and configure it according to Poor Man’s CI. Once that is done, select the option as shown below:


Once that is done you can use this token to trigger a build using a POST request. The trick is that you need to pass the parameters with the URL too. Next you need to tell Jenkins what to do and where to do it. Since we are dealing with Jenkins and Git we need a local repo or some URL to the Git repo. For every operation carried out in the repository, the directory should have the group and user set to jenkins; alternatively you can just put the repo in /var/lib/jenkins.
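Building that trigger URL by hand is error-prone, so here is a small sketch of assembling it programmatically. The hostname, job name, and tokens are placeholders for your own setup:

```python
# Sketch: build a Jenkins buildWithParameters trigger URL with the job
# token and build parameters in the query string. All values below are
# placeholders for your own Jenkins setup.
from urllib.parse import urlencode

def trigger_url(host, job, user, api_token, job_token, params):
    query = urlencode({"token": job_token, **params})
    return "http://%s:%s@%s/job/%s/buildWithParameters?%s" % (
        user, api_token, host, job, query)

url = trigger_url("localhost:8080", "pagureExp", "fhackdroid", "SECRET",
                  "BEEFCAFE",
                  {"REPO": "file:///var/lib/jenkins/new_one_three",
                   "BRANCH": "checking"})
# POST this URL (e.g. with curl -X POST or python-requests) to queue a build
```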

Download and install Git Plugin for Jenkins. Once that is done you need to point the git plugin to the repository you are going to test.


Once Jenkins knows where to perform the action, you need to tell it what to perform. This is done in the Build section of the configuration: select Execute Shell.

if [ -n "$REPO" -a -n "$BRANCH" ]; then
git remote rm proposed || true
git remote add proposed "$REPO"
git fetch proposed
git checkout origin/master
git config --global user.email ""
git config --global user.name "Your Name"
git merge --no-ff "proposed/$BRANCH" -m "Merge PR"
fi

We are almost done; the last thing we need is an auth token for the user. Go to Manage Jenkins –> Manage Users and get the API token for the user. Make sure the branch you are passing as a parameter exists in the repository. Let's trigger the build using cURL.


API Token: 728507950f65eec1d77bdc9c2b09e14b



curl -X POST http://fhackdroid:728507950f65eec1d77bdc9c2b09e14b@localhost:8080/job/pagureExp/buildWithParameters\?token\=BEEFCAFE\&REPO\=file:///$\{JENKINS_HOME\}/new_one_three\&BRANCH\=checking\&cause\=200