February 08, 2016

Video: Fedora 23 LXC - Debian SID and CentOS 7 XFCE containers via X2Go

Being a LONG-TIME OpenVZ user, I've been avoiding LXC somewhat, mainly because it wasn't quite done yet. I thought I'd give it a try on Fedora 23 to see how well it works... and the answer is, surprisingly... fairly well. I made two screencasts (without sound). I just used the lxc-{whatever} tools rather than virt-manager. Both containers just use the default network config (DHCP handed out via dnsmasq provided by libvirtd), which is NAT'ed private addresses... and they were automatically configured and just worked.

Here's a list of all of the container OS Templates they offer on x86:

centos 6 amd64 default 20160205_02:16
centos 6 i386 default 20160205_02:16
centos 7 amd64 default 20160205_02:16
debian jessie amd64 default 20160204_22:42
debian jessie i386 default 20160204_22:42
debian sid amd64 default 20160207_11:58
debian sid i386 default 20160204_22:42
debian squeeze amd64 default 20160204_22:42
debian squeeze i386 default 20160204_22:42
debian wheezy amd64 default 20160204_22:42
debian wheezy i386 default 20160204_22:42
fedora 21 amd64 default 20160205_01:27
fedora 21 i386 default 20160205_01:27
fedora 22 amd64 default 20160205_01:27
fedora 22 i386 default 20160205_01:27
gentoo current amd64 default 20160205_14:12
gentoo current i386 default 20160205_14:12
opensuse 12.3 amd64 default 20160205_00:53
opensuse 12.3 i386 default 20160205_00:53
oracle 6.5 amd64 default 20160205_11:40
oracle 6.5 i386 default 20160205_11:40
plamo 5.x amd64 default 20160207_11:59
plamo 5.x i386 default 20160207_13:13
ubuntu precise amd64 default 20160205_03:49
ubuntu precise i386 default 20160205_03:49
ubuntu trusty amd64 default 20160205_03:49
ubuntu trusty i386 default 20160205_03:49
ubuntu trusty ppc64el default 20160201_03:49
ubuntu vivid amd64 default 20160205_03:49
ubuntu vivid i386 default 20160205_03:49
ubuntu wily amd64 default 20160205_03:49
ubuntu wily i386 default 20160205_03:49
ubuntu xenial amd64 default 20160205_03:49
ubuntu xenial i386 default 20160205_03:49

The first one shows the basics of LXC installation on Fedora 23 (per their wiki page on the subject), as well as creating a Debian SID container, getting it going, installing a lot of software on it (including XFCE and most common desktop software), accessing it via X2Go, and configuring XFCE the way I like it. This one was made on my home laptop, and since my network is a bit slow I cut out a few long portions where packages were downloading and installing, but everything else is there... yes, including quite a bit of waiting for stuff to happen.

<video controls="controls" height="454" poster="/files/vp9/lxc-on-fedora-23-debian-sid-GUI-container.png" preload="none" src="/files/vp9/lxc-on-fedora-23-debian-sid-GUI-container.webm" width="720"></video>
lxc-on-fedora-23-debian-sid-GUI-container.webm (25 MB, ~41.5 minutes)

The second video is very similar to the first, but it is a remote ssh session with my work machine (where the network is way faster). It shows making a CentOS 7 container, installing XFCE and the same common desktop software, connecting to it via X2Go using an ssh proxy, and configuring XFCE how I like it. It was done in a single, unedited take and includes a bit of waiting as stuff downloads and installs... so you get the complete thing from start to finish.

<video controls="controls" height="436" poster="/files/vp9/lxc-on-fedora-23-centos-7-GUI-container.png" preload="none" src="/files/vp9/lxc-on-fedora-23-centos-7-GUI-container.webm" width="720"></video>
lxc-on-fedora-23-centos-7-GUI-container.webm (22.7 MB, ~31 minutes)

I recorded the screencasts with vokoscreen at 25 frames per second @ slightly larger than 720p resolution... and then converted them to webm (VP9) with ffmpeg @ 200 kbit video. They compressed down amazingly well. I recommend playback in full-screen as the quality is great. Enjoy!


February 07, 2016

DevConf 2016 is over
DevConf 2016 is over. I was mining social networks to get something interesting onto the screens in the corridors. DevConf was really big and it became a trending topic. Unfortunately, this caused hijacking of the #devconfcz hashtag. I am really sorry for that. I have collected all the pictures from social networks into this post.
New release: usbguard-0.4

I’m not dead yet. And the project is still alive too. It’s been a while since the last release, so it’s time to do another. The biggest improvements were made to the rule language by introducing the rule conditions and to the CLI by introducing a new command, usbguard, for interacting with a running USBGuard daemon instance and for generating initial policies.

Here’s an example of what you can do with the new rule conditions feature:

allow with-interface one-of { 03:00:01 03:01:01 } if !rule-applied

This one-liner in the policy will ensure that a USB keyboard will be authorized only once. If somebody connects another USB keyboard, it won't be allowed. Of course, if you disconnect yours, then that one won't be authorized either when connected again. Another, somewhat similar example is this:

allow with-interface one-of { 03:00:01 03:01:01 } if !allowed-matches(with-interface one-of { 03:00:01 03:01:01 })

That one will allow a USB keyboard to be connected only if no other is currently connected. You can narrow down the match to a specific type, serial number, or whatever else the rule language supports, including other conditions.
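For reference, a policy is just such one-liners in a text file. Here's a minimal sketch that writes the first example rule to a local rules.conf (the daemon normally reads /etc/usbguard/rules.conf; 03:00:01 and 03:01:01 are the HID keyboard interface types from the examples above):

```shell
# Write a minimal hypothetical policy file for illustration.
cat > rules.conf <<'EOF'
# authorize a USB keyboard only once per daemon run
allow with-interface one-of { 03:00:01 03:01:01 } if !rule-applied
EOF
grep -c '^allow' rules.conf
```

Rules are evaluated top to bottom and the first match wins, so order matters when you add more lines.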

Another feature that improves the usability of USBGuard is the new command-line interface, which allows you, among other things, to generate initial policies for your system. To quickly generate a policy based on all the connected USB devices, run:

# usbguard generate-policy > rules.conf
# vi rules.conf
(review/modify the generated rule set)
# cp rules.conf /etc/usbguard/rules.conf
# chmod 0600 /etc/usbguard/rules.conf
# systemctl restart usbguard

There are some options to tweak the resulting policy. See the usbguard(1) manual page for further details.

And last but not least, thanks to Philipp Deppenwiese, USBGuard is now packaged for the Gentoo Linux distribution.

Major changes

  • The daemon is now capable of dropping process capabilities and uses a seccomp based syscall whitelist. Options to enable these features were added to the usbguard-daemon command.
  • Devices connected at the start of the daemon are now recognized and the DevicePresent signal is sent for each of them.
  • New configuration options for setting the implicit policy target and how to handle the present devices are now available.
  • String values read from the device are now properly escaped and length limits on these values are enforced.
  • The library API was extended with the Device and DeviceManager classes.
  • Implemented the usbguard CLI, see usbguard(1) for available commands.
  • Initial authorization policies can be now easily generated using the usbguard generate-policy command.
  • Extended the rule language with rule conditions. See usbguard-rules.conf(5) for details.
  • Moved logging code into the shared library. You can use static methods of the Logger class to configure logging behaviour.
  • Removed the bundled libsodium and libqb libraries.
  • Fixed several bugs.
  • Resolved issues: #46, #45, #41, #40, #37, #32, #31, #28, #25, #24, #21, #16, #13, #9, #4
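As a sketch of the new configuration options mentioned above, a usbguard-daemon.conf fragment might look like the following. The option names (ImplicitPolicyTarget, PresentDevicePolicy) and values are my recollection of the daemon's configuration keys, so verify them against the usbguard-daemon.conf manual page before relying on them:

```shell
# Hypothetical usbguard-daemon.conf fragment, written to a local file
# for illustration; check option names against your installed man page.
cat > usbguard-daemon.conf <<'EOF'
# Block any device the rule set does not explicitly match
ImplicitPolicyTarget=block
# Apply the rule set to devices already present at daemon start
PresentDevicePolicy=apply-policy
EOF
grep -c '=' usbguard-daemon.conf
```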

WARNING: Backwards incompatible changes

  • The device hashing procedure was altered and generates different hash values. If you are using the hash attribute in your rules, you’ll have to update the values.
  • The bundled libsodium and libqb were removed. You’ll have to compile and install them separately if your distribution doesn’t provide them as packages.


If you are using Fedora or the USBGuard Copr repository, run:

$ sudo dnf update usbguard


Tarballs can be downloaded here:

February 06, 2016

Look over the fence – StartUp Weekend Phnom Penh

Linux and Free Software do not play the same role in South East Asia as they do in Europe or North America. To change that at least a bit, I came here. Asian culture definitely plays a role, and this was often discussed. But it plays less of a vital role than we think, and as the linked article shows, we will not find an easy solution for the cultural differences. From my perspective it is less necessary that we adapt; most Asians I met are willing to accept the differences and can live with them.
The people in South East Asia have other interests, and that is the problem; Google has more success here with its Google Developer Community. There are only a few successful FOSS events, and most of them are one-timers, whereas the BarCamp scene is huge: BarCamp Yangon has 14,000 visitors, Bangkok 7,000, Phnom Penh and HCMC 5,000. If you go to them you will meet a lot of tech-interested people, but you will realize that the interest in start-up topics is much higher. I am not the only one who realized that; Pravin had the same experience on his visit to BarCamp Yangon.

So it was time for me to dig deeper into the start-up scene and see whether we have common areas and whether we could interconnect to our mutual benefit. So I participated in StartUp Weekend Phnom Penh 2016 as a mentor. What is StartUp Weekend? We would call it a hackfest: people come together to work on a specific problem. At the start, some people pitch (we would call it a lightning talk) a problem they want to solve, and then people can vote for it and join that team. Then they work on the problem with the goal of building a business out of it, and at the end they give a final pitch before a jury.

At this StartUp Weekend, 2 of the ideas were directly open source related, but both teams did not understand some principles of the open source and free software world, and for that they will be less successful. But there were other interesting ideas of a technical nature. One was the idea of uniting different smart cards into one, but because of the limited storage on a magnetic card it later became a smartphone app and solved the problem only partly. The other one started as an app from the beginning. All in all there were 11 teams, with ideas like a student card, an idea about used books, a medical consultation app for people living in the provinces, and some more.

So there are overlaps with the free software community; in particular, there are special editions for developers. We should engage in these to win new users and contributors. We have to go different ways in Asia, that is for sure. All in all it was an interesting experience to look over the fence.

Keystone Implied roles with CURL

Keystone now has Implied Roles.  What does this mean?  Let's say we define the Admin role to imply the Member role.  Now, if you assign someone Admin on a project, they are automatically assigned the Member role on that project implicitly.

Let’s test it out:

Since we don’t yet have client or CLI support, we’ll have to make due with curl and jq for now.

This uses the same approach as Keystone V3 Examples.
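The token-request.json used below isn't shown in this post; a sketch of a standard Keystone v3 password-auth request body might look like this (user, domain, project, and password are placeholders to adjust for your cloud):

```shell
# Hypothetical token-request.json for the v3 password auth method.
cat > token-request.json <<'EOF'
{
  "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "admin",
          "domain": { "name": "Default" },
          "password": "changeme"
        }
      }
    },
    "scope": {
      "project": {
        "name": "admin",
        "domain": { "name": "Default" }
      }
    }
  }
}
EOF
# sanity-check that the file is well-formed JSON
python3 -m json.tool token-request.json > /dev/null && echo "valid JSON"
```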

. ~/adminrc

export TOKEN=`curl -si -d @token-request.json -H "Content-type: application/json" $OS_AUTH_URL/auth/tokens | awk '/X-Subject-Token/ {print $2}'`

export ADMIN_ID=`curl -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/roles?name=admin | jq --raw-output '.roles[] | {id}[]'`

export MEMBER_ID=`curl -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/roles?name=_member_ | jq --raw-output '.roles[] | {id}[]'`

curl -X PUT -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/roles/$ADMIN_ID/implies/$MEMBER_ID

curl  -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/role_inferences 
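The TOKEN export above scrapes the X-Subject-Token header out of curl's -i output with awk; here is the same scrape exercised standalone on a canned response (the headers and token value below are fabricated for illustration):

```shell
# Fake response headers, standing in for `curl -si ... $OS_AUTH_URL/auth/tokens`.
cat > headers.txt <<'EOF'
HTTP/1.1 201 Created
X-Subject-Token: gAAAAABexampletokenvalue
Content-Type: application/json
EOF
# Same awk pattern as the TOKEN export above: print field 2 of the header line.
TOKEN=$(awk '/X-Subject-Token/ {print $2}' headers.txt)
echo "$TOKEN"
```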

Now, create a new user and assign them only the admin role.

openstack user create Phred
openstack user show Phred
| Field     | Value                            |
| domain_id | default                          |
| enabled   | True                             |
| id        | 117c6f0055a446b19f869313e4cbfb5f |
| name      | Phred                            |
$ openstack  user set --password-prompt Phred
User Password:
Repeat User Password:
$ openstack project list
| ID                               | Name  |
| fdd0b0dcf45e46398b3f9b22d2ec1ab7 | admin |
openstack role add --user 117c6f0055a446b19f869313e4cbfb5f --project fdd0b0dcf45e46398b3f9b22d2ec1ab7 e3b08f3ac45a49b4af77dcabcd640a66

Copy token-request.json and modify the values for the new user.

 curl  -d @token-request-phred.json -H "Content-type: application/json" $OS_AUTH_URL/auth/tokens | jq '.token | {roles}'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1643  100  1098  100   545  14742   7317 --:--:-- --:--:-- --:--:-- 14837
{
  "roles": [
    {
      "id": "9fe2ff9ee4384b1894a90878d3e92bab",
      "name": "_member_"
    },
    {
      "id": "e3b08f3ac45a49b4af77dcabcd640a66",
      "name": "admin"
    }
  ]
}

February 05, 2016

Giving up democracy to get it back

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Yanis Varoufakis on motorbike

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put the citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged - by law and by contract - to service the needs of their shareholders and advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchair clicking to "Like" whales or trees are having hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether it is raising funds for worthwhile causes, scrutinizing the work of our public institutions or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world (but please feel free to alert your friends with a tweet).

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it the day before launching his DiEM25 project in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way around: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

Still not convinced? Read about Amazon secretly removing George Orwell's 1984 and Animal Farm from Kindles while people were reading them, Apple filtering the availability of apps with a pro-Life bias and Facebook using algorithms to identify homosexual users.

testing flannel

I noticed today (maybe I’ve noticed before, but forgotten) that the version of flannel in Fedora 23 is older than what’s available in CentOS. It looks like this is because no one tested the more-recent version of flannel in Fedora’s Bodhi, a pretty awesome application for testing packages.

Why not? Maybe because it isn't always obvious how to test a package like flannel, but here's how I tested it, and added karma to the package in Bodhi.

I use flannel when I cluster atomic hosts together with kubernetes. I typically use the release versions of centos or fedora atomic, but the fedora project also provides an ostree image built from fedora’s updates-testing repo, where packages await karma from testers.

I prepare three atomic hosts with vagrant:

[my-laptop]$ git clone https://github.com/jasonbrooks/contrib.git

[my-laptop]$ cd contrib/ansible/vagrant

[my-laptop]$ export DISTRO_TYPE=fedora-atomic

[my-laptop]$ vagrant up --no-provision --provider=libvirt

Next, I rebase the trio of hosts to the testing tree:

[my-laptop]$ for i in {kube-node-1,kube-master,kube-node-2}; do vagrant ssh $i -c "sudo rpm-ostree rebase fedora-atomic:fedora-atomic/f23/x86_64/testing/docker-host"; done

[my-laptop]$ vagrant reload

Reloading the hosts switches them to the testing image, and runs the ansible provisioning scripts that configure the kubernetes cluster. Now to ssh to one of the boxes, confirm that I’m running an image with the newer flannel, and then run a test app on the cluster to make sure that everything is in order:

[my-laptop]$ vagrant ssh kube-master

[kube-master]$ rpm -q flannel

[kube-master]$ sudo atomic host status
  TIMESTAMP (UTC)         VERSION   ID             OSNAME            REFSPEC                                                        
* 2016-02-03 22:47:33     23.63     65cc265ae1     fedora-atomic     fedora-atomic:fedora-atomic/f23/x86_64/testing/docker-host     
  2016-01-26 18:16:33     23.53     22f0b303da     fedora-atomic     fedora-atomic:fedora-atomic/f23/x86_64/docker-host

[kube-master]$ sudo atomic run projectatomic/guestbookgo-atomicapp

That last command pulls down an atomicapp container that deploys a guestbook example app from the kubernetes project. The app includes two redis slaves, a redis master, and a trio of frontend apps that talk to those backend pieces. The bits of the app are spread between my two kubelet nodes, with flannel handling the networking in-between. If this app is working, then I’m confident that flannel is working.

[kube-master]$ kubectl get svc guestbook
guestbook                 3000/TCP   app=guestbook   55m

[kube-master]$ exit

[my-laptop]$ vagrant ssh kube-node-1

[kube-node-1]$ curl
# Server

The app is working, flannel appears to be doing its job, so I marched off to bodhi to offer up my karma:

instant karma

PHPUnit 5.2

RPMs of PHPUnit version 5.2 are available in the remi repository for Fedora ≥ 21 and in the remi-test repository for Enterprise Linux (CentOS, RHEL...).

Documentation:

This new major version requires PHP ≥ 5.6.

Installation, Fedora:

dnf --enablerepo=remi install phpunit

Installation, Enterprise Linux:

yum --enablerepo=remi,remi-test,remi-php56 install phpunit

Notice: this tool is an essential component of PHP QA in Fedora.

NayuOS Review – Free & Open Source Alternative To Chrome OS With Node.js And Without Google Services
Note: NayuOS has the same design as Chrome OS and many differences under the hood. Introduction: NayuOS is a free and open source operating system and a fork of Chrome OS without proprietary software like Adobe Flash, multimedia codecs, and Google services. "Nayu" is a Chinese word that means "open the Universe".
Fedora nominated for Blackshield Awards
The Fedora Project, the ISECOM, and companies like audius GmbH have for years enabled me to teach security along with Fedora and the OSSTMM in India [1], [2], [3] ... a long way since my first Indian event, foss.in 2009.

Seems like it paid off - making it to the finalists of the nullcon Blackshield Awards is just WOW ;)

Do not forget to vote! nullcon 2016 Blackshield Award Voting
Support for 8/10/12 bit color depths in HandBrake!

HandBrake is now using a freshly built x265 library that enables full color depth support at 8, 10 and 12 bits. You can now convert videos in these formats! This has been enabled in the 64-bit builds of the x265 library, for both Fedora 23 and CentOS/RHEL 7.

Also, NUMA support has been added to the libraries. Just by chance I have an SGI UV 200 (the predecessor of the current SGI UV 300) lying around.


This goes along with the 10 bit support for x264 that was enabled some time ago; so I’ve made some adjustments to the libraries and now there is more consistency between x264/x265. Both are loaded at runtime by HandBrake:

$ ls -alghs /usr/lib64/libx26*
668K -rwxr-xr-x. 1 root 667K Feb  5 09:55 /usr/lib64/libx264_main10.so
764K -rwxr-xr-x. 1 root 763K Feb  5 09:55 /usr/lib64/libx264.so.148
3.4M -rwxr-xr-x. 1 root 3.4M Feb  5 09:05 /usr/lib64/libx265_main10.so
3.4M -rwxr-xr-x. 1 root 3.4M Feb  5 09:05 /usr/lib64/libx265_main12.so
3.2M -rwxr-xr-x. 1 root 3.2M Feb  5 09:05 /usr/lib64/libx265.so.68
programs won’t start

So recently I got pointed to an aging blocker bug that needed attention, since it negatively affected some rawhide users: they weren’t able to launch certain applications. Three known broken applications were gnome-terminal, nautilus, and gedit. Other applications worked, and even these 3 applications worked in wayland, but not Xorg. The applications failed with messages like:

Gtk-WARNING **: cannot open display:


org.gnome.Terminal[2246]: Failed to parse arguments: Cannot open display:

left in the log. These messages mean that the programs are unable to create a connection to the X server. There are only a few reasons this error message could get displayed:

    — The socket associated with the X server has become unavailable. In the old days this could happen if, for instance, the socket file got deleted from /tmp. Adam Jackson fixed the X server a number of years ago, to also listen on abstract sockets to avoid that problem. This could also happen if SELinux was blocking access to the socket, but users reported seeing the problem even with SELinux put in permissive mode.
    — The X server isn’t running. In our case, clearly the X server is running since the user can see their desktop and launch other programs
    — The X server doesn’t allow the user to connect because that user wasn’t given access, or that user isn’t providing credentials. These programs are getting run as the same user who started the session, so that user definitely has access.
    — GDM doesn’t require users to provide separate credentials to use the X server, so that’s not it either.
    — $DISPLAY isn’t set, so the client doesn’t know which X server to connect to. This is the only likely cause of the problem. Somehow $DISPLAY isn’t getting put in the environment of these programs.

So the next question is, what makes these applications “special”? Why isn’t $DISPLAY set for them, but other applications work fine? Every application has a .desktop file associated with it, which is a small config file giving information about the application (name, icon, how to run it, etc). When a program is run by gnome-shell, gnome-shell uses the desktop file of that program to figure out how to run it. Most of the malfunctioning programs have this in their desktop files:

DBusActivatable=true

That means that the shell shouldn’t try to run the program directly, instead it should ask the dbus-daemon to run the program on the shell’s behalf. Incidentally, the dbus-daemon then asks systemd to run the program on the dbus-daemon’s behalf. That has lots of nice advantages, like automatically integrating program output to the journal, and putting each service in its own cgroup for resource management. More and more programs are becoming dbus activatable because it’s an important step toward integrating systemd’s session management features into the desktop (though we’re not fully there yet, that initiative should become a priority at some point in the near-to-mid future). So clearly the issue is that the dbus-daemon doesn’t have $DISPLAY in its activation environment, and so programs that rely on D-Bus activation aren’t able to open a display connection to the X server. But why?
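To make the mechanism above concrete, here is a sketch of a minimal D-Bus-activatable desktop file, written to a local file for illustration. The org.example.App name and the Exec line are placeholders, not from any real application; the key line for this discussion is DBusActivatable=true:

```shell
# Hypothetical minimal desktop file for a D-Bus-activatable app.
cat > org.example.App.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Example App
Exec=example-app --gapplication-service
DBusActivatable=true
EOF
grep '^DBusActivatable' org.example.App.desktop
```

With that key set, the shell hands launching over to the dbus-daemon instead of spawning the Exec line directly, which is exactly why these apps depend on the daemon's activation environment.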

When a user logs in, GDM will start a dbus-daemon for that user before it starts the user session. It explicitly makes sure that DISPLAY is in the environment when it starts the dbus-daemon so things should be square. They’re obviously not, though, so I decided to try to reproduce the problem. I turned off my wayland session and instead started up an Xorg (actually I used a livecd since I knew for sure the livecd could reproduce the problem) and then looked at a process listing for the dbus-daemon:

/usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation

This wasn’t run by GDM ! GDM uses different command line arguments that these when it starts the dbus-daemon. Okay, so if it wasn’t getting started by GDM it had to be getting started by the systemd during the PAM conversation right before GDM starts the session. I knew this, because there isn’t really thing other than systemd that runs after the user hits enter at the login screen before gdm starts the user’s session. Also, the command line arguments above in the dbus-daemon instance say ‘–systemd-activation’ which is pretty telling. Furthermore, if a dbus-daemon is already running GDM will avoid starting a second one, so this all adds up. I was surprised that we were using the so called “user bus” instead of session bus already in rawhide. But, indeed, running

$ systemctl --user status dbus.service
● dbus.service - D-Bus User Message Bus
Loaded: loaded (/usr/lib/systemd/user/dbus.service; static; vendor preset: enabled)
Active: active (running) since Tue 2016-02-02 15:04:41 EST; 2 days ago

show’s we’re clearly starting the dbus-daemon before GDM starts the session. Of course, this poses the problem. The dbus-daemon can’t possibly have DISPLAY set in its environment if it’s started before the X server is started. Even if it “wanted” to set DISPLAY it couldn’t even know what value to use, since there’s no X server running yet to tell us the DISPLAY !

So what’s the solution? Many years ago I added a feature to D-Bus to allow a client to change the environment of future programs started by the dbus-daemon. This D-Bus method call, UpdateActivationEnvironment, takes a list of key-value pairs that are just environment variables which get put in the environment of programs before they’re activated. So, the fix is simple, GDM just needs to update the bus activation environment to include DISPLAY as soon as it has a DISPLAY to include.

Special thanks to Sebastian Keller, who figured out the problem before I got around to investigating the issue.

February 04, 2016

Fedora Community Booth Live Stream from SCALE14x

We streamed live from the Fedora booth at SCALE14x to give people an inside look at SCALE from the expo floor. We had the chance to talk with many people, including Cory Doctorow. So here we present all the hours of video we streamed and recorded from the expo hall floor.


PHP version 5.5.32, 5.6.18 and 7.0.3

RPMs of PHP version 7.0.3 are available in the remi-php70 repository for Fedora and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 5.6.18 are available in the remi repository for Fedora ≥ 21 and in the remi-php56 repository for Fedora and Enterprise Linux.

RPMs of PHP version 5.5.32 are available in the remi repository for Fedora 20 and in the remi-php55 repository for Enterprise Linux.

PHP version 5.4 has reached its end of life and is no longer maintained by the project. Given the very large number of downloads by users of my repository, this version is still available in the remi repository for Enterprise Linux (RHEL, CentOS...) and includes security fixes (from version 5.5.31). Upgrading to a maintained version is strongly recommended. (A new version with the security fixes from 5.5.32 should be available soon.)

These versions are also available as Software Collections.

These versions fix some security bugs, so updating is strongly recommended.

Version announcements:

The 5.5.27 release was the last planned release containing regular bugfixes. All subsequent releases contain only security-relevant fixes, for the term of one year (until July 2016).

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.0 installation (simplest):

yum-config-manager --enable remi-php70
yum update

Parallel installation of version 7.0 as Software Collection (x86_64 only):

yum install php70

Replacement of default PHP by version 5.6 installation (simplest):

yum-config-manager --enable remi-php56
yum update

Parallel installation of version 5.6 as Software Collection (x86_64 only):

yum install php56

Replacement of default PHP by version 5.5 installation (simplest):

yum-config-manager --enable remi-php55
yum update

Parallel installation of version 5.5 as Software Collection (x86_64 only):

yum --enablerepo=remi install php55

And soon in the official updates:

To be noted:

  • EL7 RPMs are built using RHEL 7.2
  • EL6 RPMs are built using RHEL 6.7
  • a lot of new extensions are also available, see the PECL extension RPM status page

For more information, read:

Base packages (php)

Software Collections (php55 / php56 / php70)

Rio Design Hackfest

Rio hackfest final photo

A couple of weeks ago, I had the pleasure of attending a design hackfest in Rio de Janeiro, which was hosted by the good people at Endless. The main purpose of the event was to foster a closer relationship between the design teams at GNOME and Endless. Those of us on the GNOME side also wanted to learn more about Endless users, so that we can support them better.

The first two days of the event were taken up with field visits, first at a favela in Rio itself, and second in a more rural setting about an hour and a half's drive out of town. In both cases we got to meet Endless field testers and ask them questions about their lives and computer usage.

After the field trips, it was time to hit the office for three days of intensive design discussions. We started from a high level, discussing the background of GNOME 3, and looking at the similarities and differences between Endless’s OS and GNOME 3. Then, over the course of three days, we focused on specific areas where we have a mutual interest, like the shell, search, Software and app stores, and content apps like Photos, Music and Videos.

DSCF9796 DSCF9812 DSCF0034 DSCF0099 DSCF0117 DSCF9903 DSCF9904 DSCF9921 DSCF0017

All in all, the event was a big success. Everyone at Endless was really friendly and easy to work with, and we had lots of similar concerns and aspirations. We’ve started on a process of working closer together, and I expect there to be more joint design initiatives in the future.

I’d like to give a big thank you to Endless for hosting, and for sponsoring the event. I’d also like to thank the GNOME Foundation for providing travel sponsorship.


DNF 1.1.6 and DNF-PLUGINS-CORE 0.1.16 Released

Another version of DNF and DNF-PLUGINS-CORE has been released. The recently released DNF adds socks5 proxy support, and repoquery has new --unneeded and --recent switches available. Additionally, a lot of bugs have been fixed. For more information see the DNF and plugins release notes.

Australians stuck abroad and alleged sex crimes

Two Australians have achieved prominence (or notoriety, depending on your perspective) for the difficulty in questioning them about their knowledge of alleged sex crimes.

One is Julian Assange, holed up in the embassy of Ecuador in London. He is back in the news again today thanks to a UN panel finding that the UK is effectively detaining him, unlawfully, in the Ecuadorian embassy. The effort made to discredit and pursue Assange and other disruptive technologists, such as Aaron Swartz, has an eerie resemblance to the way the Inquisition hunted witches in the middle ages and beyond.

The other Australian stuck abroad is Cardinal George Pell, the most senior figure in the Catholic Church in Australia. The Royal Commission into child sex abuse by priests has heard serious allegations claiming the Cardinal knew about and covered up abuse. This would appear far more sinister than anything Mr Assange is accused of. Like Mr Assange, the Cardinal has been unable to travel to attend questioning in person. News reports suggest he is ill and can't leave Rome, although he is being accommodated in significantly more comfort than Mr Assange.

If you had to choose, which would you prefer to leave your child alone with?

Justin W. Flory: How do you Fedora?

We recently interviewed Justin W. Flory on how he uses Fedora. This is part of a series on the Fedora Magazine where we profile Fedora users and how they use Fedora to get things done. If you are interested in being interviewed for a further installment of this series, you can contact us on the feedback form.

Who is Justin W. Flory?

HDYF - Desktop

Justin W. Flory is a student majoring in systems administration and networking at the Rochester Institute of Technology (RIT). His minor is in Free and Open Source Software. He has professional training as a barista and supports direct trade coffee. “I am also a coffee fanatic,” Flory said. “I can make some pretty fantastic espresso with the right equipment.” Justin has been fascinated with computers since a young age. He credits Minecraft with changing his life. “Minecraft is a game that has changed my life, beginning with my own early experience with entrepreneurship and later my experience with the Spigot community, which landed me the opportunity to go to London this past July to attend the annual Minecraft convention, MINECON. It also indirectly introduced me to Linux and Fedora.”

Flory is not a big fan of movies and he doesn’t watch TV, but his favorite two movies are Inception and The Matrix. Justin’s favorite food is smoked salmon with cream cheese on crackers.

The Fedora Community

Justin wanted to become an Ambassador for most of 2014, but did not take the leap until after attending Flock 2015. At Flock 2015, Flory was pulled in by the tight-knit nature of the Fedora community. “I could see that the friendships made from Fedora went beyond IRC, lines of code, and more into real life.”

When asked about one thing he would like people to know about the Fedora Project, Flory said, “You don’t have to be a code whiz to be a Fedoran. There are so many different places you can help.” Justin is also inspired by being part of a community that is passionate and dedicated to making a positive impact on the world.

The person most responsible for helping Flory become involved is Remy DeCausemaker. “I have to thank Remy DeCausemaker for opening the Fedora door for me,” Justin said. DeCausemaker gave him advice with regards to attending Flock 2015 and has helped Flory become more involved in the Fedora Community. He also wanted to thank Paul Frields, Ryan Lerch, Gabriele Trombini, Bee Padalkar, Patrick Uiterwijk, and many more.

Justin contributes mostly through Fedora Community Operations (CommOps) and Fedora Marketing. He finds CommOps exciting because he gets a bird’s eye view of the entire Fedora Project. Flory stated, “I’m still learning the ropes, but I feel like I’m able to see the big map of Fedora, observe where everyone else is, and help figure out how to make sure everyone can land and take off safely.”

When asked about the advice he would give people who are thinking about becoming involved in the Fedora Project, Flory was very emphatic. “Do it! Don’t wait!” He recommends joining #fedora-join on Freenode and asking for some help finding a place to contribute.

What hardware?

Flory has several pieces of hardware. He has a self-built desktop he made in 2014, a laptop, and another self-built server made in 2015. His desktop has an AMD FX-6300 processor coupled with 8GB of RAM and an Nvidia GeForce GTX 660 Ti graphics card. The boot drive is an SSD, but he pairs that with a traditional 1TB hard drive. The FOSS Fighter, as he calls it, is currently running Fedora 23 Workstation.

Justin’s laptop is a Toshiba C55-A equipped with an Intel Core i3 processor and 4GB of RAM. It is currently running Fedora 23 Workstation, but he is considering getting an upgrade or running Xfce due to the age of the laptop.

Justin W. Flory, how do you Fedora? The FOSS Fighter

Inside the FOSS Fighter

The server is another home-built system and is equipped with an Intel Core i7-4790S and 24GB of RAM. RIT provides a free RHEL license, so Flory runs RHEL 7 on the server. It currently hosts RITcraft, RIT’s official Minecraft server.

What software?

HDYF - Rhythmbox

As a music lover, Flory depends on Rhythmbox for playing his library and Scrobbling his plays to Last.fm and Libre.fm. He also makes use of MusicBrainz’s Picard to categorize, sort, and correct metadata for all of his music.

For messaging, he makes use of both IRC and Telegram. For IRC, Flory is a fan of HexChat and is active on Freenode, SpigotMC, and Espernet as jflory7. For personal messaging, Justin uses Telegram on his laptop using the desktop app or on his Android when he is without his laptop.

As a student, he depends on LibreOffice for his productivity suite. Writer helps him take notes in class, create PDFs, and handle other tasks. He makes use of Dropbox to keep his files synchronized on all of his devices.

bitmath-1.3.0 released

It’s been quite a while since I’ve posted any bitmath updates (bitmath is a Python module I wrote which simplifies many facets of interacting with file sizes in various units as Python objects). In fact, it seems that the last time I wrote about bitmath here was back in 2014 when 1.0.8 was released! So here is an update covering everything post-1.0.8 up to 1.3.0.

New Features

  • A command line tool, bitmath, you can use to do simple conversions right in your shell [docs]!
  • New utility function bitmath.parse_string for parsing a human-readable string into a bitmath object
  • New utility: argparse integration: bitmath.BitmathType. Allows you to specify arguments as bitmath types
  • New utility: progressbar integration: bitmath.integrations.BitmathFileTransferSpeed. A more functional file transfer speed widget
  • New bitmath module function: bitmath.query_device_capacity(). Create bitmath.Byte instances representing the capacity of a block device
    • This is my favorite enhancement
    • In an upcoming blog post I’ll talk about just how cool I thought it was learning how to code this feature
    • Conceptual and practical implementation topics included
  • The bitmath.parse_string() function now can parse ‘octet’ based units
  • New utility function: bitmath.best_prefix()
    • Return an equivalent instance which uses the best human-readable prefix-unit to represent it
    • This is way cooler than it may sound at the surface, I promise you
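The best_prefix() idea can be sketched in plain Python. This is a simplified illustration of the concept only, not bitmath's actual implementation: scale a raw byte count to the largest binary unit that keeps the value at or above 1.

```python
# Simplified illustration of the "best prefix" idea: scale a raw byte
# count down until it fits the largest binary unit >= 1.
PREFIXES = ["Byte", "KiB", "MiB", "GiB", "TiB", "PiB"]

def best_prefix(num_bytes):
    """Return (value, unit) using the best human-readable binary prefix."""
    value = float(num_bytes)
    for unit in PREFIXES:
        # abs() keeps negative values working too (cf. bug #55 below)
        if abs(value) < 1024 or unit == PREFIXES[-1]:
            return value, unit
        value /= 1024.0

print(best_prefix(2048))           # (2.0, 'KiB')
print(best_prefix(3 * 1024 ** 3))  # (3.0, 'GiB')
```

The real library returns a bitmath instance rather than a tuple, but the unit-selection logic is the same idea.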

Bug Fixes

  • #49 – Fix handling unicode input in the bitmath.parse_string function. Thanks drewbrew!
  • #50 – Update the setup.py script to be python3.x compat. Thanks ssut!
  • #55 “best_prefix for negative values”. Now bitmath.best_prefix() returns correct prefix units for negative values. Thanks mbdm!


To help with the Fedora Python3 Porting project, bitmath now comes in two variants in the Fedora/EPEL repositories (BZ1282560). The Fedora and EPEL updates are now in the repos. TIP: python2-bitmath will obsolete the python-bitmath package. Do a ‘dnf/yum update’ operation just to make sure you catch it.

The PyPi release has already been pushed to stable.

Back in bitmath-1.0.8 we had 150 unit tests. The latest release has almost 200! Go testing! :confetti:

February 03, 2016

On Subresource Certificate Validation

Ryan Castellucci has a quick read on subresource certificate validation. It is accurate; I fixed this shortly after joining Igalia. (Update: This was actually in response to a bug report from him.) Run his test to see if your browser is vulnerable.

Epiphany, Xombrero, Opera Mini and Midori […] were loading subresources, such as scripts, from HTTPS servers without doing proper certificate validation. […] Unfortunately Xombrero and Midori are still vulnerable. Xombrero seems to be dead, and I’ve gotten no response from them. I’ve been in touch with Midori, but they say they don’t have the resources to fix it, since it would require rewriting large portions of the code base in order to be able to use the fixed webkit.

I reported this to the Midori developers in late 2014 (private bug). It’s hard to overstate how bad this is: it makes HTTPS completely worthless, because an attacker can silently modify JavaScript loaded via subresources.

This is actually a unique case in that it’s a security problem that was fixed only thanks to the great API break, which has otherwise been the cause of many security problems. Thanks to the API break, we were able to make the new API secure by default without breaking any existing applications. (But this does no good for applications unable to upgrade.)

(A note to folks who read Ryan’s post: most mainstream browsers do silently block invalid certificates, but Safari will warn instead. I’m not sure which behavior I prefer.)

CentOS 7 Server Hardening Guide


So… you’ve just set up a shiny new server and you want to take measures to keep the bad guys out? Well, here I will give you a few tips on how to do just that.

This guide was written with CentOS 7.1 in mind but other up-to-date variants such as Fedora and RHEL should be pretty similar if not the same.

Hardening SSH (Secure Shell)


Most of you will be using this protocol as a means to remotely administer your Linux server, and you're right to. SSH is by far the best method to administer your server due to its use of encrypted communications, unlike its older cousins rlogin and telnet, which provide no secure method of communication.

Create a standard user

Use the ‘useradd’ command to add a username of your choice.

useradd YOURUSER

Set a password for your newly created user.

passwd YOURUSER

Add your user to the wheel group to enable that user to use the sudo command.

usermod -aG wheel YOURUSER

Create an authentication key


This method of authenticating with your server is much more secure than using a standard password. Part of this process requires you to create the key on the local machine you will be connecting from.

You will be asked if you would like to protect the key with a passphrase; I advise you to do this, but it's not mandatory.

Creating the key on Mac/Linux

ssh-keygen -b 4096
Press Enter to use the default names id_rsa and id_rsa.pub in /home/your_username/.ssh before entering your passphrase.

Upload your public key to your server

For Linux

ssh-copy-id YOURUSER@YOURSERVER

For Mac

On your server do.

mkdir -p ~/.ssh && chmod 700 ~/.ssh

From your Mac do the following making sure to substitute ‘youruser’ and ‘yourserver’.

scp ~/.ssh/id_rsa.pub YOURUSER@YOURSERVER:~/.ssh/authorized_keys

Now on to the configuration changes.

Open up the SSH config file for editing

In this section we will be performing the following actions

  • Disallowing root logins
  • Setting allowed users
  • Changing the default port
  • Disabling password authentication
  • Force protocol 2

You can replace nano with your favourite text editor such as vi.

sudo nano /etc/ssh/sshd_config

Disallow root logins.

Find the line that says

#PermitRootLogin yes

and change it to

PermitRootLogin no

Setting your user as an allowed user.

Add the following line to the bottom of your sshd_config file, substituting ‘YOURUSER’ with your newly created account.

AllowUsers YOURUSER

Change the default service port.

Find the line that says

Port 22

Change it to something other than 22, such as 22000.

Port 22000

On an SELinux-enabled system you will also need to allow the new port, e.g. with semanage port -a -t ssh_port_t -p tcp 22000, and open it in your firewall.

Disabling password authentication

We can disable password authentication because we will now be using our newly created key pair to authenticate to the server.

Look for the line that has

#PasswordAuthentication yes

and replace it with the below line.

PasswordAuthentication no

Only use SSH protocol 2

SSH protocol 1 is generally considered obsolete, as it is vulnerable and old, so let's go ahead and only use SSH protocol 2. Protocol 2 should be enforced by default, but it's worth checking.

Look for the line that says.

#Protocol 2

Uncomment the line so it looks like this.

Protocol 2
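Taken together, the edits above amount to an sshd_config fragment along these lines (using the example port 22000 and the placeholder YOURUSER from this guide):

```
PermitRootLogin no
AllowUsers YOURUSER
Port 22000
PasswordAuthentication no
Protocol 2
```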

There are many more options that we could set, but this should suffice in securing your SSH service.

Save the file and reload the SSH service by doing the following.

sudo systemctl reload sshd.service

You should now be able to login on your chosen port with your authorised keys by connecting like this.

ssh -p 22000 YOURUSER@YOURSERVER

Install Fail2ban

Fail2ban is a handy tool/service that monitors system log files to detect potential intrusion attempts and places bans using a variety of methods.

To install on CentOS we need to enable the EPEL repository by doing the following.

sudo yum install epel-release

Once the installation has completed we need to then go ahead and install Fail2ban

sudo yum install fail2ban fail2ban-systemd

Fail2ban comes with a wealth of options that deserve a post all to themselves, so for now we will create a basic configuration file that will help secure your server, especially the SSH service.

Using SELinux? Then you will want to update your policy by doing the following.

yum update -y selinux-policy*


We will be configuring fail2ban for use with Firewalld as it is implemented by default in CentOS 7.

Create a sshd.local file ready for editing.

sudo nano /etc/fail2ban/jail.d/sshd.local

Add the following lines.

[sshd]
enabled = true
port = 22000
logpath = %(sshd_log)s
maxretry = 3
bantime = 86400

Save the file and go ahead and start fail2ban.

sudo systemctl enable fail2ban
sudo systemctl start fail2ban

You should now have a working fail2ban installation which will automatically ban IP addresses after 3 failed attempts at logging in to your system via SSH.
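The ban policy this file configures can be illustrated with a toy Python sketch. This is only an illustration of the maxretry/bantime idea, not fail2ban's actual code:

```python
# Toy illustration of the jail configured above: ban an address after
# MAXRETRY failed attempts, for BANTIME seconds from the last failure.
MAXRETRY = 3
BANTIME = 86400

failures = {}  # ip -> failure count
banned = {}    # ip -> ban expiry timestamp

def record_failure(ip, now):
    """Count a failed login; start a ban once MAXRETRY is reached."""
    failures[ip] = failures.get(ip, 0) + 1
    if failures[ip] >= MAXRETRY:
        banned[ip] = now + BANTIME

def is_banned(ip, now):
    """True while the address is inside its ban window."""
    return ip in banned and banned[ip] > now
```

Real fail2ban additionally only counts failures inside a `findtime` window and actually inserts firewall rules; the sketch just shows the thresholds you set in sshd.local.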

Apache Hardening

The default Apache configuration just works, but there are a few tweaks we can make here and there that make the bad guys' job a little harder. One of the things we can do is try to prevent information leakage.

By default Apache gives out server version information on error pages. We can prevent this by adding a couple of lines to our httpd.conf file.

Version banner

Open up the httpd config file ready for editing.

sudo nano /etc/httpd/conf/httpd.conf

add the following lines to the bottom of the file

ServerTokens Prod
ServerSignature Off


Trace Requests

To protect yourself from Cross Site Tracing attacks append the following line to the end of your configuration file.

TraceEnable off

Set the HttpOnly and Secure flag

To mitigate against most of the common Cross Site Scripting (XSS) attacks you can set the following directive, again add the following line at the end of your configuration file.

Header edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
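The effect of this directive can be illustrated in Python. This is only an illustration of the regex rewrite; the add_cookie_flags helper is hypothetical, and in practice Apache applies the substitution to each Set-Cookie response header:

```python
import re

# Same rewrite as the "Header edit" directive above: match the whole
# Set-Cookie value and append the HttpOnly and Secure flags to it.
def add_cookie_flags(set_cookie_value):
    return re.sub(r"^(.*)$", r"\1;HttpOnly;Secure", set_cookie_value)

print(add_cookie_flags("session=abc123"))
# session=abc123;HttpOnly;Secure
```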

X-Frame Options

Adding this option to your configuration file indicates whether or not a browser should be allowed to open a webpage in a frame or iframe. This prevents your site's content from being embedded into other sites. See – https://www.owasp.org/index.php/Clickjacking.

Append the following line in your configuration file.

Header always append X-Frame-Options SAMEORIGIN

XSS Protection Again

To ensure and enforce web browser Cross Site Scripting protection append the following to your configuration file.

Header set X-XSS-Protection "1; mode=block"

Now with those options set we need to reload the Apache daemon.

sudo systemctl reload httpd.service


Logwatch

Now to keep tabs on all of those logs, Logwatch is a great tool to monitor your server's logs and email the administrator a digest on a daily basis.

Install Logwatch and sendmail.

sudo yum install logwatch sendmail

Now start sendmail.

sudo systemctl start sendmail


The default configuration file for Logwatch is located at the below path.

/usr/share/logwatch/default.conf/logwatch.conf

This file contains information on which directories for Logwatch to track, how the digest is sent and where the digest is sent to.

By default Logwatch keeps track of everything in /var/log but if you have other log files that you wish to add you can do this by adding the below to your logwatch.conf under the heading ‘Default Log Directory’.

LogDir = /some/path/to/your/logs

Email your daily digest

Let’s go ahead and edit the logwatch.conf file.

sudo nano /usr/share/logwatch/default.conf/logwatch.conf

We need to add your email address to the configuration file so that the digest gets delivered to your inbox.

Look for the following section.

# Default person to mail reports to.  Can be a local account or a
# complete email address.  Variable Output should be set to mail, or
# --output mail should be passed on command line to enable mail feature.
MailTo = root

Change ‘root’ to your own personal email address, or wherever you want the digest sent to.

Adding Logwatch to Cron

Open up the crontab.

crontab -e

Now add the following line to the end of the file. This line will make logwatch run at midnight each day.

00 00  * * *          /usr/sbin/logwatch

This guide was a little quick and dirty, so if you have any additions to it I would love to hear them. Also, if you think something is wrong or could have been done more efficiently, please get in contact.

F23-20160202 Updated Lives Available. Complete with 4.3.4-300 Kernel

Updated Lives for 23 are available in torrent and raw ISO download format from: (Includes GNOME, KDE, LXDE, MATE, Cinnamon, SoaS, Xfce)

Fedora 23 Updated Lives

Additional Spins available from:

Fedora Spins

All Versions also available  via Official Torrent from:

All Official Fedora Torrents

Fedora 24 gets a graphical system upgrade

According to a change proposal for Fedora 24, GNOME Software 3.20 will also be able to serve as a graphical front end for system upgrades. To make a graphical upgrade from Fedora 23 to Fedora 24 possible, GNOME Software in Fedora 23 will, as an exception, also receive an update to 3.20. The rest of the GNOME desktop will remain at version 3.18, however.

The published news is compiled to the best of our knowledge. No guarantee is given for its completeness and/or correctness.
Travels, 2016.

The post Travels, 2016. appeared first on The Grand Fallacy.

I realized that some folks around the Fedora community may wonder why they don’t see me around as often this week and next week. I’m still alive and well, but I’m traveling in the Czech Republic. I’m currently in Brno for some Red Hat departmental meetings. Following that, I’ll be attending the Devconf.cz event. Then I’ll be back in the Brno office for a few days of other work and team meetings. I’ll be back in my home office on Monday, February 15th and around as usual at that point.

Experience of BarCamp Yangon 2016

Arriving at Yangon

    Luckily I got a chance and actually made it to BarCamp 2016. I had a bad experience with Bangkok Airways during travel, which is covered in another blog post.

    But it has been full of nice experiences since coming to Myanmar. People here are very supportive and kind, and it is indeed a nice place to visit.

    Our plan was to represent Fedora strongly with multiple talks. I travelled from India and Leap Soak from Cambodia. We had two meetings before BarCamp with Yan and his teammates.

    Yan suggested being at BarCamp around 8am so we would get slots quickly. I thought that was too early, but since this was my first BarCamp I tried to be early.

BarCamp Day 1

    LeapSoak and I reached MICT Park early, around 8:30. The location was indeed good, with a nice campus and a good crowd.

    The inauguration started around 9am. We were standing in the front row, but it did not help much since all the talks were in the Burmese language ;)

    After the inauguration, everyone started running to propose their talks for BarCamp. Only one talk per person was allowed. I submitted a talk on Learning Unicode. I thought there would be some voting on talks to finalize them, but there were almost 13 parallel tracks, enough to accommodate all the talks.

    Since most of the talks were in the Burmese language, we simply did some networking there. During that time I asked if I could submit one more talk, and luckily got permission :)  I submitted another talk on "Basics of FOSS and Introduction to Fedora". It was scheduled in the main conference room.

    We planned to distribute Fedora 23 DVDs during this talk, but we had actually forgotten the DVDs at the hotel ;)  Our hotel was near the venue, so I quickly took a taxi and collected the DVDs for distribution.

    It felt good to get the chance to talk on "Basics of FOSS and Introduction to Fedora". I got a number of questions, mostly on the Fedora side. I felt a few people found it difficult to understand English, or maybe it was our accent, so Yan translated a few questions for me. We distributed 50 DVDs to interested participants and also announced the Fedora 23 release event taking place just after BarCamp.

    My second talk was from 2:00 to 2:45pm. It was on a hot topic in Myanmar: "Learning Unicode" :)

    The room was a bit small, for around 20-30 people; 30+ students attended the talk, some standing. I covered all the basics of Unicode. A number of people came up after the talk and said they want to move ahead from Zawgyi to Unicode and need support from me. I forwarded my presentation to them.

    Since all the other talks were in Burmese, I simply waited there for some time and then went to the hotel.

    Leap met up with some old friends on the first day of BarCamp, and we decided to go to Chinatown for dinner; we had a good time with everyone.

BarCamp Day 2

The second day was a bit more relaxing. I reached BarCamp around 9:30am and came to know they were accepting talks around 10am. I attended the first talk, something like a keynote. It had a few impressive stats on how many people registered, how many attended on the first day, etc.

I submitted my talk on "Fedora Globalization - How to cater to local needs" and got the 1-1:45pm slot in room 204.

There was one technical issue: the projector cable was not working. The audience was small, so I thought I would just start presenting on my laptop. I again covered some basics of "why do we need language support?" and a few other things. I mentioned that we need some active people from Myanmar in Fedora Globalization. Meanwhile the organizers brought a working projector cable and I moved to the projector :)

Overall it was a good talk, with lots of questions and discussion. A few more people again mentioned issues regarding Unicode and Zawgyi. I provided the slides of my Unicode talk to them. I think a number of people want to move to Unicode, but I am still not clear what the issue is. I think they are planning to write an article on the issues; I am looking forward to working with them on this.

Around 5pm BarCamp had the closing ceremony, where again the language of communication was Burmese :)  They distributed gifts to students via a lucky draw.

A few key points
  • Fedora talks were well received; students need more guidance to get started with Fedora.
  • I thought BarCamp was risky on the quality side, but there were good topics from so many people. Start-ups were a trending topic; almost 5-6 people talked about them.
  • It may be good to have a few more talks in English; more international speakers are needed.
  • No internet, even for speakers.
  • There was a good marketing drive from companies like Telenor and universities at the venue.
  • Many attendees!
Fedora release event in Yangon, Myanmar 2016

This happened at the Fedora Myanmar community office. The location is nice and ideal for hacking. We came around 10:30, but since a few more people were stuck in traffic we decided to start a bit late. The cake was ready, and Leap and I decorated it with DVDs and stickers.

We started with introductions of all the participants. Most of the participants were already Fedora users, since around Fedora 20. The interesting part was that they were all more on the user side of Fedora, so we decided to put more emphasis on how they can move ahead and become contributors.

As per the schedule, Yan started with an introduction to Fedora. He talked about how he came to the Fedora world. He mentioned that earlier he was not happy with Fedora for not providing media codecs, but later learned it is because Fedora strictly follows FOSS principles.

Then Leap started a talk on how one can contribute to Fedora even when coming from a different domain. I added a few points along with him as required.

It was already around 1pm, so we decided to cut the cake and go ahead with lunch. Around the same time, about 10 students joined from the University of Computer Studies Yangon.

Lunch was very special, since it was home cooked by Wai Yan (John Reginald) and other Fedora members there. It is really an excellent place: you can cook, eat, work and hang out :)  I appreciated the cook and the gang that helped him. We were indeed very lucky to get home-cooked food in Myanmar; it was much, much better than outside food.

I must say thanks to Yan Naing Myint for sponsoring the lunch and a lovely cake for the Fedora 23 release!!

During lunch we had some discussion with the students and got some interesting questions. Leap helped answer questions on the admin side, and I answered more from the development side :)  I was surprised that no one from Myanmar contributes to GSoC.

Two students were from Tamil Nadu and one from Bihar, and they specifically mentioned it; glad to see Indians at the event.

We resumed after lunch, and Yan again started with a Fedora intro talk. This time it was in the Burmese language; he explained a number of topics, including how to get Fedora and what the Fedora philosophies are.

Then we asked Leap to talk on how to contribute, again :)  That was the third time in two days he was talking on this topic ;)

This time it was even better and had a good impact on the audience.

Then I started a somewhat interactive talk on "What do students need?" and "How can they get it from Fedora?". One point I mentioned during the talk was to start using Fedora, play with the system, and report issues. I also showed them what a patch means.

The audience asked if they could learn how to install Fedora. Here Yan started with a demonstration of the Fedora 23 DVDs. He explained most of the topics in detail, and in the middle I highlighted the things missing for Myanmar, i.e. an English (Myanmar) locale and no translations, and recommended that they contribute to these if they are interested.

We all decided to take a tea break while the packages were being installed, and moved to a small local stall for tea.

By then it was already 6pm and the attendees were planning to go home, so we quickly checked the newly installed systems. Later Yan showed them how to create a FAS account, their own wiki page, and a Bugzilla account.

After everyone left, we started working on testing internationalization support for the Myanmar (Burmese) language. I demonstrated a few things to Yan and asked him to check them. Then we started creating the Myanmar localization group and completed all the steps. Luckily we found noriko on IRC, and she also checked those things. The only thing remaining was to create the group in Zanata.

Around 8:30, Leap and I left the venue. We decided not to meet on the second day, since we had flights around noon and attendees mostly come for the post-lunch session. Also, our hotel was a bit far from the event place, and Yangon is well known for traffic jams ;)

This way we had a nice Fedora 23 release event. More work by attendees toward Fedora, i.e. becoming contributors, was planned for the next day, but not much happened since it was a working day.
CzP @ SCALE 14x: ESK, BoF, DRM
This year the conference season started with SCALE for me. The Southern California Linux Expo is the largest open source event in the United States and this is the second year that Balabit has also participated. This year the event took place in the Pasadena Convention Center in a very nice environment, the historic city […]
Getting started with atomicapp

atomicapp is a reference implementation of the Nulecule Specification. It can be used to bootstrap container applications and to install and run them. atomicapp is designed to be run in a container context. Examples using this tool may be found in the Nulecule library.

If you want to know the internals of atomicapp, how it works, etc., or contribute to its development, this post is for you.


Install python virtualenv utils

virtualenv is a tool to create isolated Python environments. virtualenv creates a folder which contains all the necessary executables to use the packages that a Python project would need. virtualenvwrapper is a set of extensions to easily create, manage and destroy virtualenvs. I personally prefer using virtualenvwrapper for my Python development work.

sudo dnf install -y python-virtualenvwrapper  

Restart your shell after the above is installed.

Getting the code

git clone https://github.com/projectatomic/atomicapp.git  

Setup for development

cd atomicapp  
mkvirtualenv atomicapp  
python setup.py develop  
echo "alias atomicapp=~/.virtualenvs/atomicapp/bin/atomicapp" >> ~/.virtualenvs/atomicapp/bin/postactivate  
echo "alias sudo='sudo '" >> ~/.virtualenvs/atomicapp/bin/postactivate  
echo "unalias atomicapp && unalias sudo" >> ~/.virtualenvs/atomicapp/bin/postdeactivate  

Understanding the code

├── cli
│   ├── __init__.py
│   └── main.py
├── constants.py
├── __init__.py
├── nulecule
│   ├── base.py
│   ├── container.py
│   ├── exceptions.py
│   ├── __init__.py
│   ├── lib.py
│   └── main.py
├── plugin.py
├── providers
│   ├── docker.py
│   ├── external
│   ├── __init__.py
│   ├── kubernetes.py
│   ├── marathon.py
│   ├── openshift.py
│   └── README.md
├── requirements.py
└── utils.py


The entry point for the atomicapp CLI is atomicapp/cli/main.py. The CLI args and options are added in the atomicapp.cli.main.CLI class. The CLI commands are handled by functions named cli_<command_name> in atomicapp.cli.main.
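That cli_<command_name> naming convention amounts to a simple dispatch pattern, which can be sketched like this (a hypothetical illustration, not atomicapp's actual code):

```python
# Hypothetical sketch of a cli_<command_name> dispatch pattern:
# each command maps to a handler function found by name.
def cli_run(args):
    return "run " + args

def cli_stop(args):
    return "stop " + args

def dispatch(command, args):
    """Look up cli_<command> and invoke it with the parsed args."""
    handler = globals().get("cli_" + command)
    if handler is None:
        raise ValueError("unknown command: %s" % command)
    return handler(args)
```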


atomicapp.nulecule is the crux of atomicapp as it implements the Nulecule specification. The CLI interacts with NuleculeManager in atomicapp.nulecule.main to execute various commands. NuleculeManager provides the entry points to various functionalities: unpack, genanswers, fetch, run, stop, clean, to manage the life cycle of a Nulecule application.

atomicapp.nulecule.base implements Nulecule to represent a Nulecule application from the Nulecule SPEC file, and NuleculeComponent to represent an item in the graph attribute of the Nulecule application. atomicapp.nulecule.base.NuleculeComponent interacts with the underlying providers using atomicapp.providers in its deploy() and undeploy() methods.


atomicapp.providers implements interfaces for atomicapp to interact with the different providers: docker, kubernetes, openshift, etc. A provider class usually has three important methods:

  • init: perform any initialization the provider requires
  • run: run artifacts on the provider
  • stop: stop artifacts on the provider
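Concretely, such a provider can be sketched as a small class with those three methods. This is an illustrative skeleton under simplified assumptions, not atomicapp's actual provider API:

```python
# Illustrative provider skeleton following the init/run/stop shape
# described above; the method names match the text, everything else
# is a simplified assumption rather than atomicapp's real interface.
class Provider:
    def __init__(self, config):
        self.config = config
        self.deployed = []

    def init(self):
        # e.g. check the provider CLI is present, load credentials, ...
        return "initialized"

    def run(self, artifact):
        # deploy one artifact (a Kubernetes manifest, a docker run spec, ...)
        self.deployed.append(artifact)
        return "running " + artifact

    def stop(self, artifact):
        # tear the artifact down again
        self.deployed.remove(artifact)
        return "stopped " + artifact
```

NuleculeComponent's deploy() and undeploy() would then call run() and stop() on whichever provider the answers file selects.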


The above should give you a brief overview of atomicapp and should be enough to get you started with atomicapp development. If you have any questions, feel free to ping us in #nulecule on Freenode or create an issue at https://github.com/projectatomic/atomicapp/issues/.

Make your Gnome title bars smaller

I don’t like the size of title bars in stock Gnome 3. They are big and take up too much space on my tiny 12″ screen! But I’ve found an easy solution to this.

All you need to do is put the following CSS into ~/.config/gtk-3.0/gtk.css

.header-bar.default-decoration {
 padding-top: 3px;
 padding-bottom: 3px;
 font-size: 0.8em;
}

.header-bar.default-decoration .button.titlebutton {
 padding: 0px;
}
Testers wanted for Firefox and GTK3

There currently seem to be some problems with Firefox and GTK3. To get to the bottom of the cause, Martin Stransky is asking all users willing to test for their help.

To take part in the bug hunt, the current Firefox update (Fedora 22 / Fedora 23) must first be installed. In this version, Firefox crashes are no longer reported to Mozilla but instead to Fedora via ABRT.

For crash reporting via ABRT to work, Firefox must also be removed from the ABRT blacklist. To do this, edit the file /etc/abrt/abrt-action-save-package-data.conf with

su -c 'nano /etc/abrt/abrt-action-save-package-data.conf'

and change the line

BlackList = nspluginwrapper, valgrind, strace, mono-core, firefox, bash

to

BlackList = nspluginwrapper, valgrind, strace, mono-core, bash

The published news is compiled to the best of our knowledge and belief. No guarantee is given for its completeness and/or correctness.
I hate benchmarking

Among development tasks, benchmarking is one of my least favorite, and I tend to procrastinate on it (by writing blog posts, for example). Allow me to enumerate some reasons why I hate doing benchmarking.

  • Almost anything can be a benchmark if you believe it is one.

  • Benchmarks often conflict with each other. Improve one, another goes down.

  • Benchmarks are often used to convince people of something. Combining the points above, this involves picking your favorite benchmark that moves in the right direction and then hoping it doesn't move someone else’s favorite benchmark in the opposite direction.

  • Sometimes it's hard to come up with a benchmark to show your code is actually doing anything. Is the code not actually having any effect or is the benchmark wrong?

  • Some benchmarks come as part of their own framework which means you need to set that up to get any data.

  • Benchmarks inevitably take time to run, which either ends up with me staring at a screen waiting for a benchmark to finish or half-heartedly working on another task. The same gripe applies to compiling, but now I'm waiting on compiling AND benchmarking.

  • Once the benchmark actually finishes, how do you interpret the result? Is the benchmark consistent if you run it multiple times?

During my recent foray into benchmarking I ended up having to write this code to figure out the results I was seeing:

# Welford's online algorithm: running mean/variance over $CNT
# benchmark results, one per entry in the samples array
mean=0; M2=0
for i in $(seq 1 $CNT); do
    x=${samples[$i]}
    d=$(echo "$x - $mean" | bc -l)
    mean=$(echo "$mean + $d / $i" | bc -l)
    M2=$(echo "$M2 + $d * ($x - $mean)" | bc -l)
done

echo "mean $mean"
V=$(echo "$M2 / ($CNT - 1)" | bc -l)
DEV=$(echo "sqrt($V)" | bc -l)
echo "variance $V"
echo "stdev $DEV"

This is what gets me about benchmarking. I always feel as if I get sidetracked by having to jump through all kinds of hoops just to get a meaningful result. Debugging crashes always seems more straightforward to me (“Did you fix the crash? Did you fix the crash in a reasonable way? Good job!”). Debugging benchmark issues always feels like a slog (“Okay, where is it slowing down? Time to guess what to look at with ftrace. Wait, this slows down something else”). None of this complaining should be taken as saying that benchmarks aren't valuable or that I can't do it. Everyone has tasks they find particularly tedious to deal with, and one of those for me is benchmarking.

February 02, 2016

FOSDEM 2016 Report

Hello everyone, this year I've been to FOSDEM again. Here is a quick report of what I did, saw and liked during the event.

Day 0 - Elixir & Erlang

Friday started with an unexpected shopping trip. The airline broke my luggage and I had to buy a replacement. The irony is that just before taking off I saw a guy with an Osprey Meridian and thought how cool that was. The next day I was running around Brussels looking for the exact same model! I also wanted to buy the book Teach Your Child How to Think by Edward De Bono, but all 4 bookstores I checked were out of stock.

With the luggage problem solved I headed to BetaCowork for Brussels Erlang Factory Lite, where I learned a bit about Erlang and Elixir. I also managed to squeeze in a meeting with Gilbert West to talk about open source bugs.

I found the Elixir workshop and the talk Erlang In The Wild: A Governmental Web Application by Pieterjan Montens particularly interesting. Later I managed to get hold of him and talk some more about his experiences working for the government. As I found out later, we likely have mutual friends.

BetaCowork was hosting FOSDEM-related events during the entire week. There was a GNOME event, and the LibreOffice Italian team was there as well. It is definitely worth a longer visit next time.

Friday night was reserved for a dinner with the Red Hat Eclipse team and a fair amount of beer at Delirium afterwards, where I met my friend Giannis Konstantinidis and the new Fedora Ambassador for Albania, Jona Azizai.

All-in-all pretty good Friday!

Day 1 - Testing and Automation


FOSDEM was hosting the Testing and Automation devroom again, and I spent the entire Saturday there.

Definitely the most interesting talk was Testing interoperability with closed-source software through scriptable diplomacy, which introduced Frida. Frida is a testing tool that injects a JavaScript VM into your process so you can write scripts that drive the application automatically. It was designed as a means to control closed source software but can definitely be used for open source apps as well.

I talked to both Karl and Ole about Frida and my use case of testing interactive terminal programs. That should be easy to do with Frida: just hook into the read and write functions and write some JavaScript or Python to run the test. Later we talked about how exactly Frida attaches to the running process and what external dependencies are needed if I'm to inject Frida into the Fedora installation environment.
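To give an idea of what that could look like, here is an untested sketch of a Frida script hooking libc's read; it assumes Frida's Interceptor API and only runs inside Frida, not standalone:

```javascript
// Hedged sketch: log whatever the target process reads from stdin.
// Runs inside frida (e.g. via "frida -l hook.js <target>").
Interceptor.attach(Module.findExportByName(null, "read"), {
    onEnter: function (args) {
        this.fd = args[0].toInt32();
        this.buf = args[1];
    },
    onLeave: function (retval) {
        var n = retval.toInt32();
        if (this.fd === 0 && n > 0) {   // fd 0 = stdin
            console.log("read: " + Memory.readUtf8String(this.buf, n));
        }
    }
});
```

A matching hook on write would capture the program's output, which together would be enough to script an interactive terminal test.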

In Testing embedded systems, Itamar Hassin talked about testing medical devices and made a point about regulation, security and compliance. Basically, you are not allowed to ship non-application code on a production system. However, that code is necessary instrumentation for external integration testing. I suspect most developers and QA engineers will never have to deal with such strict regulations, but it is something to keep in mind if you test software in a heavily regulated industry.

Testing complex software in CI was essentially a presentation about cwrap, which I had already seen at FOSDEM 2014, though it approached the topic from a slightly different angle. Tests in any open source project should have the following properties:

  • Be able to execute without the need of a complex environment;
  • Enable full CI during code review (dependent on previous property);
  • Be able to create complete integration tests.

The demo showed how you can execute Samba's test suite locally without preparing a domain controller, for example. This helps both developers and external contributors. Now contrast this with how Red Hat QE will do the testing: they will create a bunch of virtual and bare metal machines, configure all related services, and then execute the same test scripts to verify that Samba indeed works as expected.
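For reference, a cwrap-based run looks roughly like this. This is a sketch: the test binary name and paths are illustrative, and socket_wrapper must be installed for the preload to work:

```shell
# socket_wrapper redirects AF_INET sockets to unix sockets under
# SOCKET_WRAPPER_DIR, so client/server tests run without any real
# network setup. Binary and paths here are illustrative.
LD_PRELOAD=libsocket_wrapper.so \
SOCKET_WRAPPER_DIR=/tmp/test-sockets \
SOCKET_WRAPPER_DEFAULT_IFACE=10 \
./my-network-test
```

The same LD_PRELOAD trick is how the other cwrap libraries (nss_wrapper, uid_wrapper) fake users and privileges for the test run.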

By the way, I've been thinking: what if I patch cwrap to wrap read and write? That would also make it possible to test interactive console programs, wouldn't it?

Jenkins as Code by Marcin and Lukasz was a blast; I think there were people standing along the walls. The two shared their experience with the Jenkins Job DSL plugin. The plugin is very flexible and powerful, using Groovy as its programming language. The only drawback is that it is sometimes too complex to use and has a steep learning curve. Maybe Jenkins Job Builder is better suited if you don't need all that flexibility and complexity. I met both of them afterwards and talked a bit more about open source bugs.

On Saturday evening I visited a panel discussion about the impact of open source on the tech industry at Co.Station, organized by The Startup Bus. There was beer, pizza, entrepreneur networking, and talk about startups and open source. Do I need to say more? I will post a separate blog post about the interesting start-ups I found, so stay tuned.

Day 2 - Ping-pong with BBC Open Source


Sunday was my lazy day. I attended a few talks but nothing too interesting. I managed to go around the project stands, filed a bug against FAS, and scored the highest ping-pong score of the day at the BBC Open Source stand.

The BBC has a long history of involvement with education in the UK, and the micro:bit is their latest project. I would love to see this (or something similar) delivered by the thousands here in Bulgaria, so if you are thinking about sponsoring, let me know.

I ran into Martin Sivak, a former Anaconda developer whom I've worked with in the past. He is now on the Virtualization Development team at Red Hat and we briefly talked about the need for more oVirt testing. I have something in mind about this which will be announced in the next 2 months, so stay tuned.

Fedora 23 Release Conference

On February 1, 2016 the “Fedora 23 Release Conference” took place at the Fedora Project – Myanmar Community's head office at Building 33 (3rd Floor), Za Lun Street, San Chaung Township, Yangon, Myanmar.

Pravin Satpute from India and Leap Sok from Cambodia had warm discussions with the local community and with students from the University of Computer Studies, Yangon. We started by introducing ourselves and explaining which version of Fedora each of us started with and how we use Fedora today. We also shared some reasons to use Fedora and some difficulties with it. After that we gave short talks on “Introduction to Fedora” and “How to contribute to Fedora”, and discussed some language issues from our respective perspectives.

<figure class="wp-caption alignnone" id="attachment_315" style="width: 1024px;">12593817_10201614057471242_8493409360386408054_o<figcaption class="wp-caption-text">Credit: Wai Yan Min</figcaption></figure>

After spending some time chatting we celebrated the Fedora 23 release by cutting the cake. The cake was served to everyone by Yan Naing Myint, the Fedora Project – Myanmar Community's leader, with no tickets required.

<figure class="wp-caption alignnone" id="attachment_318" style="width: 300px;">12670242_1256939567666125_4812131912319161323_n<figcaption class="wp-caption-text">Photo Credit: Leap Sok</figcaption></figure>

Soon after the celebration, we all had lunch together. The lunch was cooked by Wai Yan (John Reginald) and served by Fedora Project – Myanmar Community members.

After the happy lunch, Yan Naing Myint started the topic “Introduction to Fedora”, explaining where to get Fedora, what Workstation, Server and Cloud are, what spins and labs are, the four foundations of Fedora, the Fedora release cycle and so on.

<figure class="wp-caption alignnone" id="attachment_319" style="width: 1024px;">12633242_1680234528910829_1504140826_o<figcaption class="wp-caption-text">Photo Credit: Win Than Htike</figcaption></figure>

After that, Leap Sok explained how a person can contribute to Fedora in different ways. He described roles such as writers, designers, people persons, OS developers, translators, web developers/admins and so on, encouraging the students to feel free to contribute to Fedora.

<figure class="wp-caption alignnone" id="attachment_320" style="width: 1024px;">12615592_10201614058071257_405290855337228482_o<figcaption class="wp-caption-text">Photo Credit: Wai Yan Min</figcaption></figure>

After Leap Sok, Pravin took up the topic “What the students need and what Fedora can provide”. He talked about his experiences developing in Fedora, working with patches and Bugzilla, and so on.

<figure class="wp-caption alignnone" id="attachment_321" style="width: 1024px;">12622275_10201614059511293_7041911317984074267_o<figcaption class="wp-caption-text">Photo Credit: Wai Yan Min</figcaption></figure>

While Leap Sok and Pravin were speaking to the audience, Yan Naing Myint helped with translations and additional explanations so that people could follow along better.

After that, we took a break, going to a roadside tea shop together with the audience and chatting happily.

After the break, back into the conference room, Yan Naing Myint explained how to open FAS accounts, how to create wiki profile and how to create bugzilla accounts.

We took a photo together holding Fedora 23 Workstation DVDs (64-bit) in hand [Featured Image] and happily brought the event to a successful close.

After the event, with Pravin's help, Yan Naing Myint became the Myanmar language translation coordinator, and Pravin walked him through the very first steps of Myanmar language translation. Pravin and Leap Sok then left the Fedora Project – Myanmar Community's head office in the late evening.


Secure Boot — Fedora, RHEL, and Shim Upstream Maintenance: Government Involvement or Lack Thereof

You probably remember when I said some things about Secure Boot in June of 2014. I said there’d be more along those lines, and there is.

So there’s another statement about that here.

I’m going to try to remember to post a message like this once per month or so. If I miss one, keep an eye out, but maybe don’t get terribly suspicious unless I miss several in a row.

Note that there are parts of this chain I’m not a part of, and obviously linux distributions I’m not involved in that support Secure Boot. I encourage other maintainers to offer similar statements for their respective involvement.

Where are your symbols, debuginfo and sources?

A package is more than a binary – make it observable


I gave a presentation at FOSDEM 2016 in the distributions devroom. This article is an extended version of the slide presenter notes. You can get the original from the talk page (press ‘h’ for help and ‘p’ to get the presenter view for the slides).

If any of this sounds interesting and you would like help implementing some of it for your distribution, please contact me. I work upstream on valgrind and elfutils, which take advantage of having symbols and debuginfo available for programs. And elfutils is used by systemtap, systemd and perf to consume some of that information (if available). I am also happy to work on gdb, gcc, rpm, binutils, etc. if that would help make some of this possible or more usable. I work for Red Hat, which might explain some of my bias towards how Fedora handles some of this. Please do point out when I am making assumptions that are just plain wrong for other distributions. Ideally all of this will be usable across distros, and when running programs in VMs or containers based on different distro images you'll simply reach in and trace, profile and debug anything running, seamlessly.


The main goal is to seamlessly go from a binary back to the original source. In this article that is limited to “native” ELF code (anything you build with GCC or another compiler that produces native code and sufficiently good debuginfo). Once we get this right for native code we can look at how to set up similar conventions for other language execution environments.

Whether you are tracing, profiling or debugging a locally running program, have a core file, or want to interpret trace or profile data captured on some system, we want to make sure as many symbols, debuginfo and sources as possible are available and easily accessible. Get them wherever they are, or have a standard way to get them.

I know how most of this works in Fedora and how to improve some things for that distribution. But I need help with other distributions, and sanity checking that these ideas make sense in other contexts.


If you are running Free Software then you should be able to get back to the source code of your binaries. Code is never perfect and real issues always happen in production. Every user really is (and should be allowed to be) a “debugger”, observing (tracing, profiling) their system as it is running. Actual running (optimized) code in a specific setup really is different from development code: you will observe different behavior in an actual deployed binary compared to how it behaved on the packager's or developer's setup.

And this isn’t just about the user on the machine getting useful backtraces. The user might just capture a trace or profile on their machine, or you might get a core file that needs “off-line” analysis. In those cases, having everything ready beforehand makes recreating a “debug environment” that precisely matches the “production environment” so much easier.

Meta observation

We do want users to trace, profile and debug processes running on their systems so they know precisely what is going on with their machine. But we also care about security, so all code should run with the minimal privileges possible. Different users shouldn't be able to trace each other's processes, services should run in separate security contexts, processes handling sensitive data should pin security-sensitive memory to prevent dumping such data to disk, and processes that aren't supposed to use introspection syscalls should be sandboxed. That is all good stuff. It makes sure users/services can synchronize, signal, debug, trace and profile their own processes, but not more than that.

There are however some kernel tweaks that don't obey process separation and don't respect different security scopes, like setting the selinux deny_ptrace boolean or the yama ptrace_scope setting. Enabling those will break things and will cause the use of more privileged code than necessary. These “deny ptrace” features aren't just about blocking the ptrace system call, and they don't just block “debuggers”: they block all inter-process synchronization, signaling, tracing and profiling by normal (unprivileged) users. Both settings were tried in Fedora and were disabled by default in the end, because with them users can no longer observe their own processes and have to raise their privileges to root. It also means a privileged monitoring process cannot just drop privileges to trace or profile less privileged code, so you'll have to debug, profile and trace as root! It can also be seen as a form of security theater: a compromised process running in the same user/security context might not be able to easily observe another process directly, but it can still get at the same inputs, read and manipulate the other process's files and settings, install preload code to disable any restrictions, etc. That makes observing other processes much more cumbersome, but not impossible.

So please don’t use these system-breaking tweaks on normal setups where users and administrators should be able to monitor their own processes. We need real solutions that don't require running everything as root and that respect normal user privileges and security contexts.


A build-id is a globally unique identifier for an executable ELF image. Luckily everybody gets this right these days (all support is upstream and enabled by default in the GNU toolchain). A build-id is an (allocated) ELF note put into the binary by the linker. It is (normally) the SHA1 hash over all code sections in the ELF image. The build-id can be found in each executable, shared library, kernel, module, etc. It is loaded into memory and automatically dumped into core files.

When you know the build-ids and the addresses where the ELF images are/were loaded then you have enough information to match any address to original source.

If your build is reproducible then the build-id will also be exactly the same. The build-id identifies the executable code, so stripping symbols or adding debuginfo doesn't change it. In theory, with reproducible builds, you could “just” rebuild your binaries with all debug options turned on (GCC guarantees that producing debug output will not change the executable code generated) and not strip out any symbols. But that is not really practical and a bit cumbersome (you would also need to keep around the exact build environment for every binary on the system).

Because build-ids are so useful and so essential, it really makes sense to make it an error when no build-id is found in an executable or shared library, and not just warn about it when creating a package.
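To show how simple the note is to consume, here is a stdlib-only sketch that digs the build-id out of a 64-bit little-endian ELF file. A real tool would use readelf -n or eu-readelf -n instead; this just illustrates the note layout:

```python
#!/usr/bin/env python3
# Minimal sketch: pull the NT_GNU_BUILD_ID note out of a 64-bit
# little-endian ELF file using only the stdlib.
import struct
import sys

def build_id(path):
    with open(path, "rb") as f:
        data = f.read()
    # ELF magic, ELFCLASS64, little-endian only; anything else: give up
    if data[:4] != b"\x7fELF" or data[4] != 2 or data[5] != 1:
        return None
    e_phoff, = struct.unpack_from("<Q", data, 0x20)
    e_phentsize, e_phnum = struct.unpack_from("<HH", data, 0x36)
    for i in range(e_phnum):
        ph = e_phoff + i * e_phentsize
        p_type, = struct.unpack_from("<I", data, ph)
        if p_type != 4:                       # PT_NOTE
            continue
        p_offset, = struct.unpack_from("<Q", data, ph + 8)
        p_filesz, = struct.unpack_from("<Q", data, ph + 32)
        pos, end = p_offset, p_offset + p_filesz
        while pos + 12 <= end:
            namesz, descsz, ntype = struct.unpack_from("<III", data, pos)
            name = data[pos + 12 : pos + 12 + namesz].rstrip(b"\x00")
            desc = pos + 12 + ((namesz + 3) & ~3)   # 4-byte aligned
            if name == b"GNU" and ntype == 3:       # NT_GNU_BUILD_ID
                return data[desc : desc + descsz].hex()
            pos = desc + ((descsz + 3) & ~3)
    return None

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else sys.executable
    print(target, build_id(target))
```

Run against any distro binary it prints the same hex string that readelf -n reports as the Build ID.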

backtraces/unwind tables

Backtraces are the backbone of everything (tracing, profiling, debugging). They provide the necessary context for any observation. If you have any observability this should be it. To make it possible to get accurate and precise backtraces in any context always use gcc -fasynchronous-unwind-tables everywhere. It is already the default on the most widely used architectures, but you should enable it on all architectures you support. Fedora already does this (either through making it the default in gcc or by passing it explicitly in the build flags used by rpmbuild).

This will get you unwind tables that are put into .eh_frame sections, which are always kept with the binary and loaded into memory, and so can be accessed easily and fast. Frame pointers only get you so far: it is always the tricky code, signal handlers, fork/exec/clone, or unexpected termination in the prologue/epilogue, that manipulates the frame pointer. And it is often these tricky situations where you want accurate backtraces the most and where you get bad results when relying only on frame pointers. Maintaining frame pointers also bloats code and reduces optimization opportunities. GCC is really good at automatically generating unwind tables for any higher-level language, and glibc now has CFI for all/most hand-written assembler.

The only exception might be the kernel (mainly because Linux kernel modules are ET_REL files, for which loadable .eh_frame sections are somewhat problematic). But even for the kernel, please do generate accurate unwind tables and then put them in the .debug_frame section (which can be stripped out and put into a separate debug file later). You can do this with the .cfi_sections assembler directive.

function symbols

When you do get backtraces for observations, it would be really nice to immediately be able to match any addresses to the function names from the original source code. But normally only the .dynsym symbols are available (just those symbols that are necessary for dynamically linking your application and shared libraries). The full .symtab is normally stripped away, since it is strictly only necessary for the linker combining object files.

Because .dynsym provides too few symbols and .symtab provides too many, Fedora introduced the mini-symtab (sometimes called mini-debuginfo). This is a special (non-loaded) .gnu_debugdata section that contains an xz-compressed ELF image. That ELF image contains minimal .symtab + .strtab sections for just the function symbols of the original .symtab section.

gdb and elfutils support reading .gnu_debugdata upstream. But it is generated only by an obscure function inside rpm’s find-debuginfo.sh script. This really should be its own reusable script/program.

An alternative might be to just not strip the full .symtab out together with the full debuginfo and maybe use the new ELF compressed section support. (valgrind needs the full .symtab in some cases – although only really for ld.so, and valgrind doesn’t support compressed sections at the moment).

Together with accurate unwind tables, having the function symbols available (not stripped away or put into a separate debug file that might not be immediately accessible) provides the minimal requirements for simple and useful tracing and profiling.

Full debuginfo

Other debug information can be stored separately from the main executable, but we still need to generate it. Some recommendations:

  • Always use -g (-gdwarf-4 is the default in recent GCC)
  • Do NOT disable -fvar-tracking-assignments
  • gdb-add-index (.gdb_index)
  • Maybe use -g3 (adds macro definitions)

This is a lot of info and sadly somewhat entangled. But please always generate it and then strip it into a separate .debug file.

This will give you inlines (program structure: which code ended up where), arguments to functions and local variables plus the values they have at each point in the program, the types and structures used by the program, and the mapping from addresses to source lines.

.gdb_index provides debuggers a quick way to navigate some of these structures, so they don’t need to scan it all at startup, even if you only want to use a small portion. -g3 used to be very expensive, but recent GCC versions generate much denser data. Nobody really uses macro definitions much though, since nobody generates them… so, chicken and egg. Both indexing and dense macros are proposed as DWARFv5 extensions.

Disabling -fvar-tracking-assignments (it is enabled by default in gcc when compiling with -g and optimization) really produces very poor results. Some projects disable it because they are afraid that generating extra debuginfo will somehow impact the generated code. If it ever does, that is a bug in GCC. If you do want to double-check, you can enable GCC’s -fcompare-debug option or define the environment variable GCC_COMPARE_DEBUG to explicitly make GCC check this invariant isn’t violated.


Full debuginfo is big! So yes, compression is something to think about. But ELF section compression is the wrong level: it isn’t supported by many programs (valgrind, for example, doesn’t support it), there are two variants (if you use one, avoid .zdebug, which is a now-deprecated GNU extension), and it prevents simply mmapping the data and using an index to read/use only what you need, causing very slow startup.

You should however use DWZ, the DWARF optimization and duplicate removal tool. Given all debuginfo in a package this tool will make sure duplicate DWARF information is stored in a common place, reducing the size of the individual debug files.

You could use both DWZ and ELF section compression together if you really want to get the most compression. But I would recommend using DWZ only and then compress the whole file(s) for storage (like in a package), but install them uncompressed for direct usage.
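A sketch of dwz in multifile mode (guarded, since dwz may not be installed everywhere; with such tiny binaries the shared file will be small, but the mechanism is the same):

```shell
set -e
command -v dwz >/dev/null || { echo "dwz not installed, skipping"; exit 0; }
cd /tmp
cat > common.h <<'EOF'
struct point { int x; int y; };
EOF
printf '#include "common.h"\nint getx(struct point p){return p.x;}\nint main(void){return 0;}\n' > p1.c
printf '#include "common.h"\nint gety(struct point p){return p.y;}\nint main(void){return 0;}\n' > p2.c
gcc -g -o p1 p1.c
gcc -g -o p2 p2.c
# Move DWARF duplicated between the two binaries into one shared file;
# each binary then references it via a .gnu_debugaltlink section:
dwz -m common.debug p1 p2
ls -l common.debug
readelf -S p1 | grep gnu_debugaltlink
```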


The DWARF debuginfo references sources, and you really want to have them easily available. So package the (generated) sources (as they were compiled) and put them somewhere under /usr/src/debug/[package-version]/.

There is however one gotcha: DWARF references the sources where they were built. So unless you put and build the sources precisely where you want to install them, you will have to adjust the paths. This can be done in two ways:

  • rpm debugedit
  • gcc -fdebug-prefix-map=old=new

debugedit is both more and less flexible. It is more flexible because it gives you the actual source file list used in the DWARF describing the program. It is less flexible because it isn’t a full DWARF rewriter: it can adjust the locations/directories only as long as the new ones are no larger, so set up a build root with a long enough path name. It is probably time to rewrite debugedit to support proper DWARF rewriting and make it an independent tool that can easily be reused outside of rpm.

Separating and “linking”

There are two ways to “link” your binaries to their debug files:

  • .gnu_debuglink section in main file with name (and CRC) of .debug file
  • /usr/lib/debug/.build-id/XX/XXXXXXXXXXXXXXXXXXXXXX.debug

The .gnu_debuglink name has to be searched for under well-known paths (/usr/lib/debug plus the original location and/or subdirs). This makes it fragile, but more tools support it, and it is the fallback used when there is no build-id. Still, it might be time to deprecate/remove it, because debuglink-based files inherently conflict between package versions.

Fedora supports both, linking the build-id .debug file to the debuglink file. Fedora also throws in an extra link to the main executable under .build-id. But that link is in the debuginfo package, so it can mismatch if the main package and debuginfo package versions don’t match up. It is not recommended to mimic this setup.

Preventing conflict

This is work in progress in Fedora:

  • Want to install both 64-bit and 32-bit debug package.
  • Have an older/newer version of a debuginfo package installed (for inspecting a core file).

By making debuginfo packages parallel-installable across arches and versions, you should be able to easily trace, profile and debug 32-bit and 64-bit programs at the same time, inspect a core file generated against slightly different versions of the executable and libraries installed on the developer machine, and install all debug files matching the executables running in a container for deep inspection.

To get there:

  • Hash in full name-version-arch of package into build-id.
  • Get rid of .gnu_debuglink files.
  • No more build-id main file backlinks.
  • Put sources under full name-version-arch subdir

This is where I still have more questions than answers. build-ids can conflict for minor version updates (the files should be completely identical though). Should we hash-in the full package name to make them unique again or accept that multiple packages can provide the same ELF images/build-ids? Dropping .gnu_debuglink (or changing install/renamed paths) will need program updates. Should the build-id-main-file backlinks be moved into the main package?

Should we really package debug files?

We might also want to explore alternatives to parallel installable debuginfo packages. Would it make sense to completely do away with debuginfo packages by:

  • Making /usr/lib/debug and /usr/src/debug “magic” fuse file systems?
  • Populate through something like darkserver
  • Have a cross-distro federated registry of build-ids?

Something like the above is being experimented with in the Clear Linux Project.

System CA certificate trust management review and planning meeting at DevConf

System CA certificate trust management review and planning will happen at DevConf 2016 this year

The current system CA certificate trust store management tool, as implemented by p11-kit, supports only a limited number of use-cases. We are trying to gather information from various people administering and developing for Fedora and Red Hat Enterprise Linux on how it could be improved.

For this purpose we want to arrange an informal session during DevConf in Brno, where we will discuss the current state of the implementation and gather input in the form of use-cases that would be interesting to support in future development of p11-kit and additional tools.

Current System CA Use-Cases

Let’s summarize the currently supported use-cases:

  • listing all trusted anchors with pkcs11: URIs and their labels
  • listing all blacklisted certificates
  • adding trusted anchor
  • removing previously added trusted anchor

See the trust command documentation for details.
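The currently supported use-cases map onto the trust tool roughly as follows (guarded, since the trust(1) tool ships with p11-kit and may not be installed; the certificate path is a placeholder):

```shell
# Skip gracefully when the p11-kit trust tool is absent:
command -v trust >/dev/null || { echo "trust not installed, skipping"; exit 0; }
# List all trusted anchors (pkcs11: URIs and labels):
trust list --filter=ca-anchors | head -n 8
# List blacklisted certificates:
trust list --filter=blacklist | head -n 4
# Adding and removing a trusted anchor (root required), for illustration:
#   trust anchor /path/to/local-ca.crt
#   trust anchor --remove /path/to/local-ca.crt
```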

Missing System CA Use-Cases

We were able to identify these missing use-cases:

  • listing the purpose for which the trusted anchors are trusted
  • listing other attributes of the trusted anchors
  • listing only changes from the trust store made by sysadmin (differences from the trust database as shipped in the ca-certificates package)
  • modifying the purpose for which the trusted anchors are trusted
  • blocking or masking the trusted or blacklisted certificates which are shipped in the ca-certificates package – For example the sysadmin might want to block all the certification authorities from some country that he does not regard as trustworthy.

We are interested in hearing which high-level sysadmin tasks would be eased by improvements in this area, which of the missing use-cases should be implemented first, and whether there are any additional use-cases whose support is needed. We would also like to gather feedback on what the trust store management interface should look like: whether, for example, the current command-line UI of the trust tool is sufficient.

Come visit at DevConf

The meeting will happen on Friday Feb 5th 2016 13:10-14:30 at the DevConf venue in the room C228.

The post System CA certificate trust management review and planning meeting at DevConf appeared first on Fedora Community Blog.

February 01, 2016

On WebKit Security Updates

Linux distributions have a problem with WebKit security.

Major desktop browsers push automatic security updates directly to users on a regular basis, so most users don’t have to worry about security updates. But Linux users are dependent on their distributions to release updates. Apple fixed over 100 vulnerabilities in WebKit last year, so getting updates out to users is critical.

This is the story of how that process has gone wrong for WebKit.

Before we get started, a few disclaimers. I want to be crystal clear about these points:

  1. This post does not apply to WebKit as used in Apple products. Apple products receive regular security updates.
  2. WebKitGTK+ releases regular security updates upstream. It is safe to use so long as you apply the updates.
  3. The opinions expressed in this post are my own, not my employer’s, and not the WebKit project’s.

Browser Security in a Nutshell

Web engines are full of security vulnerabilities, like buffer overflows, null pointer dereferences, and use-after-frees. The details don’t matter; what’s important is that skilled attackers can turn these vulnerabilities into exploits, using carefully-crafted HTML to gain total control of your user account on your computer (or your phone). They can then install malware, read all the files in your home directory, use your computer in a botnet to attack websites, and do basically whatever they want with it.

If the web engine is sandboxed, then a second type of attack, called a sandbox escape, is needed. This makes it dramatically more difficult to exploit vulnerabilities. Chromium has a top-class Linux sandbox. WebKit does have a Linux sandbox, but it’s not any good, so it’s (rightly) disabled by default. Firefox does not have a sandbox due to major architectural limitations (which Mozilla is working on).

For this blog post, it’s enough to know that attackers use crafted input to exploit vulnerabilities to gain control of your computer. This is why it’s not a good idea to browse to dodgy web pages. It also explains how a malicious email can gain control of your computer. Modern email clients render HTML mail using web engines, so malicious emails exploit many of the same vulnerabilities that a malicious web page might. This is one reason why good email clients block all images by default: image rendering, like HTML rendering, is full of security vulnerabilities. (Another reason is that images hosted remotely can be used to determine when you read the email, violating your privacy.)

WebKit Ports

To understand WebKit security, you have to understand the concept of WebKit ports, because different ports handle security updates differently.

While most code in WebKit is cross-platform, there’s a large amount of platform-specific code as well, to improve the user and developer experience in different environments. Different “ports” run different platform-specific code. This is why two WebKit-based browsers, say, Safari and Epiphany (GNOME Web), can display the same page slightly differently: they’re using different WebKit ports.

Currently, the WebKit project consists of six different ports: one for Mac, one for iOS, two for Windows (Apple Windows and WinCairo), and two for Linux (WebKitGTK+ and WebKitEFL). There are some downstream ports as well; unlike the aforementioned ports, downstream ports are, well, downstream, and not part of the WebKit project. The only one that matters for Linux users is QtWebKit.

If you use Safari, you’re using the Mac or iOS port. These ports get frequent security updates from Apple to plug vulnerabilities, which users receive via regular updates.

Everything else is broken.

Since WebKit is not a system library on Windows, Windows applications must bundle WebKit, so each application using WebKit must be updated individually, and updates are completely dependent on the application developers. iTunes, which uses the Apple Windows port, does get regular updates from Apple, but beyond that, I suspect most applications never get any security updates. This is a predictable result, the natural consequence of environments that require bundling libraries.

(This explains why iOS developers are required to use the system WebKit rather than bundling their own: Apple knows that app developers will not provide security updates on their own, so this policy ensures every iOS application rendering HTML gets regular WebKit security updates. Even Firefox and Chrome on iOS are required to use the system WebKit; they’re hardly really Firefox or Chrome at all.)

The same scenario applies to the WinCairo port, except this port does not have releases or security updates. Whereas the Apple ports have stable branches with security updates, with WinCairo, companies take a snapshot of WebKit trunk, make their own changes, and ship products with that. Who’s using WinCairo? Probably lots of companies; the biggest one I’m aware of uses a WinCairo-based port in its AAA video games. It’s safe to assume few to no companies are handling security backports for their downstream WinCairo branches.

Now, on to the Linux ports. WebKitEFL is the WebKit port for the Enlightenment Foundation Libraries. It’s not going to be found in mainstream Linux distributions; it’s mostly used in embedded devices produced by one major vendor. If you know anything at all about the internet of things, you know these devices never get security updates, or if they do, the updates are superficial (updating only some vulnerable components and not others), or end a couple months after the product is purchased. WebKitEFL does not bother with pretense here: like WinCairo, it has never had security updates. And again, it’s safe to assume few to no companies are handling security backports for their downstream branches.

None of the above ports matter for most Linux users. The ports available on mainstream Linux distributions are QtWebKit and WebKitGTK+. Most of this blog will focus on WebKitGTK+, since that’s the port I work on, and the port that matters most to most of the people who are reading this blog, but QtWebKit is widely-used and deserves some attention first.

It’s broken, too.


QtWebKit is the WebKit port used by Qt software, most notably KDE. Some cherry-picked examples of popular applications using QtWebKit are Amarok, Calligra, KDevelop, KMail, Kontact, KTorrent, Quassel, Rekonq, and Tomahawk. QtWebKit provides an excellent Qt API, so in the past it’s been the clear best web engine to use for Qt applications.

After Google forked WebKit, the QtWebKit developers announced they were switching to work on QtWebEngine, which is based on Chromium, instead. This quickly led to the removal of QtWebKit from the WebKit project. This was good for the developers of other WebKit ports, since lots of Qt-specific code was removed, but it was terrible for KDE and other QtWebKit users. QtWebKit is still maintained in Qt and is getting some backports, but from a quick check of their git repository it’s obvious that it’s not receiving many security updates. This is hardly unexpected; QtWebKit is now years behind upstream, so providing security updates would be very difficult. There’s not much hope left for QtWebKit; these applications have hundreds of known vulnerabilities that will never be fixed. Applications should port to QtWebEngine, but for many applications this may not be easy or even possible.

Update: As pointed out in the comments, there is some effort to update QtWebKit. I was aware of this and in retrospect should have mentioned this in the original version of this article, because it is relevant. Keep an eye out for this; I am not confident it will make its way into upstream Qt, but if it does, this problem could be solved.


WebKitGTK+ is the port used by GTK+ software. It’s most strongly associated with its flagship browser, Epiphany, but it’s also used in other places. Some of the more notable users include Anjuta, Banshee, Bijiben (GNOME Notes), Devhelp, Empathy, Evolution, Geany, Geary, GIMP, gitg, GNOME Builder, GNOME Documents, GNOME Initial Setup, GNOME Online Accounts, GnuCash, gThumb, Liferea, Midori, Rhythmbox, Shotwell, Sushi, and Yelp (GNOME Help). In short, it’s kind of important, not only for GNOME but also for Ubuntu and Elementary. Just as QtWebKit used to be the web engine of choice for Qt applications, WebKitGTK+ is the clear choice for GTK+ applications due to its nice GObject APIs.

Historically, WebKitGTK+ has not had security updates. Of course, we released updates with security fixes, but not with CVE identifiers, which is how software developers track security issues; as far as distributors are concerned, without a CVE identifier, there is no security issue, and so, with a few exceptions, distributions did not release our updates to users. For many applications, this is not so bad, but for high-risk applications like web browsers and email clients, it’s a huge problem.

So, we’re trying to improve. Early last year, my colleagues put together our first real security advisory with CVE identifiers; the hope was that this would encourage distributors to take our updates. This required data provided by Apple to WebKit security team members on which bugs correspond to which CVEs, allowing the correlation of Bugzilla IDs to Subversion revisions to determine in which WebKitGTK+ release an issue has been fixed. That data is critical, because without it, there’s no way to know if an issue has been fixed in a particular release or not. After we released this first advisory, Apple stopped providing the data; this was probably just a coincidence due to some unrelated internal changes at Apple, but it certainly threw a wrench in our plans for further security advisories.

This changed in November, when I had the pleasure of attending the WebKit Contributors Meeting at Apple’s headquarters, where I was finally able to meet many of the developers I had interacted with online. At the event, I gave a presentation on our predicament, and asked Apple to give us information on which Bugzilla bugs correspond to which CVEs. Apple kindly provided the necessary data a few weeks later.

During the Web Engines Hackfest, a yearly event that occurs at Igalia’s office in A Coruña, my colleagues used this data to put together WebKitGTK+ Security Advisory WSA-2015-0002, a list of over 130 vulnerabilities disclosed since the first advisory. (The Web Engines Hackfest was sponsored by Igalia, my employer, and by our friends at Collabora. I’m supposed to include their logos here to advertise how cool it is that they support the hackfest, but given all the doom and gloom in this post, I decided they would perhaps prefer not to have their logos attached to it.)

Note that 130 vulnerabilities is an overcount, as it includes some issues that are specific to the Apple ports. (In the future, we’ll try to filter these out.) Only one of the issues — a serious error in the networking backend shared by WebKitGTK+ and WebKitEFL — resided in platform-specific code; the rest of the issues affecting WebKitGTK+ were all cross-platform issues. This is probably partly because the trickiest code is cross-platform code, and partly because security researchers focus on Apple’s ports.

Anyway, we posted WSA-2015-0002 to the oss-security mailing list to make sure distributors would notice, crossed our fingers, and hoped that distributors would take the advisory seriously. That was one month ago.

Distribution Updates

There are basically three different approaches distributions can take to software updates. The first approach is to update to the latest stable upstream version as soon as, or shortly after, it’s released. This is the strategy employed by Arch Linux. Arch does not provide any security support per se; it’s not necessary, so long as upstream projects release real updates for security problems and not simply patches. Accordingly, Arch almost always has the latest version of WebKitGTK+.

The second main approach, used by Fedora, is to provide only stable release updates. This is more cautious, reflecting that big updates can break things, so they should only occur when upgrading to a new version of the operating system. For instance, Fedora 22 shipped with WebKitGTK+ 2.8, so it would release updates to new 2.8.x versions, but not to WebKitGTK+ 2.10.x versions.

The third approach, followed by most distributions, is to take version upgrades only rarely, or not at all. For smaller distributions this may be an issue of manpower, but for major distributions it’s a matter of avoiding regressions in stable releases. Holding back on version updates actually works well for most software. When security problems arise, distribution maintainers for major distributions backport fixes and release updates. The problem is that this is not feasible for web engines; due to the huge volume of vulnerabilities that need to be fixed, security issues can only practically be handled upstream.

So what’s happened since WSA-2015-0002 was released? Did it convince distributions to take WebKitGTK+ security seriously? Hardly. Fedora is the only distribution that has made any changes in response to WSA-2015-0002, and that’s because I’m one of the Fedora maintainers. (I’m pleased to announce that we have a 2.10.7 update headed to both Fedora 23 and Fedora 22 right now. In the future, we plan to release the latest stable version of WebKitGTK+ as an update to all supported versions of Fedora shortly after it’s released upstream.)


Ubuntu releases WebKitGTK+ updates somewhat inconsistently. For instance, Ubuntu 14.04 came with WebKitGTK+ 2.4.0. 2.4.8 is available via updates, but even though 2.4.9 was released upstream over eight months ago, it has not yet been released as an update for Ubuntu 14.04.

By comparison, Ubuntu 15.10 (the latest release) shipped with WebKitGTK+ 2.8.5, which has never been updated; it’s affected by about 40 vulnerabilities fixed in the latest upstream release. Ubuntu organizes its software into various repositories, and provides security support only to software in the main repository. This version of WebKitGTK+ is in Ubuntu’s “universe” repository, not in main, so it is excluded from security support. Ubuntu users might be surprised to learn that a large portion of Ubuntu software is in universe and therefore excluded from security support; this is in contrast to almost all other distributions, which typically provide security updates for all the software they ship.

I’m calling out Ubuntu here not because it is especially negligent, but simply because it is our biggest distributor. It’s not doing any worse than most of our other distributors.


Debian provides WebKit updates to users running unstable, and to testing except during freeze periods, but not to released versions of Debian. Debian is unique in that it has a formal policy on WebKit updates. Here it is, reproduced in full:

Debian 8 includes several browser engines which are affected by a steady stream of security vulnerabilities. The high rate of vulnerabilities and partial lack of upstream support in the form of long term branches make it very difficult to support these browsers with backported security fixes. Additionally, library interdependencies make it impossible to update to newer upstream releases. Therefore, browsers built upon the webkit, qtwebkit and khtml engines are included in Jessie, but not covered by security support. These browsers should not be used against untrusted websites.

For general web browser use we recommend Iceweasel or Chromium.

Chromium – while built upon the Webkit codebase – is a leaf package, which will be kept up-to-date by rebuilding the current Chromium releases for stable. Iceweasel and Icedove will also be kept up-to-date by rebuilding the current ESR releases for stable.

(Iceweasel and Icedove are Debian’s de-branded versions of Firefox and Thunderbird, the product of an old trademark spat with Mozilla.)

Debian is correct that we do not provide long term support branches, as it would be very difficult to backport security fixes. But it is not correct that “library interdependencies make it impossible to update to newer upstream releases.” This might have been true in the past, but for several years now, we have avoided requiring new versions of libraries whenever it would cause problems for distributions, and — with one big exception that I will discuss below — we ensure that each release maintains both API and ABI compatibility. (Distribution maintainers should feel free to get in touch if we accidentally introduce some compatibility issue for your distribution; if you’re having trouble taking our updates, we want to help. I recently worked with openSUSE to make sure WebKitGTK+ can still be compiled with GCC 4.8, for example.)

The risk in releasing updates is that WebKitGTK+ is not a leaf package: a bad update could break some application. This seems to me like a good reason for application maintainers to carefully test the updates, rather than a reason to withhold security updates from users, but it’s true there is some risk here. One possible solution would be to have two different WebKitGTK+ packages, say, webkitgtk-secure, which would receive updates and be used by high-risk software like web browsers and email clients, and a second webkitgtk-stable package that would not receive updates to reduce regression potential.

Recommended Distributions

We regularly receive bug reports from users with very old versions of WebKit, who trust their distributors to handle security for them and might not even realize they are running ancient, unsafe versions of WebKit. I strongly recommend using a distribution that releases WebKitGTK+ updates shortly after they’re released upstream. That is currently only Arch and Fedora. (You can also safely use WebKitGTK+ in Debian testing — except during its long freeze periods — and Debian unstable, and maybe also in openSUSE Tumbleweed. Just be aware that the stable releases of these distributions are currently not receiving our security updates.) I would like to add more distributions to this list, but I’m currently not aware of any more that qualify.

The Great API Break

So, if only distributions would ship the latest release of WebKitGTK+, then everything would be good, right? Nope, because of a large API change that occurred two and a half years ago, called WebKit2.

WebKit (an API layer within the WebKit project) and WebKit2 are two separate APIs around WebCore. WebCore is the portion of the WebKit project that Google forked into Blink; it’s too low-level to be used directly by applications, so it’s wrapped by the nicer WebKit and WebKit2 APIs. The difference between the WebKit and WebKit2 APIs is that WebKit2 splits work into multiple secondary processes. Aside from the UI process, an application will have one or many separate web processes (for the actual page rendering), possibly a separate network process, and possibly a database process for IndexedDB. This is good for security, because it allows the secondary processes to be sandboxed: the web process is the one that’s likely to be compromised first, so it should not have the ability to access the filesystem or the network. (Remember, though, that there is no Linux sandbox yet, so this is currently only a theoretical benefit.) The other main benefit is robustness. If a web site crashes the renderer, only a single web process crashes (corresponding to one tab in Epiphany), not the entire browser. UI process crashes are comparatively rare.

Intermission: Certificate Verification

Another advantage provided by the API change is the opportunity to handle HTTPS connections more securely. In the original WebKitGTK+ API, applications must handle certificate verification on their own. This was a serious mistake; predictably, applications performed no verification at all, or did so improperly. For instance, take this Shotwell bug which is not fixed in any released version of Shotwell, or this Banshee bug which is still open. Probably many more applications are affected, because I have not done a comprehensive check. The new API is secure by default; applications can ignore verification errors, but only if they go out of their way to do so.

Remember that even though WebKitGTK+ 2.4.9 was released upstream over eight months ago, Ubuntu 14.04 is still on 2.4.8? It’s worth mentioning that 2.4.9 contains the fix for that serious networking backend issue I mentioned earlier (CVE-2015-2330). The bug is that TLS certificate verification was not performed until an HTTP response was received from the server; it’s supposed to be performed before sending an HTTP request, to prevent secure cookies from leaking. This is a disaster, as attackers can easily use it to get your session cookie and then control your user account on most websites. (Credit to Ross Lagerwall for reporting that issue.) We reported this separately to oss-security due to its severity, but that was not enough to convince distributions to update. But most applications in Ubuntu 14.04, including Epiphany and Midori, would not even benefit from this fix, because the change only affects WebKit2; remember, there’s no certificate verification in the original WebKitGTK+ API. (Modern versions of Epiphany do use WebKit2, but not the old version included in Ubuntu 14.04.) Old versions of Epiphany and Midori load pages even if certificate verification fails; the verification result is only used to change the status of a security indicator, basically giving up your session cookies to attackers.

Removing WebKit1

WebKit2 has been around for Mac and iOS for longer, but the first stable release for WebKitGTK+ was the appropriately-versioned WebKitGTK+ 2.0, in March 2013. This release actually contained three different APIs: webkitgtk-1.0, webkitgtk-3.0, and webkit2gtk-3.0. webkitgtk-1.0 was the original API, used by GTK+ 2 applications. webkitgtk-3.0 was the same thing for GTK+ 3 applications, and webkit2gtk-3.0 was the new WebKit2 API, available only for GTK+ 3 applications.

Maybe it should have remained that way.

But, since the original API was a maintenance burden and not as stable or robust as WebKit2, it was deleted after the WebKitGTK+ 2.4 release in March 2014. Applications had had a full year to upgrade; surely that was long enough, right? The original WebKit API layer is still maintained for the Mac, iOS, and Windows ports, but the GTK+ API for it is long gone. WebKitGTK+ 2.6 (September 2014) was released with only one API, webkit2gtk-4.0, which was basically the same as webkit2gtk-3.0 except for a couple small fixes; most applications were able to upgrade by simply changing the version number. Since then, we have maintained API and ABI compatibility for webkit2gtk-4.0, and intend to do so indefinitely, hopefully until GTK+ 4.0.

A lot of good that does for applications using the API that was removed.

WebKit2 Adoption

While upgrading to the WebKit2 API will be easy for most applications (it took me ten minutes to upgrade GNOME Initial Setup), for many others it will be a significant challenge. Since rendering occurs out of process in WebKit2, the DOM API can only be accessed by means of a shared object injected into the web process. For applications that perform only a small amount of DOM manipulation, this is a minor inconvenience compared to the old API. For applications that use extensive DOM manipulation — the email clients Evolution and Geary, for instance — it’s not just an inconvenience, but a major undertaking to upgrade to the new API. Worse, some applications (including both Geary and Evolution) placed GTK+ widgets inside the web view; this is no longer possible, so such widgets need to be rewritten using HTML5. To say nothing of applications like GIMP and Geany that are stuck on GTK+ 2. They first have to upgrade to GTK+ 3 before they can consider upgrading to modern WebKitGTK+. GIMP is working on a GTK+ 3 port anyway (GIMP uses WebKitGTK+ for its help browser), but many applications like Geany (the IDE, not to be confused with Geary) are content to remain on GTK+ 2 forever. Such applications are out of luck.
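As a rough sketch of the web-process mechanism (the entry points below are real WebKit2 API, but the skeleton is illustrative, not code from any of the applications mentioned): the extension is built as a shared object that WebKit loads into the web process, and all DOM access happens there.

```c
#include <webkit2/webkit-web-extension.h>

/* Called in the web process for each page that gets created;
 * this is the only place the DOM can be touched in WebKit2. */
static void
on_page_created (WebKitWebExtension *extension,
                 WebKitWebPage      *web_page,
                 gpointer            user_data)
{
  WebKitDOMDocument *document = webkit_web_page_get_dom_document (web_page);
  (void) document;
  /* ... DOM manipulation goes here ... */
}

/* Entry point WebKit looks for when loading the shared object. */
G_MODULE_EXPORT void
webkit_web_extension_initialize (WebKitWebExtension *extension)
{
  g_signal_connect (extension, "page-created",
                    G_CALLBACK (on_page_created), NULL);
}
```

For an application that previously poked at the DOM directly from its UI code, every such call site has to be moved across this process boundary, which is exactly why the migration is a major undertaking for DOM-heavy applications.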

As you might expect, most applications are still using the old API. How does this work if it was already deleted? Distributions maintain separate packages, one for old WebKitGTK+ 2.4, and one for modern WebKitGTK+. WebKitGTK+ 2.4 has not had any updates since last May, and the last real comprehensive security update was over one year ago. Since then, almost 130 vulnerabilities have been fixed in newer versions of WebKitGTK+. But since distributions continue to ship the old version, few applications are even thinking about upgrading. In the case of the email clients, the Evolution developers are hoping to upgrade later this year, but Geary is completely dead upstream and probably will never be upgraded. How comfortable are you with using an email client that has now had no security updates for a year?

(It’s possible there might be a further 2.4 release, because WebKitGTK+ 2.4 is incompatible with GTK+ 3.20, but maybe not, and if there is, it certainly will not include many security fixes.)

Fixing Things

How do we fix this? Well, for applications using modern WebKitGTK+, it’s a simple problem: distributions simply have to start taking our security updates.

For applications stuck on WebKitGTK+ 2.4, I see a few different options:

  1. We could attempt to provide security backports to WebKitGTK+ 2.4. This would be very time consuming and therefore very expensive, so count this out.
  2. We could resurrect the original webkitgtk-1.0 and webkitgtk-3.0 APIs. Again, this is not likely to happen; it would be a lot of work to restore them, and they were removed to reduce maintenance burden in the first place. (I can’t help but feel that removing them may have been a mistake, but my colleagues reasonably disagree.)
  3. Major distributions could remove the old WebKitGTK+ compatibility packages. That will force applications to upgrade, but many will not have the manpower to do so: good applications will be lost. This is probably the only realistic way to fix the security problem, but it’s a very unfortunate one. (But don’t forget about QtWebKit. QtWebKit is based on an even older version of WebKit than WebKitGTK+ 2.4. It doesn’t make much sense to allow one insecure version of WebKit but not another.)

Or, a far more likely possibility: we could do nothing, and keep using insecure software.

BarCamp Yangon 2016

I participated in both days of BarCamp Yangon 2016 (January 30–31, 2016), presenting five topics that I had planned beforehand.


Day #1

On the first day, we arrived at Myanmar ICT Park, where BarCamp Yangon 2016 took place, at around 8:45 AM. Soon after arriving, we attended the opening ceremony together with Pravin Satpute, Leap Sok, and other members of the Fedora Project – Myanmar Community and The Cyber Wings Team. After the opening ceremony, I began registering some of my topics.

Before starting my own topics, I assisted Pravin with his very first talk in Myanmar by introducing him to the audience, helping with translation, and fielding Fedora-related Q&A.

Topic#1 “Building a Mini Google Apps Server for Offices” (~25 people attended)

This topic was intended for offices and server administrators, and aimed to show that different services can be integrated with LDAP as the central authentication database. There were no slides; I demonstrated everything live on a Virtual Private Server (VPS) from DigitalOcean.


Topic#2 “Hide & Seek on an IP” (~50 people attended)

This topic was for those who want to hide their IP address, as well as network administrators and network security professionals. Again there were no slides, as I demonstrated live with three Virtual Private Servers. Focusing on Apache web server logs, I showed how to hide an IP using Virtual Private Networks (VPNs), SSH tunnels, and a Squid proxy — and how, from the network administrator's side, to trace a masked IP back to its source.


Topic#3 “Security Hardening on Linux” (~30 people attended)

This topic was aimed especially at system administrators and server administrators, explaining permissions in Linux. Starting from the basics of Discretionary Access Control (DAC), I also covered Mandatory Access Control (MAC), with a focus on SELinux.


After all three first-day topics were done, we left Myanmar ICT Park at around 5:00 PM.

Day #2

On the second day of BarCamp Yangon 2016, we arrived at Myanmar ICT Park at around 8:30 AM and, as usual, started registration for the following topics.


Topic#1 “Secure Communication with Cryptography” (~100 people attended)

Download> secure-communication-with-cryptography.pdf

This topic took place in the Conference Hall of MICT Park’s Main Building at 12:00 PM. I started with why and how data gets leaked and how it can be stolen, then explained encryption/decryption methods that protect raw data from theft, covering OpenSSL, GnuPG, and steganography. The audience was so large that the conference hall was nearly full. I got very good questions from the audience, and I believe I answered them well.
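To give a flavor of what such a live demo typically covers, here are hedged examples of the kind of commands involved (file names and passphrases are placeholders, not the exact commands from the talk):

```shell
# Create a sample file to protect
printf 'attack at dawn\n' > secret.txt

# Symmetric encryption with OpenSSL, then decryption back to stdout
openssl enc -aes-256-cbc -pbkdf2 -k 'my passphrase' -in secret.txt -out secret.txt.enc
openssl enc -d -aes-256-cbc -pbkdf2 -k 'my passphrase' -in secret.txt.enc

# Equivalent ideas with GnuPG (the second needs the recipient's public key):
#   gpg --batch --symmetric --passphrase 'my passphrase' secret.txt
#   gpg --encrypt --recipient alice@example.com secret.txt
```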

(Photo credit: BarCamp Yangon Facebook Page)


Topic#2 “Internet Security for End-Users” (~70 people attended)

Download> internetsecurity-userlevel.pdf

At 2:00 PM, in the MCCi Jade room, I gave this topic. Since it was aimed at end-users, I explained the ways individuals and companies get attacked, together with the best available defenses for each type of attack: viruses, embedded programs, phishing, keystroke logging, social engineering, man-in-the-middle, fake emails, sniffing, hybrid attacks, and more.


After both talks, Pravin, Leap Sok, and I took photos together with members of the Fedora Project – Myanmar Community, and we left Myanmar ICT Park in the evening.


Featured Image’s Credit: BarCamp Yangon’s Facebook Page -> https://web.facebook.com/barcampyangon

[Short Tip] What not to forget when controlling Windows Servers via Ansible Tower


Ansible supports Windows with an entire set of modules, so it is also possible to run Ansible playbooks targeting Windows systems right from Ansible Tower. However, since Windows is managed via WinRM rather than SSH, the appropriate variables must be set in the machine's inventory definition.


Note that the variable names for Ansible 1.9 differ a bit from those for 2.0 and above. Also, keep in mind that you might need to create an additional set of credentials.
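As a sketch, an inventory entry for Ansible 2.x might look like the following (host name and credentials are placeholders; on Ansible 1.9 the equivalent variables were named ansible_ssh_user, ansible_ssh_pass, and ansible_ssh_port):

```ini
[windows]
winserver.example.com

[windows:vars]
ansible_user=Administrator
ansible_password=ChangeMe123
ansible_port=5986
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
```

The last variable skips certificate validation for the WinRM HTTPS listener, which is common in labs with self-signed certificates but should be dropped in production.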

leaking buffers in wayland

So in my last blog post I mentioned Matthias was getting SIGBUS when using wayland for a while. You may remember that I guessed the problem was that his /tmp was filling up, and so I produced a patch to stop using /tmp and use memfd_create instead. This resolved the SIGBUS problem for him, but there was something gnawing at me: why was his /tmp filling up? I know gnome-terminal stores its unlimited scrollback buffer in an unlinked file in /tmp, so that was one theory. I have also seen, in some cases, firefox downloading files to /tmp. Neither explanation sat well with me. Scrollback buffers don’t get that large very quickly, and Matthias was seeing the problem several times a day. I also doubted he was downloading large files in firefox several times a day. Nonetheless, I shrugged and moved on to other things…

…until Thursday. Kevin Fenzi mentioned on IRC that he was experiencing a 12GB leak in gnome-shell. That piqued my interest and seemed pretty serious, so I started to troubleshoot with him. My first question was “Are you using the proprietary nvidia driver?” I asked this because I know the nvidia driver has in the past had issues with leaking memory and gnome-shell. When Kevin responded that he was on intel hardware, I asked him to post the output of /proc/$(pidof gnome-shell)/maps so we could see the makeup of the lost memory. Was it the heap, or some other memory-mapped regions? To my surprise it was the memfd_create’d shared memory segments from my last post! So window pixel data was getting leaked. This also explains why /tmp was getting filled up for Matthias before: previously, the shared memory segments resided in /tmp, so it wouldn’t have taken long for them to use up /tmp.

Of course, the compositor doesn’t create the leaked segments, the clients do, and then those clients share them with the compositor. So we probed a little deeper and found the origin of the leaking segments; they were coming from gnome-terminal. My next thought was to try to reproduce. After a few minutes I found out that typing:

$ while true; do echo; done

into my terminal and then switching focus to and from the terminal window made it leak a segment every time focus changed. So I had a reproducer and just needed to spend some time to debug it. Unfortunately, it was the end of the day and I had to get my daughter from daycare, so I shelved it for the evening. I did notice before I left, though, one oddity in the gtk+ wayland code: it was calling a function named _gdk_wayland_shm_surface_set_busy that contained a call to cairo_surface_reference. You would expect a function called set_something to be idempotent. That is to say, if you call it multiple times it shouldn’t add a new reference to a cairo surface each time. Could it be the surface was getting set “busy” when it was already set busy, causing it to leak a reference to the cairo surface associated with the shared memory, keeping it from getting cleaned up later?

I found out the next day, that indeed, was the case. That’s when I came up with a patch to make sure we never call set_busy when the surface was already busy. Sure enough, it fixed the leak. I wasn’t fully confident in it, though. I didn’t have a full big picture understanding of the whole workflow between compositor and gtk+, and it wasn’t clear to me if set_busy was supposed to ever get called when the surface was busy. I got in contact with the original author of the code, Jasper St. Pierre, to get his take. He thought the patch was okay (modulo some small style changes), but also said that part of the existing code needed to be redone.
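In spirit, the fix looks something like this (a hypothetical reconstruction to show the shape of the change; the struct and field names are stand-ins, not the actual gtk+ code):

```c
#include <cairo.h>

/* Stand-in for the per-surface bookkeeping gtk+ keeps for a
 * wayland shm buffer. */
typedef struct {
  cairo_surface_t *surface;
  int              busy;
} ShmSurfaceData;

/* Only take a reference on the transition from not-busy to busy.
 * The original code referenced unconditionally, so calling this
 * twice in a row leaked a reference and kept the shared memory
 * segment alive forever. */
static void
shm_surface_set_busy (ShmSurfaceData *data)
{
  if (!data->busy)
    {
      data->busy = 1;
      cairo_surface_reference (data->surface);
    }
}
```

With the guard in place, the setter is idempotent, which is what a reader of a function named set_busy would expect in the first place.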

The point of the busy flag was to mark a shared memory region as currently being read by the compositor. If the buffer was busy, then gtk+ couldn’t draw to it without risking stepping on the compositor’s toes. If gtk+ needed to draw to a busy surface, it instead allocated a temporary buffer to do the drawing and then composited that temporary buffer back to the shared buffer at a later time. The problem was that, as written, the “later time” wasn’t necessarily when the shared buffer was available again. The temporary buffer was created right before the toolkit staged some pixel updates, and copied back to the shared buffer after the toolkit was done with that one draw operation. The temporary buffer was scoped to the drawing operation, but the shared buffer wouldn’t be available for new contents until the next frame event some milliseconds later.

So my plan, after conferring with Matthias, was to change the code to not rely on getting the shared buffer back. We’d allocate a “staging” buffer, do all draw operations to it, hand it off to the compositor when we’re done doing updates, and forget about it. If we needed to do new drawing we’d allocate a new staging buffer, and so on. One downside of this approach is that the new staging buffer has to be initialized with the contents of the previously handed-off buffer. This is because the next drawing operation may update only a small part of the window (say, to blink a cursor), and we need the rest of the window contents to carry over. This read-back operation isn’t ideal, since it means copying around megabytes of pixel data. Thankfully, the wayland protocol has a mechanism in place to avoid the costly copy in most cases:

→ If a client receives a release event before the frame callback
→ requested in the same wl_surface.commit that attaches this
→ wl_buffer to a surface, then the client is immediately free to
→ re-use the buffer and its backing storage, and does not need a
→ second buffer for the next surface content update.

So that’s our out. If we get a release event on the buffer before the next frame event, the compositor is giving us the buffer back and we can reuse it as the next staging buffer directly. We would only need to allocate a new staging buffer if the compositor was tardy in returning the buffer to us. Alright, I had a plan and hammered out a patch on Friday. It didn’t leak, and from playing with the machine for a while, everything seemed to function, but there was one hiccup: I set a breakpoint in gdb to see if the buffer release event was coming in, and it wasn’t. That meant we were always doing the expensive copy operation. Again, I had to go, so I posted the patch to bugzilla and didn’t look at it again until the weekend. That’s when I discovered mutter wasn’t sending the release event for one buffer until it got replaced by another. I fixed mutter to send the release event as soon as it uploaded the pixel data to the gpu, and then everything started working great, so I posted the finalized version of the gtk+ patch with a proper commit message, etc.

There’s still some optimization that could be done for compositors that don’t handle early buffer release. Rather than initializing the staging buffer using cairo, we could get away with a lone memcpy() call. We know the buffer is linear and each row sits right next to the previous one in memory, so memcpy might be faster than going through all the cairo/pixman machinery. Alternatively, rather than initializing the staging buffer up front with the contents of the old buffer, we could wait until drawing is complete, and then only copy over the parts of the buffer that haven’t been overwritten. Hard to say what the right way to go is without profiling, but both weston on GL and mutter support the early release feature now, so it may not be worth spending too much time on anyway.

Input devices in Steam Big Picture mode

I’ve just updated the Steam package to fix input detection of some devices in Big Picture mode. The package now comes with additional configuration files for input devices, so that they are properly recognized and function in Big Picture mode. Check below for the complete list of input device configurations that have been added:

Configuration for the following devices is part of the original Steam tarball:

  • Steam Controller with USB adapter
  • HTC Vive HID Sensor with USB adapter

Detection for the following device has been modified to make it appear as a Game Pad and not as a mouse (due to its touchpad). This prevents the “ghost” keypresses in Steam Big Picture mode:

  • Nvidia Shield Controller with USB cable

Detection for the following device has been modified to have them properly detected as mice/keyboards and not joysticks due to a bug in the Linux kernel. This prevents the “ghost” keypresses in Steam Big Picture mode:

  • Microsoft Microsoft Wireless Optical Desktop® 2.10
  • Microsoft Wireless Desktop – Comfort Edition
  • Microsoft Microsoft® Digital Media Pro Keyboard
  • Microsoft Corp. Digital Media Pro Keyboard
  • Microsoft Microsoft® Digital Media Keyboard
  • Microsoft Corp. Digital Media Keyboard 1.0A
  • Microsoft Microsoft® Digital Media Keyboard 3000
  • Microsoft Microsoft® 2.4GHz Transceiver v6.0
  • Microsoft Microsoft® 2.4GHz Transceiver v8.0
  • Microsoft Corp. Nano Transceiver v1.0 for Bluetooth
  • Microsoft Wireless Mobile Mouse 1000
  • Microsoft Wireless Desktop 3000
  • Microsoft® SideWinder(TM) 2.4GHz Transceiver
  • Microsoft Corp. Wired Keyboard 600
  • Microsoft Corp. Sidewinder X4 keyboard
  • Microsoft® 2.4GHz Transceiver v9.0
  • Microsoft® Nano Transceiver v2.1
  • Microsoft Sculpt Ergonomic Keyboard (5KV-00001)
  • Microsoft® Nano Transceiver v1.0
  • Microsoft Wireless Keyboard 800
  • Microsoft® Nano Transceiver v2.0
  • WACOM CTE-640-U V4.0-3
  • Wacom Co., Ltd Graphire 4 6×8
  • Wacom Bamboo Pen and Touch CTH-460
  • A4 Tech Co., G7 750 mouse
  • A4 Tech Co., Ltd Bloody TL80 Terminator Laser Gaming Mouse
  • A4 Tech Co., Ltd Bloody RT7 Terminator Wireless
  • Modecom MC-5006 Keyboard
  • A4 Tech Co., Ltd Terminator TL9 Laser Gaming Mouse
  • A4 Tech Co., Ltd Bloody V5
  • A4 Tech Co., Ltd Bloody R3 mouse
  • A4 Tech Co., Ltd X-718BK Oscar Optical Gaming Mouse
  • A4 Tech Co., Ltd XL-750BK Laser Mouse
  • A4 Tech Co., Sharkoon Fireglider Optical
  • Cooler Master Storm Mizar Mouse

The relevant repository page has been updated accordingly. If you have another misbehaving device that does not work properly in Steam Big Picture mode, just contact me and I will try to add the device definitions to the upstream repositories.

SCaLE 14x (2016) Event Report – Pasadena, California

At a Glance: What is SCaLE?

Our Ambassadors in the Field

This report is for the following Ambassadors:

Welcome to SCaLE!


What is SCaLE?

SCaLE is a four-day event featuring a variety of free and open-source training sessions and talks.

This is the fourteenth annual SCaLE in the Los Angeles (LA) area. This year, the venue moved from the hustle and bustle of the LAX airport location to Pasadena, home of the Rose Parade and a growing tech scene.

The SCaLE conference showcases Linux and open-source technology topics to the international community. The event’s chairperson, Mr. Ilan Rabinovitch, is a very conscientious, helpful, and friendly individual who inspires the rest of the SCaLE team to create such a great conference.

Day 1: 21 January


Brian Monroe with newest upgrade!

We arrived in the area Thursday morning to deliver banners for the Expo portion of the conference. This allowed our Ambassadors to begin familiarizing themselves with the new venue’s layout, including parking. Loading and unloading proved a little challenging for some, because certain lots were a block away.

We checked in at the registration booth to receive our badges and admired our surroundings, noting just how spacious the new buildings are compared to previous venues.

I attended a SCaLE Speaker workshop, which provided valuable presentation tips.

Later that day, I attended the FLOSS Reflection event. I enjoyed hearing Jono Bacon and Maddog Hall share their outlook on open source, and learned of the unfortunate passing of Ian Murdock, founder of Debian GNU/Linux. I also felt inspired hearing Keila Banks discuss how she spreads the word about open source among her school classmates.

I also sat in on the Linux Sucks event with presenter Bryan Lunduke, who provides a tongue-in-cheek view of what could definitely use improvement in the Linux world, as well as what’s great.

I later checked into my hotel. Although the conditions outside were average and the hotel was far away (about 11 blocks, enough to warrant driving), the room itself was clean and comfortable.


Home away from home for a few days…

Day 2: 22 January

Alex Acosta and Brian Monroe discussing some Ambassador business...


On the 22nd, we returned to set up equipment at the Exhibit Hall. Brian rolled in a weighty event box, Matthew brought in a broadcasting setup, Alex Acosta arrived to assist, and I (with Brian’s help) carried in the banners. After introducing ourselves and saying hello, we set up, configured, and, where needed, tested the live-stream hardware, OLPC, media/swag, and banners. As an added bonus, I put out a bowl of lollipops, which turned out to be quite a hit.

We finished setting up well before our 2 P.M. start time and observed how convenient our spot was: a high-traffic area adjacent to the Red Hat and OpenShift booths. Being near the Linux New Media, Python, Docker, and O’Reilly tables also helped, as it brought in a lot of ancillary traffic.

Broadcasting setup is GO!


Since we had a fair number of Ambassadors, we took shifts as needed for meal breaks or attending the occasional workshop. We did notice that there were not many inexpensive places to eat within a quick walk. Quite a few of the nearby eateries, including a Starbucks, had closed down despite posted hours indicating that they should otherwise be open.

Since we had a finite amount of swag, we put out only a few items at a time to stem “swag vacuuming,” and handed shirts out for really great questions and feedback. Our strategy for this conference was not just to advocate Fedora, but to actively listen to the needs of passersby as well.

I also realized that people tend to approach the Fedora table more if Ambassadors stand in front of the booth table instead of behind it. Rather than quiz people on Fedora trivia and hand out items, I sensed that some were actively interested in learning more about trying Fedora, or needed to express constructively why their needs had moved them away from Fedora.

The general questions I asked were open-ended (not answerable with a direct “yes” or “no”), to engage people in discussion and establish rapport. Such questions included:

  • So tell me what brings you here today?
  • How do you use Fedora?
  • If not using Fedora, what do you use and why?
  • Do you have any suggestions or comments for us to pass back upstream to Fedora?

The survey has spoken: Fedora lands near the top of the list!

Notable news during the evening included Bad Voltage’s Family Feud segment, which revealed that Fedora made number 2 on the Linux popularity list of surveyed participants. A few audience members exclaimed “Wow!” or “Awesome!” in pleasant surprise.

Another highlight from the Bad Voltage show: there is apparently a Hannah Montana Linux distribution.

Suggestion / Feedback Box Items: 22 January

At our booth, we had a box for visitors to leave feedback, suggestions, and comments about Fedora. The following suggestions and feedback are from the box on Friday, January 22nd.

  1. Workspace: One felt burned (their words) by GNOME 3. They were very unhappy with its instability and its incorporation into F23 seemingly without ample testing.
  2. Spins: One requested a Raspberry Pi spin of Fedora, if one doesn’t exist. If it does exist, better awareness of its existence would be greatly appreciated.
  3. Usage: One person uses the Open Build Service and OpenQA (?) in their day-to-day.
  4. Newcomers: One asked for more accessibility for newcomers. Acronyms could be better spelled out in documentation.
  5. Newcomers: One felt that experts could be less condescending to the newcomers. These new topics have a high learning curve.
  6. Spins: One wondered whether an e-commerce and/or web developer spin was available.
  7. Newcomers: One felt installation for newcomers could be improved. She had to call a tech friend to finish off the installation; she never had to do this with commercial operating systems.
  8. OLPC: One has an OLPC which gave him Broadcom headaches and he felt that could be improved.
  9. Newcomers / Dev: One felt the community was disconnected. He cited that some type of on-boarding could help, perhaps more classes at conferences or live demos online covering how to get involved with the community and how to contribute.
  10. Marketing: One guest uses Ubuntu. This was his first time hearing about the Labs… his interest was piqued. Sounds like more advertising of Labs / Spins could be done…
  11. Marketing: One uses Ubuntu Server. Describing how Fedora Server compares to Ubuntu might encourage him to switch.
  12. Embedded / Spins: One uses Fedora for embedded discovery, and could use a spin for that.
  13. Newcomers: One suggested that Fedora-sponsored and supported install-fests would be quite helpful.
  14. Comment: One commented they wanted something more modern.
  15. Firefox: One said it crashes. A lot.
  16. Workspace: One said that Ubuntu has some really nice fonts and has a nice look and feel. He wants nice stuff like that.
  17. Rosegarden: One commented that Rosegarden UI is glitchy. He commented a target model for that UI is something like Cakewalk™ GUI.
  18. Video drivers: One requested ATI driver support. Something that worked in 22 broke horribly in 23. He had to buy an NVIDIA card just to fix the issue, but wasn’t happy that it didn’t work cleanly moving up…
  19. Newcomers / Documentation / RDO / OpenStack: One requested that Fedora could be more user-friendly. They found RDO difficult and confusing. He also prefers Ubuntu to Fedora because it is more OpenStack friendly.
  20. Packaging: One mentioned something about “Package Marks”; he finds that he needs to reboot immediately when it does not appear to be necessary. Restarting later would be nice if it is recommended…
  21. Spins: A spin for ARM development was requested.
  22. SCaLE / Newcomers: A track like Ubucon but for Fedora was requested. Maybe it could include install-fest, getting started, and general application use.
  23. Docking/Request: One Ubuntu person commented that he really likes docking features and is hopeful to see inroads on the Fedora side of things…
  24. Bluetooth: One Fedora 20 person said that it couldn’t connect to his phone via Bluetooth to transfer files.

Day 3: 23 January

Prep work for my speaker presentation took a large part of that day. See a recording of my talk here: Perry Rivera – Krita Presentation

Perry Rivera prepares for his upcoming talk...

I later joined in to assist and field questions from guests.

OLPC running Fedora under the hood...

Suggestion / Feedback Box Items: 23 January

  1. RPM / Building / neovim: One person stated that the RPM spec could be modernized and updated. The tutorials are ancient; some training videos could be created or updated. A guide for translating between RPM, dpkg, pacman, and a universal installer would be greatly appreciated: something like a Rosetta Stone for commands. The build process is confusing and could be demystified. What are the basic standards and best practices? A way to easily get neovim would be nice.
  2. Touch: One requested touchscreen support for Fedora.
  3. Newcomers: A few install-fest people stopped by and expressed enthusiasm for trying the Fedora distribution out.
  4. Newcomers: One currently uses Windows, but is interested in trying Fedora. He primarily uses his computer for browsing the web.

That evening, Game Night took place. Quite a fun and well attended event!

One of the many videogames/games at Game Night.

Day 4: 24 January

Matthew Williams interviews Ryan Jarvinen

On Sunday, 24 January, we interviewed Ryan Jarvinen. We covered various topics, including CentOS’s history, Ubuntu, and Red Hat. We also covered the OpenShift project as it relates to building, deploying, and scaling. Finally, we asked him some big questions about Docker images.

We also interviewed Alex Acosta, Fedora Ambassador. We talked about his role as well as what interested him in becoming an Ambassador.

Suggestion / Feedback Box Items: 24 January

  1. Newcomers: Bumped into two install-fest participants. This couple was very happy to attend the install-fest and felt that the instruction was great. One person uses Fedora extensively on his work project.
  2. General kudos: Met someone from b3dtv who uses Fedora for operational day-to-day. He appreciates Fedora dev team’s efforts.
  3. Video-casting: Dual-monitor support for video-casting was requested.
  4. Request: One guest uses Adobe Photoshop in his workflow, including keybindings, shapes, and tools. He was very enthusiastic when hearing more about the Krita software presentation from Perry Rivera.
  5. Request: One guest was wondering if someone could have Fedora meet-ups near Long Beach, assuming one doesn’t exist.
  6. Request / SELinux: One guest expressed that SELinux’s importance should be advertised, and perhaps people should be trained in its operation. Also, SELinux should be made less apt to be turned off by default in the main installer.
  7. KDE / Usability: One user had to downgrade from F23 to F21 because there were too many stability issues with KDE. He’s concerned that KDE 5 crashes quite a bit. He suggested that, until things get ironed out with KDE 5, it might be possible to retrofit it with KDE 4 to keep things stable.
  8. Usage: One guest teaches Linux at UCLA extension and Santa Monica College. He expressed enthusiasm for Fedora.
  9. Kudos / GNOME: One guest mentioned that long ago, Fedora was heavy and not stable. Now, it’s very stable and more useful than other distributions, which are GUI- and app-heavy. He mentioned how he wouldn’t mind GNOME Classic to encourage this stability.
  10. Request: One guest wishes there was some type of static support e-mail address for people who don’t have access to or know how to use IRC.
  11. Pen Testing: One guest primarily uses Kali for his day-to-day. Perhaps a Security Lab that is Kali-like for easier transition might be useful?
  12. Awareness and Diversity: (see Lessons Learned below)

Lessons Learned

We exhausted an entire box of F22 media by Friday, and an entire box of F21 media on Saturday. The peak day was Saturday. Based on the number of visitors, it seems like 4/5 of them use Fedora, and 1/5 do not but might be willing to switch, provided that their suggestions above are incorporated. The 4/5 group mostly expressed support for the Fedora distribution.

Our ambassadors chat with the community for feedback and comments...

We did observe a slight overall lack of gender diversity among visitors stopping by. It might be worth studying whether canvassing Ambassador participation from more groups and encouraging participation in conferences might yield a more balanced attendance ratio.

Ambassador Requests / Ideas

  1. Perhaps someone might ask Matthew Miller to deliver a future keynote at SCaLE?
  2. Something like a Mini-Flock or Fedora Track at SCaLE. Also, a Fedora volunteer-run registration table for said event and activities to drum up interest.
  3. Another table perhaps if costs permit? The current setup seemed just a little cozy.
  4. Provided it is possible, sponsoring Game Night and/or displaying a Fedora banner prominently during Game Night might prove advantageous.
  5. If we order banners in the future, they should be lighter or have rolling mechanisms, in case Ambassadors need to haul them some distance.

Future ideas for the event box

  1. Newer distro media: A few requested F23 media, although in most cases it almost doesn’t seem to matter, as people would generally take older media anyway. We had a few people politely wish for USB keys, but the same people generally took the install media anyway.
  2. Lollipops / candy: Minimal funds for reimbursement for donated sweets
  3. Ribbons for badges: Ribbons are gathering popularity and advertising interest at expos and conventions. It might be advantageous to make various sets of these for fun and pass them out.
  4. Stickers: More! Everyone loved them and wanted to use them on their boxes. Perhaps additional Fedora designs might garner popularity.
  5. Meta Key Sticker: An ambassador suggested making Meta Key Stickers that have the Fedora Logo to cover the Meta Key Logos…
  6. T-shirts: These were a hit and all disappeared by the very end. Could definitely use more.
  7. Pencils: The provided pens were very popular, but quite a few turned out to be duds. To avoid this awkwardness in the future, Fedora-branded pencils are all but guaranteed to work.
  8. Lanyards for the Ambassadors: It’s awkward to advocate Fedora while wearing the stock lanyard of a completely different organization just because that’s what came in the registration goodie bag. Fedora-branded lanyards for the Ambassadors to use at events would better present Fedora to our guests.

Final Thoughts

It appears we left guests with a lasting positive impression: when they asked us whether we work for anyone, we responded that we donate our time to further the Fedora mission.

All staff members brought a great attitude. Feedback on talks and training given by SCaLE speakers indicated that many attendees were engaged and well instructed.

That’s all the news from SCaLE 14x. Hope to see you next year!

Freedom, Friends, Features, First!

The post SCaLE 14x (2016) Event Report – Pasadena, California appeared first on Fedora Community Blog.

GPG key management, part 1

Welcome back to the GPG series, where we are exploring how to make use of GPG with other applications to secure and protect your data. This installment will cover key creation, key revocation certificate creation, and sending the public key to a key server. The second part of key management will cover exporting, revoking, adding and removing keys.

In Fedora, you have the option to use Seahorse or the command line to create a key. In this series, we will cover both.

GPG in the GUI: Seahorse

To start Seahorse, switch to the Activities overview and search for 'seahorse' or 'keys'.

GPG Key Management: Open SeaHorse to get started

Activities > Keys

Open Seahorse (also named Passwords and Keys depending on your desktop environment).

GPG Key Management: Using Seahorse

Then select File > New

GPG Key Management: Generate keys with Seahorse

Then select PGP Key and Continue.

GPG Key Management: Seahorse advanced options

Default options.

Fill out the basic options with your name and email. The advanced options allow you to choose an encryption type, key strength, and expiration date. It is recommended that you use RSA and a key strength of 4096 bits. Choosing an expiration date is also highly recommended for security reasons. Some combinations of algorithm and key length are not secure, so stick with this recommendation unless you know what you are doing.

Recommendations for expiration dates range between two and five years. GPG gives you the ability to modify and extend key expiration dates later. An expiration date helps keep your key secure. If you have multiple addresses, do not worry about which one to choose, because you can add extra email addresses after the key is created.

GPG in the command line

In Fedora, there are two commands for working with GPG: gpg and gpg2. If you are using GPG on the graphical desktop, use gpg2, because the resulting keyring and files integrate with desktop applications like Evolution and Seahorse.

To generate a key, use gpg2 --full-gen-key to choose all the key creation options.

$ gpg2 --full-gen-key

gpg (GnuPG) 2.1.9; Copyright (C) 2015 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection?

Select option 1 to generate an RSA key for signing and encryption.

RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096

You will then be prompted to select a key length. The default is currently 2048, but the maximum length is recommended. As computing capabilities increase, shorter keys will become insecure to use. A longer key is likely to be secure for a longer time. While you can migrate to a new key, the process is a bit tedious.

Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 2

Key expires at Tue 26 Jan 2016 09:44:43 AM EST
Is this correct? (y/N)

When selecting a key expiration, ensure that you add the value for weeks, months, or years. As you can see from the example above, if you fail to do so, GPG defaults to days. As recommended with Seahorse, choose an expiration between two and five years.

GnuPG needs to construct a user ID to identify your key.

Real name: charles profitt
Email address: cprofitt@mail.com
You selected this USER-ID:
    "charles profitt <cprofitt@mail.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o

You are now prompted to enter values so GPG can construct a user ID. Enter your real name and email address. You can add extra email addresses after the key is created. After entering these values, GPG prompts you to check your entries and optionally change the values, accept them, or quit.

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

gpg: key 5D50C86C marked as ultimately trusted
gpg: directory '/home/cprofitt/.gnupg/openpgp-revocs.d' created
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2016-01-26
pub   rsa4096/5D50C86C 2016-01-24 [expires: 2016-01-26]
      Key fingerprint = 647D B957 0E03 4247 8C30  6852 1462 A582 5D50 C86C
uid         [ultimate] charles profitt <cprofitt@mail.com>
sub   rsa4096/8D8FF075 2016-01-24 [expires: 2016-01-26]

Entropy is just as important as key length. Entropy generally means disorder. In key generation, it means disordered, random data that is virtually impossible to regenerate. If your key is not generated with enough entropy, it will be more vulnerable to attack. When the system gains enough entropy, it will generate your key with the options you selected.

It may take some time for your system to generate entropy. To assist the random number generator with this process, open other programs or move the mouse around. GPG will also prompt you to enter a passphrase for your key.
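If you are curious how much entropy is on hand, Linux exposes the kernel's current estimate through procfs. A quick Linux-specific sketch (exact values vary by kernel version):

```shell
# Print the kernel's current entropy estimate, in bits.
# Older kernels report a pool of up to 4096 bits; very recent kernels
# report a fixed 256 once the pool is initialized.
cat /proc/sys/kernel/random/entropy_avail
```

A low value during key generation is exactly why GPG asks you to type and move the mouse while it works.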

GPG Key Management: Set your key passphrase in gpg2

When you want to use your private key, for example to sign or decrypt, you will be prompted for your passphrase, so make sure you remember it! This makes the passphrase very important. Someone with your passphrase has the ability to assume your identity or decrypt your data. So make sure you use a strong passphrase. (Search Google for other ideas on making strong passphrases.) Do not use the same password used to log in to your account or the root account, or a password you use on any other system.
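As an aside, the same key creation can be scripted with GnuPG's batch mode. This is a minimal sketch, assuming GnuPG 2.1 or later; it uses a throwaway keyring so it does not touch your real ~/.gnupg, and the empty passphrase is for demonstration only (always protect a real key with a strong passphrase):

```shell
# Use a temporary keyring so this demo does not touch ~/.gnupg.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Create a 4096-bit RSA key valid for two years, non-interactively.
# --pinentry-mode loopback lets --passphrase work in batch mode.
gpg2 --batch --pinentry-mode loopback --passphrase '' \
     --quick-gen-key "charles profitt <cprofitt@mail.com>" rsa4096 default 2y

# GnuPG 2.1 and later also writes a revocation certificate automatically:
ls "$GNUPGHOME/openpgp-revocs.d"
```

This is handy for test setups and provisioning scripts; for a personal key, the interactive --full-gen-key walkthrough above remains the better choice.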

Sending a key to a key server

Now that you have a key pair, you need to send the public key to a keyserver so other people can find and use it. Others use your public key to verify messages you sign, and to encrypt messages that only your private key can decrypt. As we discussed in the first part of the series, keyservers on the internet collect and advertise public keys to make data exchange easier. They help you share a public key as widely as possible.

Remember that your public key is just one half of the key pair used in the asymmetric process of encrypting a message or file. A message encrypted with the public key can be decrypted only by the private key, and vice versa. This process will be discussed in greater detail later in the series.


GPG Key Management: Sync public key to keyserver with Seahorse

Open Seahorse and navigate to your PGP keys. Then select Remote > Sync and publish keys…

GPG Key Management: Sync your public key with keyservers

The Sync button will be grayed out until you select a keyserver. To do this, click the Key Servers button.

GPG Key Management: Select a keyserver in Seahorse

The two default servers are both acceptable. In addition, the Fedora Project also provides a keyserver. To add this keyserver, click the Add button and enter the address keys.fedoraproject.org. Then select a server for publishing your public key in the drop-down menu below the list of keys. Finally, close the dialog box, and the Sync button will be enabled.

Command line

To send your key to a keyserver, you must know your key identifier. To find your key identifier, use this command:

$ gpg2 --list-keys
pub   rsa4096/5D50C86C 2016-01-24 [expires: 2016-01-26]
uid         [ultimate] charles profitt <cprofitt@mail.com>
sub   rsa4096/8D8FF075 2016-01-24 [expires: 2016-01-26]

The key you want to send is your public key. This is the key on the first line starting with pub.

$ gpg2 --keyserver keys.fedoraproject.org --send-key 5D50C86C
gpg: sending key 5D50C86C to hkp server keys.fedoraproject.org

The key is now sent to the selected keyserver. With your public key available to others, you are ready to start using GPG to keep your communications authentic and secure.

Revocation certificate

When you created your key, GPG also created a revocation certificate. When you use the command line, this is more obvious, but Seahorse also creates a revocation certificate. In both cases, the certificate is located in ~/.gnupg/openpgp-revocs.d/.

A revocation certificate allows you to tell the world your key pair is no longer valid. You would use this certificate if your private key is stolen, compromised, or lost. You should also use the revocation certificate if you forget your passphrase and can no longer use your key pair. You should store your revocation certificate and private key in the most secure storage possible. You should also store your revocation certificate in a different location than the private key.

Many users put their private key and revocation certificates on two flash drives and store them in a lock box or safe. If your revocation certificate is not secure, a malicious actor could revoke your key pair, and cause a major disruption for you as well as those who use your public key.

FOSDEM - day 3

Today is my last day in Brussels, and since my flight back is pretty late in the afternoon, I took the occasion to see a little of Brussels.

View of Brussels

Fun fact is that after 4 days in Brussels for a FLOSS conference, I’ll spend my evening at my local Linux User Group :D.

On Boot Times

Why does it take as long to boot Fedora 23 in 2016 as it did to boot Windows 95 in 1995?

I knew we were slow, but I did not realize how slow:

$ systemd-analyze
Startup finished in 9.002s (firmware) + 5.586s (loader) + 781ms (kernel) + 24.845s (initrd) + 1min 16.803s (userspace) = 1min 57.019s

Two minutes. (Edit: The 25 seconds in initrd is mostly time spent waiting for me to enter my LUKS password. Still, 1.5 minutes.)

$ systemd-analyze blame
32.247s plymouth-quit-wait.service
22.837s systemd-cryptsetup@luks\x2df1993bc3\x2da397\x2d4b38\x2d9bef\x2d
18.058s systemd-journald.service
16.804s firewalld.service
9.314s systemd-udev-settle.service
8.905s libvirtd.service
7.890s dev-mapper-fedora_victory\x2d\x2droad\x2droot.device
5.712s abrtd.service
5.381s accounts-daemon.service
2.982s packagekit.service
2.871s lvm2-monitor.service
2.646s systemd-tmpfiles-setup-dev.service
2.589s systemd-journal-flush.service
2.370s dmraid-activation.service
2.230s proc-fs-nfsd.mount
2.024s systemd-udevd.service
2.000s lm_sensors.service
1.932s polkit.service
1.931s systemd-fsck@dev-disk-by\x2duuid-30901da9\x2dab7e\x2d41fc\x2d9b
1.852s systemd-fsck@dev-mapper-fedora_victory\x2d\x2droad\x2dhome.serv
1.795s iio-sensor-proxy.service
1.786s gssproxy.service
1.759s gdm.service


This review of Fedora 23 shows how severely our boot speed has regressed (spoiler: 56.5% slower than Fedora 21, 49% slower than Ubuntu 15.10). The review also shows that Fedora 23 takes twice as long to power off as Fedora 22.
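The blame output above only shows how long each unit took to start, not which units actually gated the boot. On a systemd-booted machine, two more systemd-analyze verbs dig into that (a sketch; run on the machine being diagnosed):

```shell
# Print the chain of units that delayed reaching the default target.
# "@" marks when a unit became active, "+" how long it took to start.
systemd-analyze critical-chain

# Render the full boot timeline to an SVG for closer inspection.
systemd-analyze plot > bootchart.svg
```

In cases like the plymouth-quit-wait.service entry above, critical-chain makes it clear whether the unit is genuinely slow or merely waiting on something else.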

I think we can do better.

Fighting Passive Surveillance should be our top priority



We definitely live in a different world since the Snowden leaks, but for some people nothing has changed. We always knew that certain individuals are targeted by local or international law enforcement agencies; in some cases they even have a legal way of doing this. If you work in certain fields or operate as an activist on political issues, you always assumed or knew that your communications were monitored. We may have better knowledge of the way they do it, or which things they have broken and which they haven't yet. But essentially nothing about this is new in the post-Snowden world.

What the Snowden leaks actually changed, what we learned from the documents, is that there is a vast ongoing process of massive passive surveillance and data collection. It doesn't matter if you are considered important. It doesn't matter if you have something to hide or not. All of your communications are monitored, stored, and analyzed. This is what changed. This is what we learned.

Let me pause my thoughts for a moment and share a controversial story...

Mobile email encryption

Would you store your private PGP key on your mobile smartphone? Many (most?) hackers/geeks would easily answer in the negative. Mobile phones have two major security implications that our laptops (usually) don't.

  • Physical security. It's easier to lose your phone, or for someone to steal it. It's a comparatively smaller device, usually carried in your pocket. And once you lose it, all keys stored there should be considered compromised (which is a big problem on its own, since PGP doesn't offer Forward Secrecy).

  • More than one operating system. Even if you have taken all measures to secure your operating system, the problem is that your phone also runs a second operating system: the "radio" OS running on your baseband chip. It's a completely proprietary black box, and you don't know what it does. You don't even know if it's isolated from your "smart" operating system.

On a side note, mobile operating systems have a security advantage that almost all modern desktop operating systems (even most major Linux distributions) lack: all applications are sandboxed. So even if you are running a malicious application (you know, like Angry Birds), it may do various unwanted things with your personal mobile usage (e.g. track your location), but it can't easily steal a PGP private key stored inside OpenKeychain's isolated storage. Not many desktop operating systems can protect you from a malicious application getting access to your .gnupg or .ssh folder.

So, although these two points are completely valid and mobile smartphones are indeed less secure, we have to realize that this is where most users read their email. In many cases, a mobile phone is the only device on which people read their email. Many people have come to cryptoparties and, after getting in touch with the complete lack of usability that comes with the standard PGP GUI stack (Thunderbird + Enigmail), they ask how they can do the same things on their mobile. Most hackers would react badly (or even refuse to help) exactly because of the reasons mentioned above. Let me clear up the dilemma a bit: most people have two options. Either use email encryption on their mobile phone, or don't use encryption at all. And unfortunately most hackers fail to see that for most people the threat model is passive surveillance.

Threat Model

Not all people are trying to protect from the same things or the same type of adversaries. Not all people have the same Threat Model.

I was very pleased to see Werner Koch present at 32C3 this year on the current status of GnuPG, where he mentioned that the focus from now on is the passive surveillance threat model. Building tools that focus on the passive surveillance threat model means that usability and encryption by default are top priorities.

I have participated in and co-organized many cryptoparties, Free Software meetups, and related crypto/privacy events and workshops. And I believe that the passive surveillance threat model should also be our focus. Yes, sometimes we need to quickly determine if a person has a different threat model (e.g. journalists), but most people participating in these kinds of events are not targets (at least not NSA targets). We know that they collect everything; we know that they love PGP because it's rarely used and stands out. Let's make their job more difficult. Encrypt all the things by default. Let's start by fighting massive passive surveillance.

Comments and reactions on Diaspora or Twitter

January 31, 2016

Does the market care about security?
I had some discussions this week about security and the market. When I say "the market," I mean what sort of products people will or won't buy based on requirements centered around security. This usually ends up as a discussion about regulation. That got me wondering: are there any industries that are unregulated, have high safety requirements, and aren't completely unsafe?

After a little research, it seems SCUBA is the industry I was looking for. If you read the linked article (which you should; it's great), the SCUBA story holds an important lesson for the security industry. Our industry moves fast, too fast to regulate. Regulation would either hurt innovation or be useless due to too much change; either way it would be very expensive. SCUBA is a field where the lack of regulation has allowed for dramatic innovation over the past 50 years. The article compares it to the personal aircraft industry, which has substantial regulation and very little innovation (though the experimental aircraft industry is innovating due to lax regulation).

I don't think all regulation is bad, it certainly has its place, but in a fast moving industry it can bring innovation to a halt. And in the context of security, what could you even regulate that would actually matter? Given the knowledge gaps we have today any regulation would just end up being a box ticking exercise.

Market forces are what have kept SCUBA safe: divers and dive shops won't use or stock bad gear. Security today has no such bar; there are lots of products that would fall under the "unsafe" category that are stocked and sold by many. Can this market-driven approach work for our security industry?

It's of course not that simple for security. Security isn't exactly an industry in itself. There are security products, and then there are other products. If you're writing a web app, security probably takes a back seat to features. Buyers don't usually ask about security; they ask about features. People buying SCUBA gear don't ask about safety; they just assume it's OK. When you run computer software today, you either know it's insecure, or you're oblivious to what's going on. There's not really a happy middle.

Even if we had an industry body everyone joined, it wouldn't make a huge difference today. There is no software that exists without security problems. It's a wide spectrum of course, there are examples that are terrible and examples that do everything right. Today both groups are rewarded equally because security isn't taken into account in many instances. Even if you do everything right, you will still have security flaws in your software.

Getting the market to drive security is going to be tricky, security isn't a product, it's part of everything. I don't think it's impossible, just really hard. SCUBA has the advantage of a known and expected use case. Imagine if that gear was expected to work underwater, in space, in a fire, in the arctic, and you have to be able to eat pizza while wearing it? Nobody would even try to build something like that. The flexibility of software is also its curse.

In the early days of SCUBA there were a lot of accidents; by moving faster than the regulators could, the industry not only made the sport extremely safe, but probably saved what we know as SCUBA today. If it was heavily regulated, I suspect much of the technology wouldn't look all that different from what was used 30+ years ago. Software regulation would probably keep things looking a lot like they do today, just with a lot of voodoo to tick boxes.

Our great challenge is how do we apply this lesson from SCUBA to security? Is there a way we can start creating real positive change that can be market driven innovation and avoid the regulation quagmire?

Join the conversation, hit me up on twitter, I'm @joshbressers
Scratch group projects – 2016


As I mentioned last year, programming is on the syllabus for our grade 10 students, and they have just finished this year’s group projects in Scratch. We’re moving on to Python, but I’ve posted their projects at http://scratch.lesbg.com Feel free to play them and rate them. This is a first attempt for the students, so please be gentle with the ratings.

If you want to check out previous years’ projects, they’re also available at the links at the top left. If you have any comments or suggestions for the site itself, please leave them below.

Second Fedora Pune meetup in January

On the evening of 22nd January we had the second Fedora meetup in Pune, with 12 participants. We started with a discussion of what happens when someone compiles a program written in C. Siddhesh opened by asking various questions about what we thought happens, then went into the details of each step through the compiler and assembler. We discussed ELF files and went through the various sections. After looking at the __init, __fini, and __main functions, one of the participants said “my whole life was a lie!” :D No one had thought about constructors and destructors in a C program before.

We also discussed for loops in C and in Python, and list comprehensions. Another point that was new to me was that on x86_64 (per the System V ABI), six registers are used to pass integer arguments to each function call. At the end, the participants decided to find bugs or missing features in projects they use regularly (we also suggested GNOME apps); everyone will first try to fix them at home, and if they cannot fix them by the next meeting, we will help them during the meeting.

FOSDEM 2016 Talk: Live Migration of Virtual Machines From The Bottom Up

I just did a talk titled ‘Live Migration of Virtual Machines From The Bottom Up‘ at the FOSDEM conference in Brussels, Belgium.  The slides are available at this location.

The talk introduced the KVM stack (Linux, KVM, QEMU, libvirt) and live migration; showed how the higher layers (especially oVirt and OpenStack) use KVM and migration; and covered the challenges the KVM team faces in supporting varying use-cases while adding new features to make migration work, and work faster.

There was a video recording, I will post the link to it in a separate post.