August 03, 2015


Since I got here, I've been doing my best to create a mental map of all the parts of Fedora. From what I've gathered thus far, there are 13 subprojects (See: wiki sidebar), along with a number of web properties, and a slew of upstream communities that Fedorans are tapped into. But even after getting the broadest sense of how many moving parts there are, that still doesn't explain HOW, only who. I've said to myself "Gee whiz, if only there were a list of all the things that needed to be done to ship a release..." Today, thanks to jzb, I have found the HOLY GRAIL of "how a Fedora becomes a release" and I'm here to share it with you too!

Remi repository is changing

The "remi" repository has existed for more than 10 years. It has changed a lot, and some recent changes are worth explaining.


When I opened this repository, my first goal was to share the small backport work I was doing for my job at the time (a university), mostly for old Fedora versions.

Then I added various repositories for newer Fedora versions, RHEL and CentOS (and some other clones). I also increased the number of packages available, covering the LAMP stack (Apache, PHP, MySQL) and some applications.

Today: I choose to concentrate mostly on the PHP stack, making recent PHP versions, most of the existing extensions, and their dependencies available to RPM distribution users.


Initially the repository was providing backported packages from Fedora, but I have also been a Fedora contributor / packager for years now.

Today: the repository is the upstream for the packages I maintain in Fedora (including PHP and a lot of extensions and libraries).

So the changes happen first in remi, then in Fedora, then in the other third-party repositories which provide backports (e.g. PHP 7.0).

PHP update Policy

At the beginning, the repository was only providing the latest PHP version in the single "remi" repository, so a major update was sometimes required (e.g. 5.2 to 5.3).

Today: since Fedora 19 and EL-7, the PHP version in the "remi" repository is the same major version as in the base distribution (e.g. 5.6 in Fedora 21, 5.4 in EL-7), and will never change. Only minor updates will be pushed (e.g. 5.4.43 in remi vs. 5.4.16 in EL-7). More recent versions are in separate repositories (e.g. remi-php56 or remi-php70).
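As a sketch of how that looks in practice (this assumes the remi-release package is already installed, and uses yum-config-manager from the yum-utils package):

```shell
# Enable the PHP 5.6 replacement repository
yum-config-manager --enable remi-php56

# The next update moves PHP and its extensions to the 5.6.x series
yum update
```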

Other packages update Policy

Some people criticize the replacement of various packages when the "remi" repository is enabled, which I understand; it can sometimes be complex for newcomers.


The "remi-safe" repository is really safe: it doesn't replace any package from the base distribution. Only this repository is enabled by default.

Each "remi-php*" repository is disabled by default, and when enabled it only replaces PHP (with its extensions) and nothing else. So updating PHP is an admin choice, and since only PHP is updated, it can be considered safe.

The main "remi" repository is not enabled by default. It can be used to install some other packages, and it can be filtered using the includepkgs / exclude yum options. It is not designed to be safe, so it should be used with caution (i.e. check the yum transaction summary before accepting it).
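As an illustration of that filtering (repo file excerpt; the package list is only an example), the repo definition in /etc/yum.repos.d/remi.repo can carry an includepkgs line so that only the named packages are ever considered from this repository:

```ini
[remi]
name=Remi's RPM repository
enabled=1
# Only these packages may be installed or updated from this repository:
includepkgs=redis* libmemcached*
```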

More information

Please read the FAQ, and stay aware of changes by reading this blog.

Also remember there is a forum and an IRC channel.

Secure Boot — Fedora, RHEL, and Shim Upstream Maintenance: Government Involvement or Lack Thereof

You probably remember when I said some things about Secure Boot in June of 2014. I said there’d be more along those lines, and there is.

So there’s another statement about that here.

I’m going to try to remember to post a message like this once per month or so. If I miss one, keep an eye out, but maybe don’t get terribly suspicious unless I miss several in a row.

Note that there are parts of this chain I’m not a part of, and obviously linux distributions I’m not involved in that support Secure Boot. I encourage other maintainers to offer similar statements for their respective involvement.

Live migration failures with KVM and libvirt

I decided to change some of my infrastructure back to KVM again, and the overall experience has been quite good in Fedora 22. Using libvirt with KVM is a breeze and the virt-manager tools make it even easier. However, I ran into some problems while trying to migrate virtual machines from one server to another.

The error

# virsh migrate --live --copy-storage-all bastion qemu+ssh://root@
error: internal error: unable to execute QEMU command 'drive-mirror': Failed to connect socket: Connection timed out

That error message wasn’t terribly helpful. I started running through my usual list of checks:

  • Can the hypervisors talk to each other? Yes, iptables is disabled.
  • Are ssh keys configured? Yes, verified.
  • What about ssh host keys being accepted on each side? Both sides can ssh without interaction.
  • SELinux? No AVCs logged.
  • Libvirt logs? Nothing relevant in libvirt’s qemu logs.
  • Filesystem permissions for libvirt’s directories? Identical on both sides.
  • Libvirt daemon running on both sides? Yes.

I was pretty confused at this point. A quick Google search didn’t reveal too many relevant issues, but I did find a Red Hat Bug from 2013 that affected RHEL 7. The issue in the bug was that libvirt wasn’t using the right ports to talk between servers and those packets were being dropped by iptables. My iptables rules were empty.

Debug time

I ran the same command with LIBVIRT_DEBUG=1 at the front:

# LIBVIRT_DEBUG=1 virsh migrate --live --copy-storage-all bastion qemu+ssh://root@ >debug.log 2>&1

After scouring the pages and pages of output, I couldn’t find anything useful.


I spotted an error message briefly in virt-manager or the debug logs that jogged my brain to think about a potential problem: hostnames. Both hosts had a fairly bare /etc/hosts file without IP/hostname pairs for each hypervisor. After editing both servers' /etc/hosts files to include the short and full hostnames for each hypervisor, I tested the live migration one more time.
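For reference, the entries that were missing looked something like this (hostnames and addresses made up for illustration):

```
# /etc/hosts on both hypervisors     hyp01 hyp01.example.com     hyp02 hyp02.example.com
```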


The migration went off without a hitch, both in virt-manager and via the virsh client. I migrated several VMs, including the one running this site, with no noticeable interruption.


CUDA 7.0 enabled programs for Fedora 22

I've updated the CUDA version in the Fedora 22 Nvidia repository; it now contains CUDA 7.0.28 along with the cuFFT 7.0.35 patch. Note that from this version, CUDA is x86_64 only, so there are no more i386 packages. There is still the cudart library available for 32 bit, but I don't think it's worth packaging.

The packages hosted here should correctly upgrade and obsolete the ones in Nvidia’s own repository, so it should be possible to go straight from one version to the other, if you need.

The static libraries (according to packaging guidelines) have been placed in a cuda-static package, thus reducing by an order of magnitude the size of the packages containing libraries. The toolkit can of course be installed and used to create CUDA binaries on systems where there is no Nvidia adapter installed.

The Nvidia compiler (nvcc) throws an error if the GCC version detected is higher than 4.9 (the Fedora 22 default is 5.1.1), but removing the check makes the compiler run fine until you enable C++11 support. If you need to enable C++11 support, you need to use a separate GCC, older than 5.x, for compilation. See this comment here for details.
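If you keep an older GCC around for that, nvcc can be pointed at it explicitly; a sketch (the compiler path is an assumption, adjust to wherever your older GCC lives):

```shell
# Use an older host compiler instead of the system GCC 5.x
nvcc -ccbin /opt/gcc-4.9/bin/g++ -std=c++11 -o saxpy saxpy.cu
```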

As part of the update, the repository that contains CUDA enabled programs (Blender, CCMiner and NVENC enabled FFMpeg) has been updated for Fedora 22. This is completely optional, so you can have Nvidia packages on your system and still use RPMFusion’s FFMpeg and Fedora’s Blender. I’ve tried to submit Blender updates to the appropriate package maintainer but did not receive any answer.

I will rebase all distributions to CUDA 7.0 as soon as the next long-lived driver release is branched by Nvidia.

As always, feedback is welcome. If you have any issue, or would like to request a CUDA-enabled package to be added to the repository, just write in the comments or send me an email.

August 02, 2015

Activities from Mon, 27 Jul 2015 to Sun, 02 Aug 2015


Activities Amount Diff to previous week
Badges awarded 1303 +131.03%
Builds 20445 +24.67%
Copr build completed 4000 +16.11%
Copr build started 3998 +15.25%
Edit on the wiki 391 -30.43%
FAS user created 112 +8.74%
Meeting completed 29 +31.82%
Meeting started 29 +31.82%
New packages 158 +222.45%
Posts on the planet 75 +36.36%
Retired packages 0 NA
Updates to stable 295 -19.40%
Updates to testing 686 +59.91%

Top contributors of the week

Activities Contributors
Badges awarded aviso (10), mhoeher (9), ageha (7)
Builds karsten (6397), pbrobinson (5984), sharkcz (2219)
Copr build completed avsej (708), griever (318), leinfeva (202)
Copr build started avsej (708), griever (318), leinfeva (202)
Edit on the wiki tibbs (46), jromanes (29), ref (29)
Meeting completed dgilmore (6), kushal (5), nirik (5)
Meeting started darci (2), nirik (2), roshi (2)
New packages  
Posts on the planet leinfeva (10), lovenemesis (10), admin (9)
Retired packages  
Updates to stable siwinski (19), remi (16), zvetlik (14)
Updates to testing jchaloup (183), remi (32), fpokorny (22)
My eXperience with Fedora community at Fudcon Pune 2015

In July 2013, when Joerg Simon and Fabian Affolter introduced Fedora localization and the OSSTMM to the Central Tibetan Administration, I saw Joerg using Fedora with KDE; he called it his own custom Fedora and had a lot of tweaking in it :D. I tried Fedora, but then I went back to using Ubuntu, since all my colleagues were using it and I thought troubleshooting would be much easier. Joerg's interest in Tibetan thangka gave us a good time hunting for the best thangka painting in McLeod Ganj. Finally we got "Manjushri" - the Bodhisattva of Wisdom. We became good friends, and I met him again during Nullcon 2015, during which he introduced me to Praveen Kumar, who told me about FUDCon Pune 2015. Joerg guided me throughout the FUDCon procedure, although I was intimidated at submitting a paper to FUDCon. I also met Harish Pillay again; he is a cheerful and warm-hearted person who also knows about the Tibetan community and wants to help.
And so my journey began for FUDCon Pune 2015. I thought FUDCon would be like any other conference where I would present my talk and meet new people. Little did I know that the F in Fedora also stands for Family :D

Day 1 Midnight
Door bell rings ... Ding Dong
I woke up to see who was at the door; it was Aditya Patawari, and apparently he was confused that I was his room mate. I guess he was expecting someone else. He went off with the bell boy looking
confused ...
The bell rang again and I saw him; this time he was sure I was his room mate. We exchanged quick greetings of "hi hello" followed by our names and went to sleep.

Day 1.
The first person I met in the morning was Harish, whom I had met at Nullcon Goa, and it felt great to see him again. Kushal started distributing the Fedora merchandise and then we headed to the venue, MIT.
Since this was my first time in the Fedora community, I didn't know anybody and was excited about meeting new people.
I met danishka, sirko, yogi and brendon, who gave me a pair of Fedora stickers for my blank ThinkPad.
It was really intimidating at first; although excited, I was the only one from my community and I thought nobody knew about Tibetans.
I met swapnil, who had been at hillhacks a few weeks back in McLeod Ganj, and quickly our conversation grew, making me feel less intimidated by FUDCon. I had a good talk with Harish about how we had been since the last time we met. For some reason I was feeling dizzy after having lunch and it seemed like I was having a migraine.
I had to go back to the hotel and get some rest. Later that night Aditya Patawari invited me to the mini FUDPub, but I couldn't go as I was not feeling well.

Day 2.

My session being at the end of the day, I thought of attending various other talks. Kernel and userspace tracing with LTTng by Suchakra was a very interesting session, and I couldn't resist asking him for the Linux performance observability tools chart; I have it posted on the wall in my office :D

Pravin's talk on the nuts and bolts of Fedora internationalization and globalization was the talk I was looking for. I wanted to have the whole Fedora OS in the Tibetan language, and his talk gave me a number of new ideas to implement. After lunch I just sat in the auditorium, brushing up my presentation and enjoying the talks going on.

Finally my turn came, and I began by asking how many in the audience knew about Tibetans in India. There were quite a few who knew about the community and had actually been to the places. I was very happy when Kiran said he had been to Dharamsala, visited hillhacks and knew about my community.

After my talk, I got a very good response from the audience, who are willing to help in localizing the Tibetan language in Fedora. I was overwhelmed by the feedback I received.

From the FUDCon venue to bluO in Phoenix MarketCity, where the actual FUDPub was happening, I shared a two-hour bus ride with Kushal Das.
To be honest, he looked intimidating and serious to me at first. But those two hours of conversation with him changed everything. Kushal is a Tibetan food lover, spicy thukpa to be specific. He told me he wanted to visit my hometown, and I was happy to have him as my guest at home. Kushal's interest in collecting Buddha statues brought me even closer to him as a friend.

At FUDPub I met Truong; my first impression was OMG!!! There is a Chinese person in the audience. Even though I know that FOSS and Fedora are neutral ground to bring us all together, I was a bit hesitant to talk to him, and my first question, before even introducing myself, was "are you Chinese ...?" To my surprise he said, NO, I am
Vietnamese! We both laughed when he said that, and we had a great time together. Izhar joined us and all of us had great fun at the FUDPub. The food was good and the people were cheerful. I had a great time bowling with Jens Petersen; we both had a good number of strikes, but he won :D

Photo Courtesy: Kushal Das

Day 3.

I couldn't attend the key signing party; for some reason the morning weather seemed to trigger my dizziness and headache. I quickly exchanged my keys with friends I had met during the FUDCon.
The closing note from Rupali was very good, and it showed how everyone in the community worked hard to make it a success; I was happy to be part of it. To me it felt like a large family, always ready to accept new members, and I felt great being a part of it.
A group of us went shopping, and on our way to the shopping place I had a good chat with danishka and yogi. We went to the Tibetan market in Pune, called Fashion Street, where I pretended to be someone from Indonesia. We met an old lady who, after finding out I was an impostor, asked me why I came all the way from north India to Pune.
She was very happy when I told her about the purpose of my visit and how everyone in the group is a supporter of the Tibetan people and their cause.

At dinner, Kushal's rant on the tom yum soup was the funniest moment of the night. I headed to Kiran's room, where they were editing the noise out of the videos of the talks.
Kushal joined us and started cracking jokes about this mysterious guy named ramkey. We had such a good time in Kiran's room that night, followed by me finding out that Ratnadweep and Sayan Chowdhary are anime lovers. This day couldn't have been better. We exchanged our contacts and had our own rants on the spoilers during the anime episodes.

Day 4.

I had lunch at Kushal's place, and finding that his wife was also interested in Tibetan food and culture, I kept thinking they should have a Tibetan name :D
I missed my chance to visit the Red Hat office in Pune since I slept in that morning, but all in all it was a conference to remember, and I look forward to attending next year :D
As I was leaving the hotel, I told Kushal it would be nice to have the F in Fedora stand for family. He responded saying the F does stand for family, and Fedora in fact was founded by a group of friends.

As I promised in my session, I am proudly using Fedora for my personal use as well as at work. I even got my dad to use Fedora. :D

Akademy 2015: Photo report


This year I had a long trip to A Coruña to participate in the Akademy conference. It was an amazing opportunity to meet and be among people who share the same vision and goal: to create and share free software with the world.


It was also the first time I travelled such a long distance by bus (from Amsterdam to A Coruña), which took about 1.5 days each way, and I hope it's my first and last experience with that :)

Unfortunately, this report will be quite short, because after coming back to Amsterdam I got ill after a nice party on the boat during the Amsterdam Gay Pride, which was the immediate continuation of the trip to Spain :D


Here you can find my photo report from the Akademy 2015 conference. If you would like to be anonymized, please notify me so I can remove you from the album. All photos are distributed under the CC-BY-NC-SA license.

Installing Fedora 22 on UEFI

Hello, this is my first "formal" post about Fedora topics.

I will explain how I performed the installation of Fedora Workstation 22 on a laptop with a pre-installed OS and UEFI enabled.

August 01, 2015

Blocking ad networks with named

I’ve meant to do this for ages, so on the first day of my “staycation”, despite vowing to myself that I wouldn’t look at a computer screen this week (hey, it’s not technically the start of my week off yet, is it?), I fiddled this morning with BIND to try to avoid seeing ads on my devices. While AdBlock works great in my browsers, that doesn’t transfer well to mobile devices, apps with built-in advertising, etc.

Unless you’re running your own BIND DNS server at home, you won’t be able to do this. Even if you have a home network with named running (my local network does), unless you restrict all outbound DNS and allow DNS lookups only from your named server (which I do; it forces all of the machines on the network to use my DNS server, which is configured to only ask OpenDNS for DNS info), this also won’t really work for you (at least not in the way that I’ve done it).

So this assumes some knowledge of BIND and networking. This is not so much a tutorial on how to configure BIND as it is some quick tips and shared info on what I did this morning.

First you need to setup a master zone. Mine looks like this:

zone "" {
        type master;
        file "master/";
};

NOTE: You may also need the following in your options section, but I’m not 100% sure as it was there before:

    response-policy {
        zone "";
    };

This makes anything defined in this zone be considered authoritative, just like the DNS settings I have for my local network. As an aside, you can use this to block entire domains (like youtube or facebook if you have kids at home staring at screens all day…).
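For example, blocking a whole domain and everything under it takes just two records in that zone (domain chosen purely for illustration; a CNAME to the root returns NXDOMAIN in a response-policy zone):

```
facebook.com      IN    CNAME    .
*.facebook.com    IN    CNAME    .
```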

I then wrote a script which pulls data from MVPS Hosts. Their data is meant to be put into a hosts file, but that would only work on a single machine, and I’m trying to solve a multi-machine/mobile issue, not just a single computer. The script takes my zone file, mashes in the data from MVPS Hosts, and creates a new file that we will use:


# Zone file being updated; the path is only an example, adjust to your layout
source=/var/named/master/db.adblock

input=$(mktemp /tmp/mvps.hosts.XXXXXX)
output=$(mktemp /tmp/zonefile.XXXXXX)
serial=$(grep serial ${source} | awk '{print $1}')
n_serial="$(date +%Y%m%d)01"

# Download the MVPS Hosts file
curl -s >${input}

# The file ships with DOS line endings
dos2unix -o ${input} >/dev/null 2>&1

# Sanity check: bail out if the download looks truncated
lines=$(wc -l ${input} | awk '{print $1}')
if [ ${lines} -lt 10000 ]; then
    exit 1
fi

# Copy everything in the current zone file up to the ;START ADHOSTS marker
while IFS= read -r line; do
    if [ "${line}" == ";START ADHOSTS" ]; then
        break
    fi
    echo "${line}" >>${output}
done <${source}

echo "" >>${output}
echo ";START ADHOSTS" >>${output}
for hostname in $(cat ${input} | egrep -v '^#' | awk '{print $2}'); do
    if [ "${hostname}" != "localhost" ]; then
        echo "${hostname}    IN    CNAME    ." >>${output}
    fi
done
echo ";END ADHOSTS" >>${output}

# Bump the zone serial so named picks up the change
perl -pi -e "s/${serial}/${n_serial}/g" ${output}

rm -f ${input}
cp -f ${output} ${source}
rm -f ${output}

Note that you need dos2unix installed. Everything else is fairly standard. The MVPS Hosts file seems to be updated monthly, so this is something you could add to a monthly cronjob or just run manually every once in a while. So far it seems to work pretty well over here. I had initially thought about writing something in Python, but bash is just so much faster (for me).

Also, if you put things in your zone file before the “;START ADHOSTS” line they’ll be retained, so if you do want to block specific domains (you may want to block and if you don’t want to see iOS iAd ads) you still can, and take advantage of the MVPS Hosts list (if someone has a better list, I would love to see it).

I hope this helps someone else out. Comments for improvement are welcome, this was a pretty quick-and-dirty script that, I’ll admit, does a few things oddly.

Polished to a Resilience

Long time since the last post, it's been quite an interesting year! We all move forward as do the efforts on various fronts. Just a quick update today on two projects previously discussed as things ramp up after some downtime (look for more updates going into the fall / winter).

Polisher has received a lot of work in the domains of refactoring and new tooling. The codebase is more modular and robust, test coverage has been greatly expanded, and as far as the new utilities:

  • gem_mapper.rb: Lists all gem / gemfile dependencies & the versions available downstream
  • missing_deps.rb: Highlights dependencies missing downstream as well as any alternate versions available
  • gems2update.rb: Cross references missing dependencies downstream w/ updates available upstream and recommends specific versions to update to. This facilitates a consistent update across dependencies which may impose different requirements on the same gems. If a unified update strategy cannot be deduced, gems2update will highlight the conflicts.

These can be seen in action via the screencasts referenced above.

Resilience, our experimental ReFS parser, has also been polished. Various utilities previously written have been renamed, refactored, and expanded, and new tooling has been written to continue the analysis. Of particular note are:

  • fcomp.rb - A file metadata comparison tool that runs a binary diff on file metadata in the fs
  • axe.rb - The attribute extractor; pulls file-specific metadata out of the ReFS filesystem and dumps it into a local file. Further analysis will be based (in part) on this metadata
  • rarser.rb - The complete filesystem parser / file extractor; pulls files and directories off the image and dumps them into local files

Also worthy to note are other ongoing efforts including updating ruby to 2.3 in rawhide and updating rails to 4.x in EPEL.

Finally, the SIG has been going through (another) transitional period. While membership is still growing, there are many logistical matters currently up in the air that need to be resolved. Look for updates on that front, as well as many others, in the near future.



Fedora Planet

I’ve been having problems with the Fedora planet recently, just a test post to see if it appears.

Nothing to see, move along.

Help Improve Kernel Quality in Fedora
Do you want to help the Fedora Project and add a new badge to your collection along the way? Well, this is the...
Help Improve Kernel Quality in Fedora
Do you want to collaborate with the Fedora Project and add a new badge to your collection? Well, this is the...
Kdbus, F23 update, FUDCon Pune report, and… is Fedora slowing down?

Get on the kdbus!

If you’ve poked around the running processes on a modern Linux system, you’ve seen something called dbus. You can read more about this on the dbus website, but the basic summary is that it’s a system for passing messages between programs. Mostly, it runs in the background and users don’t ever think about it — but it’s an important part of the “plumbing”, enabling integrated desktop environment notifications for both GNOME and KDE, role selection for Fedora Server, and more.

For the past decade, this has run as a user-space daemon — a system service that runs in the background, but outside of the Linux kernel. Fedora is experimenting with a new implementation, called kdbus, which — as the “K” might imply — is actually integrated into the kernel. This will allow it to be available at early boot (before other system services are running), may also allow for better performance, and, because it’s connected to the kernel, better security features.

Some developers have been running this themselves for a while now, and now we’re asking for broader testing, at least among those of you brave enough to run our always-moving development branch, Fedora Rawhide. (I do, on one of my main machines — in practice, it’s not that scary, as long as you’re willing to help debug things in the occasional cases where they go a bit… sideways.) For this, all you need to do is boot your Rawhide system with kdbus=1 enforcing=0, and the kernel and systemd will automatically detect this and use kdbus instead of the traditional daemon.

(And, yeah – that’s disabling SELinux for that boot. Part of the current development is in writing the updated security policy.)

For more, see this devel list mail from Lennart Poettering.

A little warning on Rawhide this week…

As I just noted, Rawhide is usually pretty safe, but sometimes it is a little bit… well… raw. At this week’s Release Engineering meeting Dennis Gilmore notes that this week might be a little rough, with some infrastructure changes and a new version of RPM – which is in itself pretty interesting, as it supports the concept of “file triggers”, which will (eventually!) allow us to make package installation faster and more reliable.

In any case, if you’re a Rawhide user, keep an eye on the devel mailing list, and it’s a good idea to follow Kevin Fenzi’s “This week in rawhide” blog.

Fedora 23 construction continues

With a new Fedora release every six months, it doesn’t take long from the announcement of a new version until we’re in the thick of making the next one. Fedora 22 was released this May, and in just a couple of weeks, we’ll be releasing the Fedora 23 Alpha — currently scheduled for August 11th.

As always, please do remember that the schedule shows a target date, but the Fedora release process aims to strike a balance between strict calendar-based releases and the “release when it’s ready” approach. That means we do sometimes slip these dates by a week or two, and if that happens, don’t be too alarmed.

Anyway, of particular note this week: the Alpha freeze, which means that any changes intended to go into the F23 Alpha release need to go through the freeze exception process. And, we’ve activated Bodhi, the system we provide our packagers to push updates to the updates-testing repository — in early development, any updates go straight into the main tree, but as we work on stabilizing the release, we add this additional process.

FUDCon Pune report

In addition to our big Flock conference (more on that last week, if you missed it), we have Fedora User and Developer conferences – “FUDCons” – every year in Latin America and in Asia/Pacific. This year’s APAC FUDCon was in Pune, India. Curious how it went? Read Kushal Das’s FUDCon Pune event report.

Is Fedora slowing down?

And finally this week… Fedora Contributor Jiří Eischmann provides some interesting commentary in a blog post with the somewhat-alarming title Growth of Fedora Repository Has Almost Stalled. It’s not as dire as all that, though – but definitely the opening to an important ongoing conversation. Jiří attributes this to the popularity of Copr, our service for building your own mini-repositories, with relaxed guidelines (it’s gotta be free software, and it’s gotta be legal for Fedora – after that, pretty much anything goes).

In parallel, there’s a conversation on the Fedora Big Data SIG (a “SIG” is a special interest group – a lightweight association of anyone interested in the topic) mailing list, debating the value to users of repackaging upstream software to Fedora’s traditional standards. Long-term contributor and packager Haïkel Guémar notes:

In real world, if upstream says: “don’t use Fedora packages because they have crippled features”, no end users will ever use our packages. And no user base, means that we have no leverage to fix these poor practices, this is a vicious circle.

The software world has really changed since the beginning of Fedora. A lot of the problems are the same, but the scale is different and so are the pressures. Some of our work on is meant to address this, but we’ve got a long path of continuous improvement ahead of us still. We want users to be able to get the best, most useful open source software with the quality assurances we’re used to with the Fedora name – but we also have to figure out how to fit in with a world where that isn’t always the best model for our users. And, as Jiří says, we also need to build a better path for software which does fit best in the main repositories.

July 31, 2015

gedit snippet generating RPM spec %changelog line

I have done a little bit of work to make gedit fully working in RHEL-7 (appropriate packages with almost all my fixes will most likely be available in the next minor release of RHEL-7). Generally my feeling is that gedit is way less powerful (and writing Python plugins for it more complicated) than the similar stuff in vim. However, there are some things which are shockingly powerful. One of the things which quite surprised me is snippets. They are way more powerful than just simple text replacements. With this functionality I can create even something as crazy as this snippet generating a %changelog line in RPM spec files:

import rpm
import datetime

# Parse the spec file currently open in gedit; the snippets plugin
# substitutes $GEDIT_CURRENT_DOCUMENT_PATH before running the code
# (this line is a reconstruction -- adjust if your setup differs)
spec = rpm.spec($GEDIT_CURRENT_DOCUMENT_PATH)

date ="%a %b %d %Y")
headers = spec.packages[0].header
version = headers['Version']
release = ".".join(headers['Release'].split(".")[:-1])
packager = headers['Packager']
newheader = "* %s %s - %s-%s\n- " % (date, packager, version, release)
return newheader

To install this snippet, open Manage Snippets… in the gedit application menu, and in the RPM Spec section create a new snippet and copy the code above into the text box on the right side. Give this snippet a tab trigger (I have "chi" there) or a key shortcut and you are done. Then activate the snippet at the top of the %changelog section.

Most popular web browsers among Fedora users

Google Chrome is the most popular browser in the world. It is so popular that some call it a new Internet Explorer. But that’s based on global stats. At Red Hat, I’m responsible for web browsers, so I wondered which web browsers are the most popular among Fedora users. So I asked through the Fedora accounts on Facebook and Google+: “Which browser do you use the most in Fedora?”

I didn’t look for exact numbers. It’s clear that such polls can’t be 100% representative and for instance Google+ users have inclination to use Google products which can be seen on the comparison of results from Facebook and Google+. However, I think the results give you a rough idea of what browsers are popular among Fedora users. And the results are:

The surveys differed a bit. G+ supports polls, but only up to five options. So I pre-selected five browsers I expected would be most popular, and told users to write a browser of their choice in the comments if it wasn’t among the pre-selected options. Facebook natively doesn’t support polls, so users wrote their preferences in comments. Even though other browsers were at a disadvantage by not being pre-selected, the results were very similar: none of them got more votes than any of the pre-selected five. The total number of votes on Facebook was considerably lower than on Google+ (1262). And the findings?

  • Firefox and Chrome/Chromium are the only relevant browsers among Fedora users. They take up to 95% of the pie. Opera and Epiphany were a bit more popular among Fedora users on Facebook, but neither of them exceeded 5%. All other browsers got just a couple of votes: Midori, Konqueror, SeaMonkey, Pale Moon, Vivaldi, Lynx,…
  • Firefox was the winner, a pretty clear one on Facebook and a close one on Google+ (49% vs 48%). Firefox is the default browser, so it’s not surprising.
  • What really surprised me is the huge difference between Chrome and Chromium. I thought there would be more people who prefer open source solutions, but apparently a lot more people prefer convenience even among Fedora users. You can find Chromium in alternative repos and it’s easy to install, but it doesn’t include Flash player and other closed source goodies. With Chrome, you get it all with an installation of one package. In terms of numbers of users, Chromium is pretty much irrelevant if you compare it to Chrome.
  • Quite a few people said that they were primarily using Firefox, but they had Chrome for Flash. When Flash finally goes away, Chrome will lose one of its significant advantages.
  • Opera used to have a market share of ~10% among Linux users. In this survey, it got 4.9% (FB) and 1.7% (G+). It took them more than a year to release the new generation of Opera (based on Chromium) for Linux after they discontinued the original Opera (12.16). Apparently most users left and never came back (I'm one of them).

Flock Update

So the schedule for Flock is finally fixed and I have to update some things from my last post. First, the practical part of the Wallpaper Hunt is now scheduled for Friday instead of Saturday. Additionally, I will help Máirín Duffy on Saturday morning with the Inkscape and GIMP Bootcamp; guess which part I will do.

For the Wallpaper Hunt, if you want to come with us to the place we have scouted, we should meet at the hotel entrance by 19:15 at the latest. The trip takes 33 minutes, a short walk and then a bus; a ticket costs just $1. We will arrive at the location around 20:00, so we have a little time in daylight. Sunset is at 20:18 and the sun will go down at 296°; getting the lighthouse and the sunset together will be hard, but maybe we will find a good position. Otherwise, the lake is only a few hundred meters away, and a sunset over a lake is always a nice shot. The blue hour starts at 21:19 and lasts 40 minutes that day. We can take the bus back at 21:49 and would arrive at the hotel at 22:22. If you want to go by car, that is fine with me; that is our destination.

We moved the planet!

The Fedora planet that is. :)

The old site: will continue to work and redirect to the new site: for quite some time to come, but you should go update your feed links now while you are seeing this. :)

Why did we make this change? It's part of our effort to make sure everything in * uses https. This allows us to set an HSTS header to get all browsers to use https with sites, and also get that preloaded in some browsers. The planet shows blog posts from the entire Fedora community, and those posts may well have http links in them, or https links to self-signed certs or the like, so there isn't any way to make it fully valid https. So we moved it to its own domain where it can happily use http until browsers no longer accept it.
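For reference, the HSTS header mentioned above is a single response header; in Apache's mod_headers syntax it could look like this (the max-age and flags here are illustrative, not the production configuration):

```
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
```

Once set, browsers that have seen the header (or that ship the domain in their preload list) will refuse to connect over plain http.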

This brings us one step closer to a https future. :)

Mozilla CEO Sends Angry Open Letter To Microsoft Over Changing Windows 10 Browser Defaults

Originally posted on TechCrunch:

With Edge in Windows 10, Microsoft has finally delivered a capable browser to replace the aging Internet Explorer. Microsoft likes Windows 10 so much, it makes Edge the default browser in Windows 10, even when you’re updating from a system that previously used Chrome or Firefox as the default.

Unsurprisingly, Mozilla is not amused and its CEO Chris Beard today wrote an open letter to Microsoft CEO Satya Nadella to complain that the company is taking away its users’ choices and ignored Mozilla’s calls for keeping the default during the upgrade process.

“When we first saw the Windows 10 upgrade experience that strips users of their choice by effectively overriding existing user preferences for the Web browser and other apps, we reached out to your team to discuss this issue,” Beard writes. “Unfortunately, it didn’t result in any meaningful progress, hence this letter.

[pullquote author=”Mozilla CEO Chris Beard” align=”right”]”Sometimes we see great…

View original 355 more words

[Event Report] July Python Pune Devsprint 2015

July Python Pune Developer's Sprint was hosted at Red Hat, Pune (India) office on 25th July, 2015.

Around 30 people joined this event.

It was a sprint planning session for the PyconIndia 2015 Devsprint.

The main agenda of the event was to encourage attendees to contribute to upstream Python projects.

These people mentored on the following projects.

Below is the list of Pull requests sent during the sprint.

Ankit Rana and Tejas Sathe created a project, 'datahub'. It is a GitHub clone specifically for datasets, so that people can share datasets to be used in data analysis.

Amit Ghadge created a project, 'hist', a history synchronizer to sync bash history between two machines.

Some attendees learned how to read the codebase of a Python project and debug the code based on an issue, in projects like openstack-ironic, mailpile, and pandas.

We need more mentors who are contributing to upstream projects, so that we can mentor newbies and help them get into open source.

Thanks to Red Hat, Pune for the venue, and to the mentors and attendees for making the event successful.

GUADEC 2015 badges

Just one week to GUADEC 2015 in Gothenburg! 7 days!!!

I hope you are all as excited as we are in the local organizing team.

We made a nice badge if you want to blog about the conference. Grab it here and if you are quick, you might be the first person to use it!

And don’t forget to register!

See you all in a bit.


Testing systemd-networkd based Fedora 22 AMI(s)

A few days back I wrote about a locally built Fedora 22 image that has systemd-networkd handling the network configuration. You can test that image locally on your system, or on an OpenStack cloud. In case you want to test it on AWS, we now have two AMI(s), one in us-west-1 and the other in ap-southeast-1. Details about the AMI(s) are below:

Region          AMI Name           AMI ID        Virtualization
ap-southeast-1  fedora22-networkd  ami-e89895ba  HVM
us-west-1       fedora22-networkd  ami-c9e21e8d  HVM

Start an instance from these images and log in. In case you want to use different DNS settings, feel free to remove the /etc/resolv.conf symlink and put a normal /etc/resolv.conf file in place with the content you want.
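For example (a sketch; the nameserver address is illustrative, and RESOLV_CONF should point at /etc/resolv.conf on a real instance):

```shell
# Replace the resolv.conf symlink with a static file.
# RESOLV_CONF defaults to a scratch path so you can dry-run it safely.
RESOLV_CONF="${RESOLV_CONF:-./resolv.conf.test}"
rm -f "$RESOLV_CONF"                       # remove the symlink (or old file)
printf 'nameserver\n' > "$RESOLV_CONF"  # write the DNS server you want
```

On the instance itself, run the same two commands as root against /etc/resolv.conf.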

CIL – Part2: Module priorities

In my previous blog post, I talked about CIL performance improvements. In this post, I would like to introduce another cool feature called module priorities. If you check the link, you can see a nice blog post published by Petr Lautrbach about this new feature.

With the new SELinux userspace, we are able to use priorities for SELinux policy modules. It means you can ship your own ipa policy module, based on the distribution policy module but with additional rules, and load it with a higher priority. No more different names for policy modules; the higher priority wins.

# semodule --list-modules=full | grep ipa
400 ipa pp
100 ipa pp

Of course, you can always say you want to use the distro policy module and just add additional fixes. Yes, that works fine for cases where you add just minor fixes which you are not able to get into the distro policy for some reason. You can also package this local policy, as Lukas Vrabec wrote in his blog.

Another way to deal with this is to ship the SELinux policy for your application entirely on your own, without being part of the distro policy at all. Yes, we do see these cases.

For example

# semodule --list-modules=full | grep docker
400 docker pp

But what are the disadvantages of this approach?

* you need to know how to write SELinux policy
* you need to maintain this policy and keep it in sync with the latest distro policy changes
* you need to do “hacks” in your policies if you need interfaces for types shipped by the distro policy
* you would have to get your policy upstream and check that there is no conflict with the distribution policy if they merge from the same upstream

From the Fedora/RHEL point of view, it was always a problem how to deal with policies for big projects like Cluster, Gluster, OpenShift and so on. We tried to move these policies out of the distro policy, but it was really hard to do correct rpm handling, and then we faced the points mentioned above.

So is there an easy way to deal with it? Yes, there is. We ship a policy for a project in our distribution policy, and the project takes this policy, adds additional fixes, and creates pull requests against the distribution policy; if the timelines differ, the project ships its own copy. And that's it! It can be done easily using module priorities.

For example, we have Gluster policy in Fedora by default.

# semodule --list-modules=full | grep gluster
100 glusterd pp

And now the Gluster team needs to do a new release, but it causes some SELinux issues. The Gluster folks can take the distribution policy, add additional rules, and package it.

Then we will see something like

# semodule --list-modules=full | grep gluster
100 glusterd pp
400 glusterd pp
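For the record, loading a module at a specific priority is done with semodule's -X option in the new userspace (the module file name here is illustrative):

```shell
# load the patched module at priority 400; the distribution copy
# stays at priority 100 and is shadowed, not replaced
semodule -X 400 -i glusterd.pp
```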

In the meantime, the Gluster folks can create pull requests with all their changes against the distribution policy while still shipping the same policy. The Gluster policy is part of the distribution policy, it is easily upstreamable, and moreover, it can be disabled in the distribution policy by default.

# semodule --list-modules=full | grep gluster
400 gluster cil
100 glusterd pp disabled

$ matchpathcon /usr/sbin/glusterfsd
/usr/sbin/glusterfsd system_u:object_r:glusterd_exec_t:s0

This model is really fantastic and gives us answers to a lot of issues.

LINE Messenger on Linux

I recently packaged purple-line for Fedora 22 and later; it is a plugin that brings LINE Messenger to Pidgin:

# dnf install purple-line

You can then see the plugin in Pidgin.

New badge: Keepin Fedora Beautiful (F23) !
Keepin Fedora Beautiful (F23): Submitted a Supplemental Wallpaper Idea for Fedora 23
New badge: Def Keepin Fedora Beautiful (F23) !
Def Keepin Fedora Beautiful (F23): Got a Supplemental Wallpaper included in the Fedora 23 Release!
New badge: Nuancier F23 !
Nuancier F23: Voted in the supplemental wallpapers election for the Fedora 23 release

July 30, 2015

Some GTK+ sightings

I had a chance to demonstrate some GTK+ file chooser changes that have accumulated in the last year, so I thought I should share some of this material here.

File Chooser 1
All the screenshots here are of the testfilechooser application in GTK+ master as of today (some bugs were found and fixed in the process).


Search is the area of the file chooser that I have spent a bit of time on myself this cycle. We've improved the internals of the search implementation to match the sophistication of nautilus:

  • The current folder is already loaded, so we search through that data without any extra IO.
  • We ask tracker (or the platform's native indexer) for search results.
  • Locations that are not covered by that, we crawl ourselves, skipping remote locations to avoid excessive network traffic.

File Chooser 2
The easiest way to start a search is to just type a few characters – the search entry will appear and the search will begin. (You can of course also use the Search button in the header, or hit Ctrl-F to reveal the search bar.)
File Chooser 6
If you type a character that looks like the beginning of a path (~, / or .), we bring up the location entry instead to let you enter a location.

Note that we show helpful hints in the subtitle in the header: if you are searching, we tell you where. If you are expected to enter a URL, we tell you that.
File Chooser 3
For search results, we show a location column that helps to determine quickly where a result comes from – results are sorted so that results from the current folder come first. Recent files also have a location column. The formatting of the modification time column has been improved, and it can optionally show times in addition to dates.

As you can also see here, the context menu in the file list (as well as the one in the sidebar) has been changed to a popover. The main motivation for this is that we can now trigger it with a long press on touch screens, which does not work well with a menu.

File Chooser 4
If the search takes longer than a few moments, we show a spinner. Hitting Escape will stop the search. Hitting it again will hide the search entry. Hitting it one more time will close the dialog.

File Chooser 5
If the search comes up empty, we tell you about it.

File Chooser 7
As I already mentioned, we don’t crawl remote locations (and tracker doesn’t index them either). But we still get results for the current folder. The footer below the list informs you about this fact.


The GtkPlacesSidebar has been shared between nautilus and the file chooser since a few years ago. This cycle, it has been rewritten to use a GtkListBox instead of a GtkTreeView. This has a number of advantages: we can use real widgets for the rows, and things like the eject button are properly themeable and accessible.

File Chooser 8
Another aspect that was improved in the sidebar is the drag-and-drop of files to create bookmarks. We now show a clear drop location and gray out all the rest. Some of these changes are a direct result of user testing on nautilus that happened last year.


The sidebar used to list all your removable devices, remote mounts, as well as special items for ‘Enter Location’, etc. To prevent the list from getting too long, we have moved most of these to a new view, and just have a single “Other Locations” item in the sidebar to go there.
File Chooser 9
As you can see, the places view also has enough room to offer ‘Connect to Server’ functionality.

File Chooser 10
It has completion for known server locations.

File Chooser 11
We show progress while connecting to a server.

File Chooser 12
And after the connection succeeded, the location shows up under ‘Networks’ in the list, from where it can be unmounted again.

File Chooser 13
The recent server locations are also available in a popover.

File Chooser 14
If you don’t have any, we tell you so.

Save mode

All of the improvements so far apply to both Open mode and Save mode.

File Chooser 15
The name entry in Save mode has been moved to the header (if we have one).

File Chooser 16
For creating directories, we now use a popover instead of an embedded entry in the list.

File Chooser 18
This lets us handle invalid names in a nicer way.


All of these changes will appear in GTK+ 3.18 in September. And we are not quite done yet – we may still get a modernized path bar this cycle, if everything works out.

The improvements that I have presented here are not all my work.  A lot of credit goes to Allan Day, Carlos Soriano, Georges Basile Stavracas Neto, and Arc Riley. Buy them a drink if you meet them!

New badge: Flock 2015 Speaker !
Flock 2015 Speaker: You gave a presentation at Flock 2015, the Fedora Contributor Conference
New badge: Flock 2015 Organizer !
Flock 2015 Organizer: You were an organizer of the Flock 2015 conference -- a huge responsibility!
New badge: Domo Arigato !
Domo Arigato: You're a proud member of the Fedora Robotics SIG
Please sign off your patches

One aspect of open source that appeals to many people is the idea that anyone can contribute. All it takes is a great idea, a little bit of work, and you can have fame, glory, and more conference t-shirts than you know what to do with. The reality is often not quite that simple, for many reasons. A common complication is software licencing. There are plenty of other places talking about open source software licencing and the complications thereof, so this one will be narrowly focused and has a simple request: when submitting patches for the Linux kernel, whether to official kernel mailing lists or to Fedora, please remember to sign off your patches.

Luis Rodriguez has a great blog post on the history of the DCO. In oversimplified terms, the DCO is an assertion of 'Yes, I am permitted to include this code in this open source project'. Many projects, including the Linux kernel, require this assertion before taking any patch. Adding it is simple enough: simply add Signed-off-by: Your Name <your@e-mail.address> to the bottom of your commit text. If you can make the assertions in the DCO, you can add a Signed-off-by.

Is the Signed-off-by needed in all patches? Yes. Even cleanup patches? Yes. Even patches that just add a few device ids? Yes. Even patches that don't do anything useful? If a patch isn't useful it shouldn't be getting merged, but yes. A pattern I've seen a few times is

  • Person has problem
  • Person googles for problem
  • Person finds someone else had the same problem, someone else had a fix for the issue
  • Person tests the fix -- it works!
  • Person excitedly e-mails the fix to maintainers to get it included

Often the fix lacks a proper DCO, so even if the patch is perfect in every other way, the maintainers cannot take it. This leaves everyone feeling frustrated. But just because a patch was submitted once without a proper DCO doesn't mean it can't be re-submitted later; if you can get in contact with someone (e.g. the original author or a co-maintainer) who can make the assertions of the DCO, the patch can be resubmitted. Until that happens, there isn't much the maintainers of the project can do with the patch.

It's vital to the success of open source projects that licences are followed. So please, if you want your patch included make sure to add your Signed-off-by.

Using Ansible to add a NetworkManager connection

The Virtual Machine has two interfaces, but only one is connected to a network. How can I connect the second one?

To check the status of the networks with NetworkManager's command line interface (nmcli), run:

$ sudo nmcli device
DEVICE  TYPE      STATE         CONNECTION
eth0    ethernet  connected     System eth0
eth1    ethernet  disconnected  --
lo      loopback  unmanaged     --

To bring it up manually:

$ sudo nmcli connection add type ethernet ifname eth1  con-name ethernet-eth1
Connection 'ethernet-eth1' (a13aeb2c-630f-4de6-b735-760264927263) successfully added.

To automate the same thing via Ansible, we can use the command module, but that will execute every time unless we check whether the interface already has an IP address; if it does, we want to skip it. We can check that using the predefined facts variables. Each interface has a variable of the form ansible_<interface>, a dictionary containing details about the interface. Here is what my host has for its interfaces:

        "ansible_eth0": {
            "active": true,
            "device": "eth0",
            "ipv4": {
                "address": "",
                "netmask": "",
                "network": ""
            },
            "ipv6": [
                {
                    "address": "fe80::f816:3eff:fed0:510f",
                    "prefix": "64",
                    "scope": "link"
                }
            ],
            "macaddress": "fa:16:3e:d0:51:0f",
            "module": "virtio_net",
            "mtu": 1500,
            "promisc": false,
            "type": "ether"
        },
        "ansible_eth1": {
            "active": true,
            "device": "eth1",
            "macaddress": "fa:16:3e:38:31:71",
            "module": "virtio_net",
            "mtu": 1500,
            "promisc": false,
            "type": "ether"
        }
You can see that, while eth0 has an ipv4 section, eth1 has no such section. Thus, to gate the playbook task on the presence of the variable, use a when clause.

Here is the completed task:

  - name: Add second ethernet interface
    command: nmcli connection  add type ethernet ifname eth1  con-name ethernet-eth1
    when: ansible_eth1.ipv4 is not defined

Now, there is an Ansible module for NetworkManager, but it is in the 2.0 version of Ansible, which is not yet released. I want this to work with the version of Ansible I (and my team) have installed on Fedora 22. Once 2.0 comes out, many of these “one-offs” will use the core modules.
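For context, the task above would sit in a playbook roughly like this (the hosts pattern and privilege escalation are placeholders for your own inventory setup):

```yaml
- hosts: all
  become: true
  tasks:
    - name: Add second ethernet interface
      command: nmcli connection add type ethernet ifname eth1 con-name ethernet-eth1
      when: ansible_eth1.ipv4 is not defined
```

On the second run, ansible_eth1 will have an ipv4 section, so the when clause skips the task and the play stays idempotent.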

Got the issue resolved and back to work after exams :)
Remember the problem I ran into while doing the integration?

UndefinedError at /badges/  'django.contrib.auth.models.AnonymousUser object' has no attribute 'is_read_only'.
You will also be able to see the error by going in to the site:

This error did not appear when I was logged into the website, yet it appeared each time I tried to access the website as an anonymous user. With the help of puiterwijk from the #fedora-admin IRC channel, I got to know that it was a problem with the templates I had overridden. The problem was in the widgets/user_navigation.html template. I had moved the following part to the bottom of that template because, according to the new design, the account information of a logged-in user should be displayed on the right of the header.

<span class="user-info">
{{ macros.inbox_link(request.user) }}
{{ macros.moderation_items_link(request.user, moderation_items) }}
{%- if settings.KARMA_MODE != 'hidden' or settings.BADGES_MODE != 'hidden' -%}
({{ macros.user_long_score_and_badge_summary(user) }})
{%- endif -%}
<a href="{{ request.user.get_absolute_url() }}">{{ request.user.username|escape }}</a>
</span>

This caused the error because the code above had previously been inside the {%- if request.user.is_authenticated() -%} clause. So now you can see the problem, right? Because I moved the code out of that if condition without noticing, the application looks for user information even when the user is not logged in, which produced the error. So, when overriding templates, one needs to be extra careful not to make mistakes that harm the application's functionality.
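In other words, the moved markup needs to stay wrapped in the authentication check, roughly like this (a sketch based on the snippet above):

```
{%- if request.user.is_authenticated() -%}
<span class="user-info">
{{ macros.inbox_link(request.user) }}
{{ macros.moderation_items_link(request.user, moderation_items) }}
{%- if settings.KARMA_MODE != 'hidden' or settings.BADGES_MODE != 'hidden' -%}
({{ macros.user_long_score_and_badge_summary(user) }})
{%- endif -%}
<a href="{{ request.user.get_absolute_url() }}">{{ request.user.username|escape }}</a>
</span>
{%- endif -%}
```

With the guard in place, anonymous users never reach the attribute lookup that raised the UndefinedError.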

According to the feedback I received from the people present at FUDCon, the hamburger menu needs to be removed, because existing users will not immediately spot that the 'ALL', 'FOLLOWED' and 'UNANSWERED' links appear when hovering over the hamburger icon. Most of them could not understand why there is a hamburger icon on the secondary header. The following mockups show my idea of how it is going to be. I first decided to include those three links in the body area so that they can be spotted easily without a second step. But then I thought that those three links are common to most of the pages, and hence they should be in an area common to most of the pages, which is the secondary header. To blend with the blue image used in the secondary header, I used a light blue color for the links. The following mockups, where I have modified the desktop and mobile views of the Badges page, reflect that idea.

According to the feedback, it has also been suggested to use a footer similar to the one in . Hence the modified design of the footer is also depicted in the mockups below. As always, feedback on these is welcome.

Badges page desktop view

Badges page mobile view

It has been a very busy week for me due to my mid-semester exams, and it is time to resume the work. As the GSoC pens-down date is near, I hope to work hard on the integration and get a tangible product soon.
To exec or transition that is the question...
I recently received a question about writing policy via LinkedIn.

Hi, Dan -

I am working on SELinux right now and I know you are an expert on it.. I believe you can give me a help. Now in my policy, I did in myadm policy like
require { ...; type ping_exec_t; ...;class dir {...}; class file {...}; }

allow myadm_t ping_exec_t:file { execute execute_no_trans };

Seems the ping is not work, I got error
ping: icmp open socket: Permission denied

Any ideas?

My response:

When running another program there are two things that can happen:
1. You can either execute the program in the current context (which is what you did).
This means that myadm_t needs to have all of the permissions of ping.

2. You can transition to the executable's domain (ping_t).

We usually use interfaces for this.


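For example, using the refpolicy interface for ping (a sketch; it assumes the ping module and its interfaces are available in your policy sources):

```
# in myadm.te: run ping in its own ping_t domain instead of
# executing it inside myadm_t
optional_policy(`
    ping_domtrans(myadm_t)
')
```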

I think if you looked at your AVCs you would probably see something about myadm_t needing the net_raw capability.

sesearch -A -s ping_t -c capability
Found 1 semantic av rules:
   allow ping_t ping_t : capability { setuid net_raw } ;

net_raw access allows ping_t to create and send icmp packets.  You could add that to myadm_t, but that would allow it
to listen at a low level to network traffic, which might not be something you want.  Transitioning is probably better.


Transitioning could cause other problems, like leaked file descriptors or bash redirection.  For example if you do a
ping > /tmp/mydata, then you might have to add rules to ping_t to be allowed to write to the label of /tmp/mydata.

It is your choice about which way to go.

I usually transition if there is a lot of access needed, but if there is only limited access that I deem not too risky, I
exec and add the additional access to the current domain.
CNS 2015 - Day 1

The notes have not been proofread. Please do your research before you pick anything from this post. It is meant to be a rough sketch of everything that I heard and noted at the conference. Since quite a bit of this is new to me, it is bound to be inaccurate, unspecific, and possibly even incorrectly quoted.

Day 0 Keynote - "Birdsong"

  1. Adrienne Fairhall
  2. Birds learn their songs by trial and error.
  3. The Zebra Finch has a single song.
  4. STDP may require sustained depolarisation or bursting to occur.
  5. The structure of the basal ganglia is pretty conserved in all mammals.
  6. EI -> attractor -> stability
  7. Dopamine's effect is U-shaped in the avalanche distribution in the basal ganglia; therefore, both too much and too little will give negative results.
  8. Q: Why do you need variability for learning? (Structured variability)
  9. Q: How do we isolate the variability that was "good"?

Day 1 keynote - Wilson-Cowan equations

  1. Jack Cowan
  2. Wilson-Cowan equations.
  3. Attractor dynamics in neural systems.
  4. Exhibit various stable behaviours
  5. Oscillations before settling to a fixed point
    1. Stable forms.
    2. In the Vogels self organising model
    3. CITE: paper in press
  6. Near a phase transition, no need to look at details of single neurons - you're not missing anything by ignoring single neuron details.

Limits of scalability of cortical network models

  1. Sacha van Albada
  2. Mechanism at \(N \rightarrow \infty\) is not the same mechanism at finite size.
  3. Inappropriate scaling can also cause the network to become unstable -> for example, cause large oscillations.
  4. Asynchronous irregular state, therefore, Gaussian inputs assumed
  5. LIF is like the rate model with white noise added to outputs
  6. So, while scaling you have to maintain effective connectivity and also maintain mean activities.
  7. Important to simulate at natural scale to verify.

Complex synapses as efficient memory systems

  1. Markus Benna.
  2. Dense coding.
  3. Also use SNR.
  4. Synaptic weight distribution gets wider and wider - diffusion process.
  5. Good synaptic memory model:
    1. Work with tightly bounded weights
    2. Online learning
    3. High SNR.
    4. Not too complicated
    5. Long life time
    6. CITE: Amit and Fusi 1994
  6. Cascade model of complex synapse
  7. Need a balance of LTP/LTD - otherwise your distribution is squished against one of the boundaries.

Self-organisation of computation in neural systems by interaction between homoeostatic and synaptic plasticity

  1. Sakyasingha Dasgupta
  2. Cell assembly properties
    1. Pattern completion
    2. I-O association
    3. Persistent activity
  3. Synaptic scaling is about 50-100 times slower than synaptic plasticity process

A model for spatially periodic firing in the hippocampal formation based on interacting excitatory and inhibitory plasticity

  1. Simon Weber
  2. If inhibition is not precise enough, you get periodic firing.
  3. Model of grid and place cells
CIL – Part1: Faster SELinux policy (re)build

As you probably know, we shipped new features related to SELinux policy store migration in Fedora 23. If you check the link, you can see more details about this change: some technical details, benefits, and examples of how to test it. In this blog series, called CIL, I would like to introduce this new feature and show you the benefits CIL brings.

One of the most critical parts of SELinux usability is the time consumed by SELinux operations such as policy installations or loading new policy modules. I guess you know what I am talking about. For example, you want to create your own policy module for your application and test it on your virtual machine. So you execute

semodule -i myapps.pp

and you are waiting, waiting, waiting and waiting.

You see the same if you try to disable a module

semodule -d rhcs

and you are waiting, waiting, waiting and waiting.

It directly depends on the policy language used and on the number of policy rules which need to be rebuilt when SELinux policy modules are managed. You can read more about policy modules and the kernel policy in my previous blog post.

And at this point, CIL brings really big performance improvements. Just imagine: no more “waiting, waiting, waiting” on a policy installation. No more “waiting, waiting, waiting” when you load your own policy module.

But enough words; let me show you some real numbers.


You can see really big differences for the chosen SELinux operations between a regular system with the old SELinux userspace (without CIL) and one with the new SELinux userspace (with CIL).

It means we can really talk about a ~75% speed-up for tools and apps which manage the SELinux policy.

Note: These numbers come from a Fedora 23 virtual machine, and all of these actions require a policy rebuild.

And it is not only about SELinux tools; there are also SELinux-aware applications, for example systemd, which loads the Fedora distribution policy during the boot process. You also get a big improvement there:

CIL: systemd[1]: Successfully loaded SELinux policy in 91.886ms.
REGULAR: systemd[1]: Successfully loaded SELinux policy in 172.393ms.

I believe you are now really excited to test this new feature, get your own numbers, and see how much faster SELinux tools like semodule and semanage are when they manipulate the policy.

Bodhi activated for Fedora 23
Yesterday the Bodhi activation point took place. For those not familiar with this...
Bodhi in Fedora 23 is Ready
Bodhi is available for testing packages for Fedora 23, but for the people who don’t know what it is...
Free Real-time Communications (RTC) at DebConf15, Heidelberg

The DebConf team have just published the first list of events scheduled for DebConf15 in Heidelberg, Germany, from 15 - 22 August 2015.

There are two specific events related to free real-time communications and a wide range of other events related to more general topics of encryption and privacy.

15 August, 17:00, Free Communications with Free Software (as part of the DebConf open weekend)

The first weekend of DebConf15 is an open weekend aimed at a wider audience than the traditional DebConf agenda. It includes some keynote speakers, a job fair and various other events on the first Saturday and Sunday.

The RTC talk will look at what solutions exist for free and autonomous voice and video communications using free software and open standards such as SIP, XMPP and WebRTC as well as some of the alternative peer-to-peer communications technologies that are emerging. The talk will also look at the pervasive nature of communications software and why success in free RTC is so vital to the health of the free software ecosystem at large.

17 August, 17:00, Challenges and Opportunities for free real-time communications

This will be a more interactive session; people are invited to come and talk about their experiences and the problems they have faced deploying RTC solutions for professional or personal use. We will try to look at some RTC/VoIP troubleshooting techniques as well as more high-level strategies for improving the situation.

Try the Debian and Fedora RTC portals

Have you registered for the Debian RTC portal? It can successfully make federated SIP calls with users of other domains, including Fedora community members trying the Fedora portal.

You can use it for regular SIP (with clients like Empathy, Jitsi or Lumicall) or WebRTC.

Can't get to DebConf15?

If you can't get to Heidelberg, you can watch the events on the live streaming service and ask questions over IRC.

To find out more about deploying RTC, please see the RTC Quick Start Guide.

Did you know?

Don't confuse Heidelberg, Germany with Heidelberg in Melbourne, Australia. Heidelberg down under was the site of the athletes' village for the 1956 Olympic Games.

Don't have a Fedora account? Come in here
If you are a Fedora user and would like to cooperate in the community, you can do so in many...
How to create a Fedora account
Are you a Fedora user who would like to collaborate with the project (translating, testing, tagging,...
dnf to replace fedup and upgrade.img

Yesterday a change proposal for Fedora 23 was presented on the Fedora development list which would ensure that system upgrades are no longer performed by fedup and upgrade.img, but solely by Fedora's new package manager dnf.

Until now, the separate tools fedup and upgrade.img have been used for system upgrades, but they work independently of the system's package manager. In addition, at least the latter has partly relied on undocumented and therefore unsupported systemd functionality, which has repeatedly caused problems in attempts to replace fedup with dnf.

dnf, by contrast, is to use systemd's offline update mode, which the systemd developers recommend, making upgrade.img unnecessary.

MkDocs arrives in the Fedora repos

MkDocs is a tool written in Python that lets us build a website from text files written in Markdown format. As its name suggests, the main goal of this application is to help us build a documentation website, which can be hosted anywhere, even on free hosting such as Read the Docs or GitHub Pages.

MkDocs is among the new features of Fedora 23, and for the impatient there is this copr repo with packages for Fedora 22.

To install MkDocs

On Fedora 22, enable the copr repo:

sudo dnf copr enable williamjmorenor/mkdocs-f22

Install the application:

sudo dnf install mkdocs

Building our first site

Let's create a new project with:

mkdocs new testing

This will create a new folder named testing; let's look at the contents of the folder:

cd testing
├── docs
│   └──
└── mkdocs.yml

Now we can use one of the features mkdocs offers: a live preview of our website with:

mkdocs serve

Open a browser and go to the address shown to see a preview of a site built with mkdocs, filled with sample content. Don't close the terminal, or the server will stop. Now we can edit our documentation, and every time we save a change we will see our page update.

Adding more pages to our documentation

We need to create a new file inside the /docs directory (it must have the .md extension).

touch ./docs

Add some content to the file:

vi ./docs/

Save the changes, and edit the mkdocs.yml file to add our new page:

vi mkdocs.yml

Add something like this:

site_name: Mi Documento
- ''
- ''

After saving the changes, we can see our page with the updated information.
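The page file names in the snippet above were lost from this copy of the post. A filled-in mkdocs.yml might look like the following; the file names index.md and about.md are hypothetical, and I am assuming the pages: key used by MkDocs versions of that era:

```yaml
site_name: Mi Documento
pages:
- 'index.md'
- 'about.md'
```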

Suppose we want more than one level among our pages; we can create a drop-down menu by adding something like:

site_name: Mi Documento
- ''
- ''
- Versiones:
    - 'Version 0.1': ''
    - 'Version 0.2': ''

Please note that for large files it is more convenient to create subdirectories for each different topic.

Finally, let's see how to change the theme of our documentation. For this we can use one of the included themes: edit the mkdocs.yml file and add the following line:

theme: amelia

The theme can be any of the themes included by default; you can see the full list of themes on this page.

Once satisfied with the preview of the document, we generate the website with:

mkdocs build


The website generated by mkdocs can be hosted practically anywhere; I recommend following the official guide to learn, for example, how to host documentation for free on Read the Docs.

July 29, 2015

Fedora 22 – Updates for 20150729 (some new housekeeping)

It’s that time again… Updated Spins carrying today’s date were created and should be fully hosted by 0100 UTC July 30th. Regulars will notice the dropping of all but a single master tracker; we kindly ask that you use that and only that tracker, mainly since it helps with tracker datum collection.

That permalink again is:

Seeds are highly appreciated, as is publication of torrents by those in the community.

Tips for running Fedora on a Raspberry Pi 2

This is a list of tips I’m using while running Fedora on my Raspberry Pi 2.

To minimize writes to the SD card, I use tmpfs as much as possible:

# systemctl unmask tmp.mount
# systemctl enable tmp.mount

Add noatime,discard to /etc/fstab

# cat /etc/fstab

UUID=e098e36f-f409-44cb-9d8e-9d5c0e2ed9c9 / ext4 defaults,noatime,discard 1 1
/dev/mmcblk0p1 /boot vfat defaults,noatime 0 0

# mount -o remount /
# mount -o remount /boot
# fstrim -v /

To make the journal write only to tmpfs, set this in /etc/systemd/journald.conf:
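The setting itself is missing from this copy of the post. The standard journald option for keeping the journal only in memory (tmpfs) is Storage=volatile, so the intended fragment was presumably:

```ini
[Journal]
Storage=volatile
```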


To set the country in the wifi adapter, add your country code to /etc/modprobe.d/cfg80211.conf:

options cfg80211 ieee80211_regdom=ES

To load the driver for the random number generator, add a line to /etc/modules-load.d/raspberrypi.conf:


# dnf install rng-tools
# systemctl start rngd.service

By default I got the powersave CPU governor; to improve performance, set it to ondemand.

As suggested by Diogo, there is a cpupower.service that can be used to set the CPU governor:

# dnf install kernel-tools
# vi /etc/sysconfig/cpupower

CPUPOWER_START_OPTS="frequency-set -g ondemand"
CPUPOWER_STOP_OPTS="frequency-set -g ondemand"

# systemctl enable cpupower.service
# systemctl start cpupower.service

Download and install fedorazram and fake-hwclock from my copr. As there is no support in Copr for the armv7hl arch yet, you have to download the packages manually and install them (they are noarch):

Number of builds in Copr over time

Some interesting statistics about Copr:

Number of builds in the Copr Build Service over time


New badge: Internationalization Team Member!
You are a proud member of the Fedora Internationalization team.
Remote code execution via serialized data

Most programming languages contain features that, used correctly, are incredibly powerful, but used incorrectly can be incredibly dangerous. Serialization (and deserialization) is one such feature, available in most modern programming languages. As mentioned in a previous article:

“Serialization is a feature of programming languages that allows the state of in-memory objects to be represented in a standard format, which can be written to disk or transmitted across a network.”


So why is deserialization dangerous?

Serialization and, more importantly, deserialization of data are unsafe due to the simple fact that the data being processed is implicitly trusted as being “correct.” So if you’re taking data such as program variables from an untrusted source, you’re making it possible for an attacker to control program flow. Additionally, many programming languages now support serialization not just of data (e.g. strings, arrays, etc.) but also of code objects. For example, with Python pickle() you can actually serialize user-defined classes: you can take a section of code, ship it to a remote system, and have it executed there.

Of course this means that anyone with the ability to send a serialized object to such a system can now execute arbitrary code easily, with the full privileges of the program running it.
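As a minimal sketch of why this is dangerous, consider a class whose __reduce__ method smuggles a call to eval() into the pickle stream; anyone who unpickles the bytes runs that call, with the class and expression here being purely illustrative:

```python
import pickle

class Evil:
    # __reduce__ tells pickle how to reconstruct the object; here it
    # says "call eval('40 + 2')", so unpickling runs attacker-chosen code.
    def __reduce__(self):
        return (eval, ("40 + 2",))

payload = pickle.dumps(Evil())
result = pickle.loads(payload)  # eval runs here; no Evil object is ever rebuilt
print(result)  # → 42
```

Replace the arithmetic with os.system(...) and the same mechanism yields arbitrary command execution.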

Some examples of failure

Unlike many classes of security vulnerabilities, you cannot really accidentally create a deserialization flaw. Unlike memory management flaws, for example, which can easily occur due to a single off-by-one calculation or misuse of a variable type, the only way to create a deserialization flaw is to use deserialization. Some quick examples of failure include:

CVE-2012-4406 – OpenStack Swift (an object store) used Python pickle() to store metadata in memcached (which is a simple key/value store and does not support authentication), so an attacker with access to memcached could cause arbitrary code execution on all the servers using Swift.

CVE-2013-2165 – In JBoss’s RichFaces the classes which could be called were not restricted allowing an attacker to interact with classes that could result in arbitrary code execution.

There are many more examples spanning virtually every major OS and platform vendor, unfortunately. Please note that virtually every modern language includes a serialization mechanism that is not safe to use by default (Perl Storable, Ruby Marshal, etc.).

So how do we serialize safely?

The simplest way to serialize and deserialize data safely is to use a format that does not include support for code objects. Your best bet for serializing almost all forms of data safely in a widely supported format is JSON. And when I say widely supported, I mean everything from Cobol and Fortran to Awk, Tcl and Qt. JSON supports pairs (key:value), arrays and elements, and within these a wide variety of data types including strings, numbers, objects (JSON objects), arrays, true, false and null. JSON objects can contain additional JSON objects, so you can, for example, serialize a number of things into discrete JSON objects and then shove those into a single large JSON document (using an array, for example).
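A short sketch of the difference: JSON round-trips plain data faithfully, and refuses outright to encode anything executable (the record contents are made up for illustration):

```python
import json

record = {"user": "alice", "roles": ["admin", "ops"], "active": True}

wire = json.dumps(record)      # plain text, safe to log or transmit
restored = json.loads(wire)
assert restored == record      # data survives the round trip

# There is no way to embed a code object: json.dumps() raises
# TypeError instead of silently serializing something executable.
try:
    json.dumps(len)
except TypeError:
    refused = True
```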

Legacy code

But what if you are dealing with legacy code and can’t convert to JSON? On the receiving (deserializing) end you can attempt to monkey-patch the code to restrict the objects allowed in the serialized data. However, most languages do not make this very easy or safe, and a determined attacker will be able to bypass the restrictions in most cases. An excellent paper is available from BlackHat USA 2011 which covers a number of clever techniques to exploit Python pickle().
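For Python specifically, the standard library documents a "restricting globals" pattern: subclass pickle.Unpickler and override find_class with an allow-list. It is better than nothing, though, as noted above, determined attackers have bypassed such filters:

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    # Allow-list of (module, name) pairs; everything else is rejected.
    ALLOWED = {("builtins", "set"), ("builtins", "frozenset")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"forbidden global {module}.{name}")

# Plain containers use dedicated pickle opcodes, so they load fine:
data = pickle.dumps({"a": [1, 2]})
print(SafeUnpickler(io.BytesIO(data)).load())   # → {'a': [1, 2]}

# A payload that references an arbitrary global (like eval) is refused:
try:
    SafeUnpickler(io.BytesIO(pickle.dumps(eval))).load()
except pickle.UnpicklingError:
    print("blocked")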

What if you need to serialize code objects?

But what if you actually need to serialize and deserialize code objects? Since it’s impossible to determine if code is safe or not you have to trust the code you are running. One way to establish that the code has not been modified in transit, or comes from an untrusted source is to use code signing. Code signing is very difficult to do correctly and very easy to get wrong. For example you need to:

  1. Ensure the data is from a trusted source
  2. Ensure the data has not been modified, truncated or added to in transit
  3. Ensure that the data is not being replayed (e.g. sending valid code objects out of order can result in manipulation of the program state)
  4. Ensure that if data is blocked (e.g. blocking code that should be executed but is not, leaving the program in an inconsistent state) you can return to a known good state

To name a few major concerns. Creating a trusted framework for remote code execution is outside the scope of this article, however there are a number of such frameworks.


If data must be transported in a serialized format use JSON.  At the very least this will ensure that you have access to high quality libraries for the parsing of the data, and that code cannot be directly embedded as it can with other formats such as Python pickle(). Additionally you should ideally encrypt and authenticate the data if it is sent over a network, an attacker that can manipulate program variables can almost certainly modify the program execution in a way that allows privilege escalation or other malicious behavior. Finally you should authenticate the data and prevent replay attacks (e.g. where the attacker records and re-sends a previous sessions data), chances are if you are using JSON you can simply wrap the session in TLS with an authentication layer (such as certificates or username and password or tokens).