August 26, 2016

radv: status update or is dota2 working yet?
Clickbait titles for the win!

First up, massive thanks to my major co-conspirator on radv, Bas Nieuwenhuizen, for putting in so much effort on getting radv going.

So where are we at?

Well, this morning I finally found the last bug that was causing missing rendering in Dota 2: we were missing support for a compressed texture format that Dota 2 uses. So Dota 2 now renders. I have no great performance comparison to post yet because my CPU is 5 years old and can barely get close to 30fps with either GL or Vulkan. I think we know of a couple of places that could be bottlenecking us on the CPU side. The radv driver is currently missing hyper-z (90% done), fast color clears and DCC, which should all be GPU-side speedups. Also, running the phoronix-test-suite Dota 2 tests sometimes works, sometimes hangs on a thread lock, and sometimes crashes; I think we have some memory corruption somewhere that it collides with.

Other status bits: the Vulkan CTS test suite contains 114598 tests. A run (driven by piglit) a few hours before I fixed Dota 2 stood at:
[114598/114598] skip: 50388, pass: 62932, fail: 1193, timeout: 2, crash: 83

So that isn't too bad a showing; we know some missing features account for some of the fails. A lot of the crashes are a single assert firing in the CTS, which I don't think indicates a real problem.

We render most of the Sascha Willems demos fine.

I've tested The Talos Principle as well. The texture fix renders a lot more on screen, but we are still seeing large chunks of blackness where I think there should be trees in-game; the menus etc. all seem to load fine.

All this work is on the semi-interesting branch of

It has only been tested on VI AMD GPUs. Polaris worked previously, but something has since broken it; we should fix it once we finish the bisect. CIK GPUs kind of work with the amdgpu kernel driver loaded. SI GPUs are nowhere yet.

Here's a screenshot:
Live migrating Btrfs from RAID 5/6 to RAID 10

Recently it was discovered that the RAID 5/6 implementation in Btrfs is completely broken, due to the fact that it miscalculates parity (which is rather important in RAID 5 and RAID 6).

So what to do with an existing setup that’s running native Btrfs RAID 5/6?

Well, fortunately, this issue doesn’t affect non-parity based RAID levels such as 1 and 0 (and combinations thereof) and it also doesn’t affect a Btrfs filesystem that’s sitting on top of a standard Linux Software RAID (md) device.

So if down-time isn’t a problem, we could re-create the RAID 5/6 array using md and put Btrfs back on top and restore our data… or, thanks to Btrfs itself, we can live migrate it to RAID 10!

A few caveats, though. With RAID 10, space efficiency drops to 50% of your total drive capacity, no matter how many drives you have (because everything is mirrored). By comparison, with RAID 5 you lose a single drive’s worth of space, and with RAID 6 two, no matter how many drives you have.

This is important to note, because a RAID 5 setup with 4 drives that is using more than 2/3rds of the total space will be too big to fit on RAID 10. Btrfs also needs space for System, Metadata and the GlobalReserve, so I can’t say for sure how much free space you will need for the migration, but expect to need considerably more than 50% free. In such cases, you may need to add more drives to the Btrfs array before the migration begins.

So, you will need:

  • At least 4 drives
  • An even number of drives (unless you keep one as a spare)
  • Data in use totalling much less than 50% of the combined capacity of all drives (usable RAID 10 space is total capacity / 2)
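To sanity-check that last point before you start, you can compare the usable capacity of each profile. A rough back-of-the-envelope sketch in Python (equal-sized drives assumed; real Btrfs allocation also carves out System, Metadata and GlobalReserve space, so treat the numbers as optimistic):

```python
# Usable capacity per RAID profile, assuming equal-sized drives.
# RAID 10 mirrors everything (half the total); RAID 5/6 lose the
# equivalent of one/two drives to parity.
def usable_tib(num_drives, drive_tib, profile):
    total = num_drives * drive_tib
    if profile == 'raid10':
        return total / 2
    if profile == 'raid5':
        return total - drive_tib
    if profile == 'raid6':
        return total - 2 * drive_tib
    raise ValueError('unknown profile: ' + profile)

# Example: four 2 TiB drives.
print(usable_tib(4, 2, 'raid5'))   # raid5: 6 TiB usable today
print(usable_tib(4, 2, 'raid10'))  # raid10: 4 TiB usable after conversion
```

In this example, data above 4 TiB (two-thirds of the 6 TiB RAID 5 capacity) would no longer fit after the conversion, which is exactly the caveat above.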

Of course, you’ll have a good, tested, reliable backup or two before you start this. Right? Good.

Plug any new disks in and partition or luksFormat them if necessary. We will assume your new drive is /dev/sdg, you’re using dm-crypt and that Btrfs is mounted at /mnt. Substitute these for your actual settings.
cryptsetup luksFormat /dev/sdg
UUID="$(cryptsetup luksUUID /dev/sdg)"
echo "luks-${UUID} UUID=${UUID} none" >> /etc/crypttab
cryptsetup luksOpen /dev/sdg luks-${UUID}
btrfs device add /dev/mapper/luks-${UUID} /mnt

The migration is going to take a long time, so best to run this in a tmux or screen session.

time btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt

After this completes, check that everything has been migrated to RAID 10.
btrfs fi df /mnt
Data, RAID10: total=2.19TiB, used=2.18TiB
System, RAID10: total=96.00MiB, used=240.00KiB
Metadata, RAID10: total=7.22GiB, used=5.40GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

If you still see some RAID 5/6 entries, run the same conversion command again and re-check that everything has migrated successfully.

For good measure, let’s rebalance again, this time without the conversion filters (this will take a while).
time btrfs balance start --full-balance /mnt

Now we can defragment everything.
time btrfs filesystem defragment /mnt # this defrags the metadata
time btrfs filesystem defragment -r /mnt # this defrags data

Wayland by default in Fedora 25?

I’ve noticed various reports that Fedora has decided to switch to Wayland by default in Fedora 25. It’s true that the alpha release will default to Wayland, but these reports have misunderstood an authorization from FESCo to proceed with the change as a final decision. This authorization corrects a bureaucratic mistake: FESCo previously authorized the change for Fedora 24, but the Workstation working group decided to defer the change to Fedora 25, then forgot to request authorization again for Fedora 25 as required. An objection was raised on the grounds that the proper change procedure was not followed, so to sidestep this objection we decided to request permission again from FESCo, which granted the request. Authorization to proceed with the change does not mean the decision to proceed has been made; the change could still be deferred, just as it was for Fedora 24.

Wayland by default for Fedora 25 is certainly the goal, and based on the current quality of our Wayland desktop, there’s a very good chance it will be reached. I expect the call will be made very soon. Stay tuned.

Priorities in security
I read this tweet a couple of weeks ago:

and it got me thinking. Security research is often derided as unnecessary stunt hacking, proving insecurity in things that are sufficiently niche or in ways that involve sufficient effort that the realistic probability of any individual being targeted is near zero. Fixing these issues is basically defending you against nation states (who (a) probably don't care, and (b) will probably just find some other way) and, uh, security researchers (who (a) probably don't care, and (b) see (a)).

Unfortunately, this may be insufficient. As basically anyone who's spent any time anywhere near the security industry will testify, many security researchers are not the nicest people. Some of them will end up as abusive partners, and they'll have both the ability and desire to keep track of their partners and ex-partners. As designers and implementers, we owe it to these people to make software as secure as we can rather than assuming that a certain level of adversary is unstoppable. "Can a state-level actor break this" may be something we can legitimately write off. "Can a security expert continue reading their ex-partner's email" shouldn't be.


August 25, 2016

Parsing milestoned XML in Python

I am trying to write a tool in Python (using Python 3.4, so that I can rely on the latest standard library without needing any external libraries on Windows) for some manipulation of the source code of Bible texts.

Let me first explain what milestoned XML is, because many normal Python programmers dealing with normal XML documents may not be familiar with it. There is a problem with using XML markup for documents with a complicated structure; one rather complete article on this topic is DeRose (2016).

Briefly [1], the problem in many areas (especially in document processing) is that multiple possible hierarchies overlap each other (e.g., in Bibles there are divisions of text which run across verse and chapter boundaries, sometimes terminating in the middle of a verse; many Bibles, especially English ones, mark Jesus’ sayings with a special element, and of course this can span several verses, etc.). One way to overcome the obvious problem that XML doesn’t allow overlapping elements is to use milestones. So, for example, a book of the Bible could be divided not like

<chapter n="1">
  <verse n="1">text of verse 1.1</verse>
  ...
</chapter>

but just putting milestones in the text, i.e.:

<chapter n="1" />
<verse sID="ID1.1" />text of verse 1.1
<verse eID="ID1.1" /> ....

So, in my case the part of the document may look like

text text
textB textB <czap> textC textC <verse/> textD textD </czap>

And I would like to get from some kind of iterator this series of outputs:

[(1, 1, "text text", ['text text']),
 (1, 2, "textB textB textC textC",
  ['<verse/>', 'textB textB', '<czap>', 'textC textC']),
 (1, 3, "textD textD", ['<verse/>', 'textD textD', '</czap>'])]

(the first two numbers should be number of the chapter and verse respectively).

My first attempt was in its core this iterator:

def __iter__(self) -> Tuple[int, int, str]:
    """Iterate through the first level elements.

    NOTE: this iterator assumes that all milestoned elements are on the
    first level of depth. If this assumption fails, it might be necessary
    to rewrite this function (or perhaps the ``text`` method) to be
    recursive.
    """
    collected = None

    for child in self.root:
        if child.tag in ['titulek']:
            continue
        if child.tag in ['kap', 'vers']:
            if collected and collected.strip():
                yield self.cur_chapter, self.cur_verse, collected.strip()
            if child.tag == 'kap':
                self.cur_chapter = int(child.get('n'))
            elif child.tag == 'vers':
                self.cur_verse = int(child.get('n'))
            collected = child.tail or ''
        elif collected is not None:
            if child.text is not None:
                collected += child.text
            for sub_child in child:
                collected += self._recursive_get_text(sub_child)
            if child.tail is not None:
                collected += child.tail
(self.root is the product of ElementTree.parse(file_name).getroot()). The problem with this code lies in the note: when the <verse/> element is inside a <czap> element, it is ignored. So, obviously, we have to make our iterator recursive. My first idea was to make this script parse and regenerate the XML:

#!/usr/bin/env python3
from typing import List, Tuple
from xml.etree import ElementTree as ET

def start_element(elem: ET.Element) -> str:
    outx = ['<{}'.format(elem.tag)]
    for attr, attval in elem.items():
        outx.append(' {}="{}"'.format(attr, attval))
    return ''.join(outx)

def recursive_parse(elem: ET.Element) -> Tuple[List[str], str]:
    col_xml = []
    col_txt = ''

    if elem.text is None and len(elem) == 0:
        # an empty (milestone) element
        col_xml.append(start_element(elem) + '/>')
    else:
        col_xml.extend([start_element(elem) + '>', elem.text or ''])
        col_txt += elem.text or ''
        for subch in elem:
            subch_xml, subch_text = recursive_parse(subch)
            col_xml.extend(subch_xml)
            col_txt += subch_text
        col_xml.append('</{}>'.format(elem.tag))

    if elem.tail is not None:
        col_xml.append(elem.tail)
        col_txt += elem.tail

    return col_xml, col_txt

if __name__ == '__main__':
    xml_file = ET.parse('tests/data/Mat-old.xml')

    collected_XML, collected_TEXT = recursive_parse(xml_file.getroot())
    with open('test.xml', 'w', encoding='utf8', newline='\r\n') as outf:
        # encoding='unicode' gives us a str we can print to the file
        print(ET.tostring(ET.fromstringlist(collected_XML),
                          encoding='unicode'), file=outf)

    with open('test.txt', 'w', encoding='utf8', newline='\r\n') as outf:
        print(collected_TEXT, file=outf)

This works correctly in the sense that the generated file test.xml is identical to the original XML file (after reformatting both files with tidy -i -xml -utf8). However, it is not an iterator, so I would like to somehow combine the virtues of both snippets of code into one. Obviously, the problem is that return in my ideal code would have to serve two purposes: once it should actually yield a nicely formatted result from the iterator, and a second time it should just provide the content of the inner elements (or not, if the inner element contains a <verse/> element). In my ideal world, recursive_parse() would function as an iterator capable of something like this:

if __name__ == '__main__':
    xml_file = ET.parse('tests/data/Mat-old.xml')
    parser = ET.XMLParser(target=ET.TreeBuilder())

    with open('test.txt', 'w', newline='\r\n') as out_txt, \
            open('test.xml', 'w', newline='\r\n') as out_xml:
        for ch, v, verse_txt, verse_xml in recursive_parse(xml_file):
            print(verse_txt, file=out_txt)
            for fragment in verse_xml:
                parser.feed(fragment)
            # or directly parser.feed(verse_xml)
            # if verse_xml is not a list

        print(ET.tostring(parser.close(), encoding='unicode'),
              file=out_xml)
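For what it’s worth, the two jobs that return has to do here map naturally onto Python 3.3+ generator delegation: yield from forwards every yield to the outermost consumer and hands the generator’s return value back to the recursive caller. A minimal sketch of just that mechanism (toy tags, none of the chapter/verse bookkeeping from the snippets above):

```python
from xml.etree import ElementTree as ET

def walk(elem, collected):
    """Yield accumulated text at each <verse/> milestone; return the rest.

    The ``return`` value travels back to the recursive caller through
    ``yield from``, while each ``yield`` bubbles up to the outermost loop.
    """
    if elem.text:
        collected += elem.text
    for child in elem:
        if child.tag == 'verse':
            yield collected              # milestone reached: emit a chunk
            collected = ''
        else:
            collected = yield from walk(child, collected)
        if child.tail:
            collected += child.tail
    return collected

def iter_verses(root):
    leftover = yield from walk(root, '')
    if leftover.strip():
        yield leftover                   # text after the last milestone

root = ET.fromstring(
    '<r>text text <verse/>textB <czap>textC <verse/>textD </czap></r>')
print(list(iter_verses(root)))
# → ['text text ', 'textB textC ', 'textD ']
```

Note how the <verse/> inside <czap> still splits the stream correctly, because the recursive call can both yield upward and return its leftover text to its caller.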

So, here is my first attempt to rewrite the iterator (so far without the XML-collecting part):

def __iter__(self) -> Tuple[CollectedInfo, str]:
    """Iterate through the first level elements."""
    cur_chapter = 0
    cur_verse = 0
    collected_txt = ''
    # The collected XML is NOT directly convertible into Element objects;
    # treat it more like a list of SAX-like events. It can later be
    # reassembled with xml.etree.ElementTree.fromstringlist(sequence),
    # which parses an XML document from a sequence of string fragments
    # and returns an Element instance, e.g.:
    #     sequence = ["<html><body>", "text</bo", "dy></html>"]
    #     element = ET.fromstringlist(sequence)
    # FIXME: also collect the XML fragments
    # collected_xml = None
    for child in self.root:
        if child.tag in ['titulek']:
            collected_txt += '\n{}\n'.format(child.text)
            collected_txt += child.tail or ''
        if child.tag in ['kap', 'vers']:
            if collected_txt and collected_txt.strip():
                yield CollectedInfo(cur_chapter, cur_verse,
                                    re.sub(r'[\s\n]+', ' ', collected_txt,
                                           flags=re.DOTALL).strip()), \
                    child.tail or ''

            if child.tag == 'kap':
                cur_chapter = int(child.get('n'))
            elif child.tag == 'vers':
                cur_verse = int(child.get('n'))
            collected_txt += child.text or ''

            for sub_child in child:
                for sub_info, sub_tail in MilestonedElement(sub_child):
                    if sub_info.verse == 0 or sub_info.chap == 0:
                        collected_txt += sub_info.text + sub_tail
                        # FIXME what happens if sub_element contains
                        # multiple <verse/> elements?
                        yield CollectedInfo(
                            sub_info.chap, sub_info.verse,
                            collected_txt + sub_info.text), ''
                        collected_txt = sub_tail

            collected_txt += child.tail or ''

    yield CollectedInfo(0, 0, collected_txt), ''

Am I going the right way, or did I still not get it?

[1] From the discussion of the topic on the XSL list.
DeRose, Steven. 2016. “Proceedings of Extreme Markup Languages®.” Accessed August 25.
GUADEC 2016 Notes

I’m back from GUADEC and wanted to share a few thoughts on the conference itself and the post-conference hackfest days.

All the talks including the opening and closing sessions and the GNOME Foundation AGM are available online. Big thanks goes to the organization team for making this possible.

The program offered one doc talk (Documentation: state of the union by Kat) and several unconference sessions; the sessions were something new compared to previous years.

In one of the sessions, Shaun gave a presentation on pintail, a site builder that lets you publish your documents from sources in Mallard, Ducktype, DocBook, AsciiDoc (the latter coming soon), and so on, using yelp-xsl, a package also used in GNOME Docs. We are hoping to see pintail deployed in the Fedora Documentation Project soon, to replace the aging Publican-based documentation website.

As for the lightning talk session, I was really impressed by Eleanor Blandford and her presentation skills, given the fact that she is only 10! Future GNOME hacker?

It was great to see GUADEC turning into a family-friendly conference. Several families attended and there was a children’s corner available right at the venue.

During the hackfest days, I mostly worked on squashing various documentation bugs and adding more Mallard test tokens into gnome-getting-started-docs.

Distributors: If your distro is shipping Firefox (and not Epiphany) as the default browser, please file a bug similar to #770013 so that we can add test tokens for your distro into the getting-started page on web browsing.

Jakub and I also quickly reviewed the current SVGs and remaining getting-started videos. Updates are coming in the next release! With Shaun, we had a look at pintail and its support for building the Fedora DocBook-based guides.

As this picture says, GUADEC 2016 was well-organized and successful, so yes, thank you again, organizers!


New badge: Mad Science (Kernel Tester IX)!
You completed 1000 runs of the kernel regression test suite.

New badge: Mad Science (Kernel Tester VIII)!
You completed 500 runs of the kernel regression test suite.

New badge: Mad Science (Kernel Tester VII)!
You completed 200 runs of the kernel regression test suite.

New badge: Science (Kernel Tester VI)!
You completed 100 runs of the kernel regression test suite.
Achievement get: Rainbow!

Earlier this month, I received the Rainbow badge in Fedora Badges. Rainbow is the fifth badge in a series for receiving “karma cookies” from others in IRC. Every time I receive a new badge in this series, I like to reflect back on the past and where my Fedora journey has taken me since October 2015.

What is a rainbow cookie?

If you’re not aware already, Fedora has a unique system of rewarding positive contributions in the community through karma cookies.

Karma cookies are a unique way of rewarding positive interactions and actions in Fedora with a friendly, quantifiable number. In any official Fedora IRC channel, Fedora contributors can give any other contributor Karma by adding ‘++’ after their nick (e.g. mattdm++ or puiterwijk++).

These “positive” karma cookies are distributed by zodbot, Fedora’s IRC bot. A contributor can give another contributor one “karma cookie” per release cycle. For reaching certain karma-cookie milestones, contributors are awarded badges via Fedora Badges. Fedora uses this as a method to promote positive behavior in the community as well as to help support and build community in Fedora, reflecting the “Friends” part of the Four Foundations of Fedora. I love the concept of karma cookies, and I think it’s a small but great way for us to share our appreciation for other contributors in the project.

The Rainbow badge is awarded after receiving 100 karma cookies across all Fedora releases.

Thank you!

I spent a lot of time giving thanks and appreciation in my Flock 2016 write-up, so I think it is better to point there for a longer, more verbose expression of gratitude.


I am still appreciative and thankful of all the people who have spared their time for helping get me started in Fedora. Sometimes, it’s hard to believe it hasn’t yet been a full year since my first contributions. The opportunities and friendships that being a member of the Fedora community have provided are irreplaceable. I hope that I am able to continue making an impact on Fedora far into the future and share some cookies with some other contributors. And as always, I hope to pay forward the kindness and guidance that others have bestowed to me towards others who are entering our project.

Thanks to all the mentors both past and present, friends, and fellow community members who have participated in my journey so far.

The post Achievement get: Rainbow! appeared first on Justin W. Flory's Blog.

Modularity Infrastructure Design

Co-authored by Courtney Pacheco and Ralph Bean

Note: This article is a follow-up to Introduction to Modularity.


The purpose of our Modularity initiative is to support the building, maintaining, and shipping of modular things. So, in order to ensure these three requirements are met, we need to design a framework for building and composing the distribution.

In terms of the framework, in general, we are concerned about the possibility of creating an exponential number of component combinations with independent lifecycles. That is, when the number of component combinations becomes too large, we will not be able to manage them. So that we don’t accidentally make our lives worse, we must limit the number of supported modules with a policy and provide infrastructure automation to reduce the amount of manual work required.

Submitting Packages vs. Submitting Modules

Normally, a packager identifies an upstream project to package, writes or generates a spec file, builds it locally in mock or as a scratch build in Koji, evaluates and decides whether to proceed with the packaging, and then submits the package review in rhbz. With modules, the packager simply writes a .yaml file (which defines the module’s metadata) and then submits it for review in rhbz. This approach to defining modules is designed to be as simple as possible, to minimize the complexity of the process.

Updating Packages vs. Updating Modules

Updating packages can be very complex. With modules, maintainers can easily update their module’s metadata definition (via the .yaml file), commit and push it to dist-git (which houses the definitions of modules), then kick off a “module build” in Koji. Like the submission process for modules, this approach is intentionally designed to be as simple as possible (and as similar to the existing packaging process as possible).

Infrastructure Proposal

Ralph Bean created a very detailed, informative overview of the Infrastructure proposal:

Two important elements in this chart are captured in the box labeled Orchestrator (i.e., ad-hoc pungi).  One part of that is říďa, the module-build service, which is responsible for setting up tags in Koji and rebuilding the components of a module from source.  (Note: We demoed a working prototype of říďa at Flock 2016.)  The other system here is a continuous rebuild/compose system, which makes heavy use of the Product Definition Center (PDC) to know what needs to be rebuilt. That is, when a component in a module changes, the continuous compose system is responsible for asking what depends on that component, then scheduling rebuilds of those modules directly in říďa.

Once those module rebuilds have completed and been validated by CI, the continuous compose system will be triggered again to schedule rebuilds of the next tier of dependencies, and the cycle will be repeated until the tree of dependencies is fully rebuilt.
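The tier-by-tier rebuild logic described above is essentially a breadth-first walk over the module dependency graph. A toy sketch (the module names and the dependency table are made up for illustration; the real system queries PDC for dependents and schedules the builds in říďa/Koji):

```python
# Hypothetical dependency data: module -> things it builds on.
DEPENDS_ON = {
    'base-runtime': ['glibc'],
    'server-stack': ['base-runtime', 'httpd'],
    'webapp': ['server-stack'],
}

def dependents(changed):
    """Modules that include `changed` and therefore need a rebuild."""
    return [mod for mod, deps in DEPENDS_ON.items() if changed in deps]

def rebuild_tiers(changed_component):
    """Yield successive tiers of modules to rebuild after a change,
    breadth-first, until the dependency tree is fully covered."""
    tier = dependents(changed_component)
    seen = set(tier)
    while tier:
        yield sorted(tier)  # in the real system: schedule builds in říďa
        next_tier = []
        for mod in tier:
            for dep in dependents(mod):
                if dep not in seen:
                    seen.add(dep)
                    next_tier.append(dep)
        tier = next_tier

print(list(rebuild_tiers('glibc')))
# → [['base-runtime'], ['server-stack'], ['webapp']]
```

Each yielded tier corresponds to one round of "rebuild, validate with CI, then trigger the next tier" in the description above.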

In the event that a rebuild fails, or if CI validation fails, maintainers will be notified in the usual ways (the Fedora notification service). A module maintainer could then respond by manually fixing their module and scheduling another module build (in koji), at which point the trio of systems would pick up where they left off and would complete the rebuild of subsequent tiers (stacks).

You can find more details about říďa here. You can also find more details about the other components in the proposal chart here.

How you can help Modularity

Lending a helping paw

Would you like to learn more? Would you like to help out? You can find us on #fedora-modularity on freenode. You can also join the weekly Modularity Working Group Meeting for the latest updates on Modularity.

Also, please do subscribe to for regular updates on our Modularity effort! In case you missed our email updates, you can find archives for this list here.

The post Modularity Infrastructure Design appeared first on Fedora Community Blog.

Quarter Century of Innovation – aka Happy Birthday Linux!


Happy Birthday, Linux! Thank you Linus for that post (and code) from a quarter of a century ago.

I distinctly remember coming across the post above on comp.os.minix while I was trying to figure out something called 386BSD. I had been following 386BSD development by Lynne Jolitz and William Jolitz back when I was in graduate school at OSU. I am not sure where I first heard about 386BSD, but it could have been in some newsgroup or in BYTE magazine (unfortunately I can’t find any references). Suffice it to say, the work on 386BSD was subsequently documented in Dr. Dobb’s Journal from around 1992. Fortunately, the good people at Dr. Dobb’s Journal have placed their entire contents on the Internet, and the first post on the 386BSD port is now online.

I was back in Singapore by then, working at CSA Research on building networking functionality for a software engineering project. The development team had access to a SCO Unix machine, but because we did not buy “client access licenses” (I think that is what they were called), we could have exactly 2 users: one on the console via X-Windows and the other via telnet. I was not going to suggest to management that we buy additional access rights (I was told it would cost S$1,500!), so instead I tried to find out why the 3rd and subsequent login requests were being rejected.

That’s when I discovered that SCO Unix was doing some form of access locking as part of the login process used by the built-in telnet daemon. I figured that if I could replace the telnet daemon with one that did not do the check, I could get as many people as we wanted telnetting into the system and using it.

To create a new telnet daemon, I needed the source code and then to compile it. SCO Unix never provided any source code. I managed, however, to get the source code to a telnet daemon (from I think although I could be wrong).

Remember that in those days there was no Internet access in Singapore (no TCP/IP access, anyway), and the only way to the Internet was via UUCP (and Bitnet at the universities). I used (an ftp-via-email service by Digital Equipment Corporation) to go out, pull in the code and send it to me via email in 64k uuencoded chunks. Slow, but hey, it worked and it worked well.

Once I got the code, the next challenge was to compile it. We did have the C compiler but for some reason, we did not have the needed crypto library to compile against. That was when I came across the incredible stupidity of labeling cryptography as a munition by the US Department of Commerce. Because of that, we, in Singapore, could not get to the crypto library.

After some checking around, I got to someone who happened to have a full blown SCO Unix system and had the crypto library in their system. I requested that they compile a telnet daemon without the crypto library enabled and to then send me the compiled binary.

After some to and fro via email, I finally received the compiled telnet daemon without the crypto linked in and replaced the telnetd on my SCO Unix machine. Voilà: everyone else on the office LAN could telnet in. The multi-user SCO machine was now really multi-user.

That experience pushed me to explore what I would need to do to make sure that both crypto code and the needed libraries are available to anyone, anywhere. The fact that 386BSD was a US-originated project meant that tying my kite to it would eventually discriminate against me, keeping me from the best of cryptography and, in turn, security and privacy. That was when Linus’ work on Linux became interesting to me.

The fact that this was done outside the US meant that it was not crippled by politics and other shortsighted rules and that if it worked well enough, it could be an interesting operating system.

I am glad that I did make that choice.

The very first Linux distribution I got was from Soft Landing Systems (SLS for short), which I had to get via the amazingly trusty service, which happily replied with dozens of 64K uuencoded emails.

What a thrill it was when I started getting those serialized uuencoded emails with the goodies in them. I don’t think I still have any of the 5.25″ diskettes onto which I had to put the uudecoded contents. I do remember selling complete sets of SLS diskettes (all 5.25″ ones) for $10 per box (in addition to the cost of the diskettes). I must have sold them to 10-15 people. Yes, I made money from free software, but it was for the labour and “expertise”.

Fast forward twenty five years to 2016, I have so many systems running Linux (TV, wireless access points, handphones, laptops, set-top boxes etc etc etc) that if I were asked to point to ONE thing that made and is still making a huge difference to all of us, I will point to Linux.

The impact of Linux on society cannot be accurately quantified; it is too pervasive. Linux is like water: it is everywhere, and that is the beauty of it. In choosing the GPLv2 license for Linux, Linus released a huge amount of value for all of humanity. He paid it forward.

It is hard to predict what the next 25 years will mean and how Linux will impact us all, but if the first 25 years are any hint, it cannot but be spectacular. What an amazing time to be alive.

Happy birthday, Linux. You’ve defined how we should be using and adopting technology. You’ve disrupted, and continue to disrupt, industries all over the place. You’ve helped define what it means to share ideas openly and freely. You’ve shown what happens when we collaborate and work together. Free and Open Source is a win-win for all, and Linux is the gold standard of that.

Linux (and Linus), you’ve done well. Thank you!

Go! Speed Racer Go!

I finally reached a point where I could start running the Go version of sm-photo-tool. I finished the option validation for the list command, and while testing it I noticed how much faster the Go version felt. Here are the Python and Go runs of the same command.

The Python version took 80 milliseconds to validate the parameters and print the error message. This is expected: you have to start the Python VM, and if the code isn’t precompiled it needs to be compiled first. In this test, the code was precompiled.

$ time sm-photo-tool list invalid blah
ERROR: valid options are ['album', 'galleries']

real	0m0.080s
user	0m0.059s
sys	0m0.021s

Ok, so how fast is the Go version? My guess was half the time, 40ms. I was way off. Try 6ms. SIX! That’s amazing for the exact same amount of work.

$ time ./sm-photo-tool list invalid blah
ERROR: valid options are [album galleries]

real	0m0.006s
user	0m0.001s
sys	0m0.005s
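Most of that 80ms gap is interpreter start-up, which is easy to measure on your own machine. A quick, unscientific sketch (it assumes a python3 binary on your PATH; the numbers will vary by machine):

```python
import subprocess
import time

# Time how long it takes to start the interpreter and do nothing at all.
start = time.perf_counter()
subprocess.run(['python3', '-c', 'pass'], check=True)
elapsed = time.perf_counter() - start
print('python3 -c pass took {:.1f} ms'.format(elapsed * 1000))
```

A compiled Go binary skips that start-up cost entirely, which is why the gap shows up even on a trivial argument-validation path.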

Some thoughts that I don’t want to have, regarding people getting shot

This post could be written by a lot of people who belong to a lot of groups. This post has been written by a lot of people who belong to a lot of groups, and you should find and read those things too. This just happens to be the post that I can write, about a group that I belong to also.

Trigger warnings: audism, racism, discussions of police-related violence/shooting, probably some other stuff.

A number of (hearing) friends from a bunch of my (different) social circles recently sent me — almost simultaneously — links to news stories about Deaf people getting killed by cops who couldn’t communicate with them.

This is nothing new. It’s been happening for ages. Someone with a gun gets scared and pulls the trigger, and someone else is dead. Maybe that person is Deaf. Maybe that person is Black. In any case, that person is now dead, and that’s not okay. (Maybe that person is both Deaf and Black, and we mention the second part but not the first. That’s disability erasure that, statistically, correlates highly with race; that’s also not okay.)

I’ve been deaf as long as I can remember, and I’ve known these stories happened for a long, long time. But this is the first time I’ve watched them from inside the conversations of a Deaf community — for some definition of “inside” that includes confused mainstreamed-oral youngsters like me who are struggling to learn ASL and figure out where they fit.

I’m a geek, a scholar, and an academic. My last long string of blog posts is part of a draft chapter on postmodernist philosophy as a theoretical language for describing maker/hacker/open-source culture within engineering education, and honestly… that’s what I’d rather write about. That’s what I’d rather think about. That’s what I’d rather sign about. Not people getting shot. A large portion of my Deaf friends are also geeks and scholars — older and more experienced than me, with tips on how to request ASL interpreting for doctoral defenses and faculty meetings, how to use FM units to teach class, how to navigate accessibility negotiations when your book wins awards and you get international speaking invitations. They are kind and brilliant and passionate and wonderful, and I love them, and I want to be one of them when I grow up.

And we are geeks when we talk about these deaths, too. Kind and brilliant and passionate and wonderful. And my heart bursts with gratitude that I know these people, because it’s such a thoughtful and complex discussion, from so many perspectives, drawing on so many historical, theoretical, personal, etc. threads… the narratives I love, the sorts of tricky complexity that brought me back to graduate school and sent me hurtling down years of studying intricate threads of thought so I could better appreciate the mysteries that people and their stories are.

And I can’t stop thinking that any of us — any of these kind and brilliant and passionate and wonderful geeks in the middle of these great and rather hopeful discussions about complex societal dynamics and how to improve them — we could be taken out by a single bullet from a cop who doesn’t know.

I’ve learned a lot of things about being a deaf woman of color in the past year. I’m lucky; I look like a “good” minority, a white-skinned Asian who can play to stereotypes of quiet submission — but even then. And I know lots of people who can’t. And one of the first things I learned was how to stop pretending to be hearing all the time — especially in any interaction involving someone with a badge or guns (airports, traffic stops, anything). This isn’t just because it’s exhausting to lipread, but because it can be dangerous to piss off someone who thinks you’re ignoring them out of malice or attitude rather than the truth that you simply didn’t hear them shouting.

I first learned this sort of thing in undergrad, when some of my engineering college friends were horrified by stories of some other student from some other engineering college arrested by panicky cops for carrying around an electronics project. I thought they were upset for the same reasons I was — because it was a stupendous overreaction on the part of the cops and the school. And it was. But they were also worried because — what if that had been me? And the cops had shouted stop, and turn around, and put down the device — and I didn’t hear them?

“It’s fine. I mean, I’m deaf, but I can talk — I would explain things. I would figure it out,” I told them at the time. “I’m smart, you know.” As if that would protect me, as if I could compensate that way — because I’d compensated that way for so much, for all my life.

But being smart doesn’t make you more hearing — to hear shouts from people pointing guns at you — or less dead, once they fire them. And being smart doesn’t spare you from assumptions people make because of how you’re navigating tradeoffs. If you’re a PhD who decides to go voice-off while getting through airport security because it means you’re less likely to get shot, you’re going to get treated like a very small and stupid child. Maybe not every time, and not by everyone, but enough that swallowing your pride becomes a normal part of flying. No written note, no typed message, no outward display of intelligence that I’ve been able to figure out has made someone recognize the intellectual identity I’m trying to communicate when they’ve already assumed it isn’t there.

And being smart doesn’t mean you can think your way out of other people’s assumptions and their ignorance and their inability to see who you are. And being smart isn’t what gives your life its value; being human does. (Being smart doesn’t make you more special than people who don’t rank as high on whatever flawed metric of smartness you or the world decide to use.) And being kind and brilliant and passionate and wonderful does not exempt you from being heartbroken when the world is broken, and afraid because it hurts you, and your friends, and people like you, and people like your friends, for a lot of different reasons that shouldn’t matter in the world, but do.

I wish I were more eloquent, but I can’t think about this too much and still do things like finish my doctoral dissertation this week. I wish I could speak to how this isn’t just about violence against Deaf and disabled people, how I’m not just speaking up right now because I happen to belong to those groups too — this breaks my heart when it’s Black people and queer people and Christian people and female people and trans people and… people. It’s mostly that I can speak a little bit more readily from inside groups I’m in, and that I have a little bit of time to vent this out right now, between writing a section on “postmodern narrative sensemaking as plural” and another on “narrative accruals as co-constructing communities of practice.”

Back to the world, I guess. Back to writing my stories of the gorgeousness and complexity and hope that always lives inside the world that wins my heart and breaks it all at the same time.

Summer Talks, PurpleEgg

I recently gave talks at Flock in Krakow and GUADEC in Karlsruhe:

Flock: What’s Fedora’s Alternative to vi httpd.conf Video Slides: PDF ODP
GUADEC: Reworking the desktop distribution Video Slides: PDF ODP

The topics were different but related: the Flock talk was about how to make things better for a developer using Fedora Workstation as their development workstation, while the GUADEC talk was about the work we are doing to move Fedora to a model where the OS is immutable and separate from applications. A shared idea of the two talks is that your workstation is not your development environment. Installing development tools, language runtimes, and header files as part of your base operating system implies that every project you are developing wants the same development environment, and that simply is not the case.

At both talks, I demoed a small project I’ve been working on with the codename PurpleEgg (I didn’t have that codename yet at Flock – the talk instead talks about “NewTerm” and “fedenv”.) PurpleEgg is about easily creating containerized environments dedicated to a project, and about integrating those projects into the desktop user interface in a natural, slick way.

The command line client to PurpleEgg is called pegg:

[otaylor@localhost ~]$ pegg create django mydjangosite
[otaylor@localhost ~]$ cd ~/Projects/mydjangosite
[otaylor@localhost mydjangosite]$  pegg shell
[[mydjangosite]]$ python runserver
August 24, 2016 - 19:11:36
Django version 1.9.8, using settings 'mydjangosite.settings'
Starting development server at
Quit the server with CONTROL-C.

The “pegg create” step did the following:

  • Created a directory ~/Projects/mydjangosite
  • Created a file pegg.yaml with the following contents:
base: fedora:24
- python3-virtualenv
- python3-django
  • Created a Docker image that is the Fedora 24 base image plus the specified packages
  • Created a venv/ directory in the specified directory and initialized a virtual environment there
  • Ran ‘django-admin startproject’ to create the standard Django project

pegg shell

  • Checked to see if the Docker image needed updating
  • Ran a bash prompt inside the Docker image with a customized prompt
  • Activated the virtual environment

The end result is that, without changing the configuration of the host machine at all, in a few simple commands we got to a place where we can work on a Django project just as it is documented upstream.

But with the PurpleEgg application installed, you get more: you get search results in the GNOME Activities Overview for your projects, and when you activate a search result, you see a window like:


We have a terminal interface specialized for our project:

  • We already have the pegg environment activated
  • New tabs also open within that environment
  • The prompt is uncluttered, with relevant information moved to the header bar
  • If the project is checked into Git, the header bar also tracks the Git branch

There’s a fair bit more that could be done: a GUI for creating and importing projects as in GNOME Builder, GUI integration for Vagrant and Docker, configuring frequently used commands in pegg.yaml, etc.

At the most basic, the idea is that server-side development is terminal-centric and also somewhat specialized – different languages and frameworks have different ways of doing things. PurpleEgg embraces working like that, but adds just enough conventions so that we can make things better for the developer – just because the developer wants a terminal doesn’t mean that all we can give them is a big pile of terminals.

PurpleEgg codedump is here. Not warrantied to be fit for any purpose.

F24-20160823 updated Live isos

New Kernel means new set of updated lives.

I am happy to release the F24-20160823 updated live isos.

As always, these respins can be found at

Using the updated isos will save about 500M of updates after install. YMMV. (Gold MATE install updates as of 20160823 are 583M.)

I would like to thank the community and the seeders for their dedication to this project.

August 24, 2016

NetworkManager 1.4: with better privacy and easier to use

After we released version 1.0 of NetworkManager, it took us sixteen months to reach the 1.2 milestone. This means that it took over a year for some newly added features to reach the user base. Now we are releasing the next major release after just four months.

<figure class="wp-caption aligncenter" id="attachment_113" style="width: 600px"><figcaption class="wp-caption-text">Guglielmo Marconi, checking out NetworkManager 1.4 Wi-Fi MAC address changing</figcaption></figure>

This improved release cadence was made possible by the excellent work of Red Hat’s Quality Engineering team during the development cycle. Their thorough testing gave us confidence in the new code and dramatically lowered the number of bugs late in the release cycle.

Despite a somewhat shorter release cycle the new version of NetworkManager, while still API and ABI compatible with previous versions, is by no means short on improvements. Let’s take a detailed look!

What’s new?

It is now possible to randomize the MAC address of Ethernet devices to mitigate the possibility of tracking. Users can choose between different policies: use a completely random address, or just use different addresses on different networks. For Wi-Fi devices, the same randomization modes are now supported, and they no longer require support from wpa_supplicant.
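For illustration — the connection name here is made up, and the exact property values are worth checking against the nm-settings documentation for your version:

```
# Fully random MAC address every time this Wi-Fi connection activates
$ nmcli connection modify "Cafe Wi-Fi" wifi.cloned-mac-address random

# Or a stable address that differs per network but persists across activations
$ nmcli connection modify "Cafe Wi-Fi" wifi.cloned-mac-address stable
```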

An API for configuration snapshots that automatically roll back after a timeout has been added. Remote network configuration tools (think Cockpit) are expected to use this to avoid situations where a mistake in the configuration renders the remote host unreachable.

A new “dns-priority” property of ipv4 and ipv6 settings can be used to tweak the order of servers in resolv.conf. This will make things easier for users who often use multiple active connections.
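As a sketch (the connection names are invented), a VPN's DNS servers could be ordered ahead of the LAN's like this — lower values sort earlier in resolv.conf:

```
$ nmcli connection modify my-vpn ipv4.dns-priority 10
$ nmcli connection modify office-ethernet ipv4.dns-priority 100
```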

Following some upstream kernel changes, IPv6 tokenized interface identifiers can be configured. This makes it possible for the system operators to use a router-assigned prefix while still using some well-known host part of the address.

nmcli got some new features too, motivated by the feedback we received in the NetworkManager user survey. Many users will surely welcome that the “connection add” syntax is now consistent with “connection modify”. Those of you who’re used to typing “ifconfig” to get the big picture can now get a quick overview of devices and their configuration by invoking “nmcli” without parameters.

Certain parts of the device configuration, such as IPv4 and IPv6 method or addressing, can now be updated without completely restarting the device configuration. nmcli has been extended with “device modify” and “device reapply” subcommands that build on this functionality.
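Sketched usage of the new subcommands (the device name is illustrative):

```
# Change one runtime property directly, without a full reconnect
$ nmcli device modify eth0 ipv4.method auto

# Or edit the connection profile first, then push it to the live device
$ nmcli device reapply eth0
```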

Canonical contributed support for oFono in place of ModemManager to support mobile broadband connections. The Ubuntu phone has been using this for some time. We’re happy to see it merged in mainline NetworkManager!

Canonical also contributed patches that expose per-interface RX/TX counters of transferred bytes on D-Bus. With this, client applications can monitor the bandwidth.

VPN plugins are no longer released together with NetworkManager. It is intended that for most plugins no 1.4.0 release exists because the newest version of the plugin works equally well with any NetworkManager version 1.2.0 or newer.

What’s next?

There are a couple of features that were worked on but didn’t quite make the 1.4.0 release. Very likely to be included in the next update are improved proxy support and MACsec.

The efforts of Atul Anand, a Google Summer of Code participant, have turned out to be fruitful. He has done an excellent job at improving support for proxy (auto-)configuration, and his work will be merged very soon.

The Linux kernel 4.6 includes support for a Layer 2 encryption known as MACsec and we’re almost done implementing support for configuring it with NetworkManager.

Happy Networking!

Thanks to Dan Williams, Thomas Haller, Francesco Giudici, Beniamino Galvani, Eric Garver and Sabrina Dubroca for reviewing this article, adding their favourite features and fixing many silly mistakes!

UDP Failures and RNGs

Upgrades to a new kernel inevitably break something. According to a bug report, 4.7 managed to break a Ruby unit test: a test sending an empty UDP packet was now timing out. I've never worked with Ruby much, but the test itself was easy enough to get a general idea of what was going on. I grabbed a copy of the Ruby source on a rawhide machine and did a mockbuild

# fedpkg co -a ruby
# cd ruby
# dnf builddep ruby.spec
# fedpkg mockbuild

and saw that it was failing in the same way as the report. This is a very large amount of work for what amounts to just one failure and I still wanted to rule out other problems besides the kernel. Fortunately, it could easily be narrowed down to a smaller test case:

$ cat weird_ruby.rb
#!/usr/bin/env ruby

begin
  require "socket"
  require "io/nonblock"
  require "io/wait"
rescue LoadError
end

def test_udp_recvfrom_nonblock
  u1 =
  u2 =
  u1.bind("", 0)
  u2.send("", 0, u1.getsockname) [u1]
ensure
  u1.close if u1
  u2.close if u2
end

test_udp_recvfrom_nonblock

Ruby devs are probably cringing at the style, but the important part is that this test case worked on my 4.6-based machine and failed on the 4.7-based machine. This was a great candidate for bisection. Because I'm lazy and don't like rebooting machines, I used my buildroot setup (I had to recompile my buildroot image to put Ruby in it).

The first bisect in buildroot was useless: it gave me 4.6 as the first bad commit, which I knew to be false. I tried 4.5. Still bad. I knew the test case was definitely passing on other environments in 4.6. strace is (still) my favorite userspace tool for 'opaque program, what are you doing', so I decided to give it a try:

brk(0x2560000)                          = 0x2560000
brk(0x255c000)                          = 0x255c000
brk(0x2580000)                          = 0x2580000
brk(0x257c000)                          = 0x257c000
brk(0x25a0000)                          = 0x25a0000
brk(0x259c000)                          = 0x259c000
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, {0, 1882167}) = 0
brk(0x25c1000)                          = 0x25c1000

getrandom waitwhathuh?

# cat /proc/1084/stack
[<ffffffff813f3ac8>] SyS_getrandom+0xd8/0x140
[<ffffffff818e99f2>] entry_SYSCALL_64_fastpath+0x1a/0xa4
[<ffffffffffffffff>] 0xffffffffffffffff

The corresponding code showed it waiting on an event for nonblocking_pool.initialized. So the Ruby environment was internally trying to get a random number but the nonblocking pool wasn't yet initialized. The buildroot environment I run is very minimalist, there isn't much in the way of devices or drivers. The buildroot init system writes a set of random data to initialize the RNG but the RNG code doesn't actually update the calculation of entropy. This means that another call to add entropy must update the calculation. Usually the system uses things like device interrupts or input timing to generate entropy. In a minimal system like buildroot, most of these aren't present. The most reliable source of entropy is typing on the keyboard and, yes, if I keyboard enough the pool will eventually initialize. I should probably put a big blinking HACK sign here before I state my solution. In the buildroot init file, right before the write to the RNG with random data I did

for i in `seq 0 1024`; do
	/bin/true   # any short-lived process will do; we just want the forks
done

The idea was to spawn a bunch of different processes in hopes that it would give a bump of entropy to the system. It did indeed work. This is not endorsed as a solution for anything other than 'let me run my Ruby program plx'. I'm certain there are better ideas out there. It's worth noting this limitation is no longer present on newer kernels: there has been some reworking of the random number generator to mitigate problems like these.

Once the Ruby problem was fixed, I could actually bisect and found a very promising and relevant commit. Reporting a kernel bug with a Ruby program still leaves a lot open to wonder about, especially after hitting unexpected problems with the RNG. When submitting kernel problems, it's best to submit C when possible. So with the help of strace (again!) to see what exactly Ruby was doing, I ported the Ruby code to some roughly corresponding C code:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <netdb.h>
#include <errno.h>

int main()
    int fd1, fd2;
    struct sockaddr_in addr1;
    unsigned int len1;
    int ret;
    fd_set rfds;

    /* Two UDP sockets, mirroring the Ruby test's pair */
    fd1 = socket(AF_INET, SOCK_DGRAM, 0);
    fd2 = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd1 < 0 || fd2 < 0) {
        printf("socket fail");
        return 1;

    len1 = sizeof(addr1);

    memset(&addr1, 0, sizeof(addr1));
    addr1.sin_family = AF_INET;
    addr1.sin_addr.s_addr = inet_addr("");
    addr1.sin_port = htons(0);
    ret = bind(fd1, (struct sockaddr *)&addr1, len1);
    if (ret < 0) {
        printf("bind fail %d\n", errno);
        return 1;

    ret = getsockname(fd1, (struct sockaddr *)&addr1, &len1);
    if (ret < 0) {
        printf("getsockname failed %d\n", errno);
        return 1;

    /* Send the empty packet and wait for it to become readable */
    ret = sendto(fd2, "", 0, 0, (struct sockaddr *)&addr1, len1);
    if (ret < 0) {
        printf("sendto failed %d\n", errno);
        return 1;

    FD_ZERO(&rfds);
    FD_SET(fd1, &rfds);
    select(fd1+1, &rfds, NULL, NULL, NULL);

    return 0;
Again, not the prettiest code, but it showed the problem in the same way as the Ruby code. Once I submitted the bug, the turnaround time was very fast and the fix got merged. There was still some discussion about whether the test case is valid, but it sounds like it is supposed to work.

Lessons here: reduce your test cases, bisection is still fantastic, and hacks are what you get for being lazy.

bitmath-1.3.1 released

bitmath is a Python module I wrote which simplifies many facets of interacting with file sizes in various units as python objects. A few weeks ago version 1.3.1 was released with a few small updates.


  • New function parse_string_unsafe: this new function accepts inputs using non-standard prefix units, such as single-letter or mis-capitalized units. For example, parse_string will not accept a short unit like ‘100k‘, whereas parse_string_unsafe will gladly accept it.


  • Documentation Refresh: The project documentation has been thoroughly reviewed and refreshed.

Several broken, moved, or redirecting links have been fixed. Wording and examples are more consistent. The documentation also lands correctly when installed via package.

Getting bitmath-1.3.1

bitmath-1.3.1 is available through several installation channels:

  • Fedora 23 and newer repositories
  • EPEL 6 and 7 repositories
  • PyPI

Ubuntu builds have not been prepared yet due to issues I’ve been having with Launchpad and new package versions.

Communication Anti-Patterns
Let’s get this out of the way: Yes, I’m old and grumpy. I have more than a few “get off my lawn!” moments. But sometimes… sometimes, they’re justified. Especially when confronted with some of the common communication anti-patterns I run into day after day when working with distributed communities/workers. Here’s a few things you shouldn’t […]
Autocloud: What's new?

Autocloud was released during the Fedora 23 cycle as a part of the Two Week Atomic Process.

Previously, it listened to fedmsg for successful Koji builds. Whenever there was a new message, the AutocloudConsumer queued it for processing. The Autocloud job service then listened to the queue, downloaded the images and ran the tests using Tunir. A more detailed post about its release can be read here.

During the Fedora 24 cycle things changed. There was a change on how the Fedora composes are built. Thanks to adamw for writing a detailed blogpost on what, why and how things changed.

With this change, Autocloud now listens for the compose builds over fedmsg, the topic being “”. It checks for messages with the status FINISHED and FINISHED_INCOMPLETE.

After the filtering, it gets the Cloud images built during that particular compose using a tool called fedfind. Its job here is to parse the metadata of the compose and pick out the Cloud images. These images are then queued for both the libvirt and vbox boxes. The Autocloud job service then downloads the images and runs the tests using Tunir.

Changes in the Autocloud fedmsg messages

Earlier the messages with the following topics were sent

Now, along with the fedmsg messages for the status of the image tests, Autocloud also sends messages for the status of a particular compose.

The compose_id field was added to the autocloud.image.* messages

Changes in the UI

  • A page was added to list all the composes. It gives an overview of each compose: whether it’s still running, the number of tests passed, etc.
  • The jobs page lists all the test data as before. We added filtering to the page, so you can filter the jobs based on various params.
  • You need to agree that the jobs output page looks better than before. Rather than showing a big dump of text, the output is now properly formatted. You can now reference each line separately.

Right now, we are planning to work on testing the images uploaded via fedimg in Autocloud. Does the project look interesting, and are you planning to contribute? Ping us in #fedora-apps on Freenode.


I came back from Karlsruhe last week, where GUADEC 2016 took place.

It was a wonderful event. Even though it was only my second GUADEC, I felt at home in this community, meeting with old and new friends.

The talks were excellent, but a few really resonated with me:

Of course the most interesting discussions are sometimes the ones you have outside of the planned presentations, and probably one of the best for me was when Stef Walter showed me the amazing work they've done on the Cockpit project, especially since I should be able to reuse a lot of that for my work with Bibliothèques Sans Frontières.

Other than that I spent quite some time hacking on a few things:

Overall, I spent most of the week on flatpak-related issues, which is a technology I'm growing to love more and more, and which could revolutionize the way we distribute applications for the Linux desktop.

And all those fixes allowed me to spend most of the trip back home playing in GNOME Games, running from a flatpak build of git master, with a PS3 gamepad. Obviously that's the only reason I worked on all of this! 😜

All in all, this GUADEC was a huge success, thanks to the efforts of the organization team.

Looking forward to the next one in Manchester!

Getting S3 Statistics using S3stat

I’ve been using Amazon S3 as a CDN for the LVFS metadata for a few weeks now. It’s been working really well and we’ve shifted a huge number of files in that time already. One thing that made me very anxious was the bill that I was going to get sent by Amazon, as it’s kinda hard to work out the total when you’re serving cough millions of small files rather than a few large files to a few people. I also needed to keep track of which files were being downloaded for various reasons and the Amazon tools make this needlessly tricky.

I signed up for the free trial of S3stat and so far I’ve been pleasantly surprised. It seems to do a really good job of graphing the spend per day and also allowing me to drill down into any areas that need attention, e.g. looking at the list of 404 codes various people are causing. It was fairly easy to set up, although did take a couple of days to start processing logs (which is all explained in the set up). Amazon really should be providing something similar.

Screenshot from 2016-08-24 11-29-51

For people providing less than 200,000 hits per day it’s only $10, which seems pretty reasonable. For my use case (bazillions of small files) it rises to a little-harder-to-justify $50/month.

I can’t justify the $50/month for the LVFS, but luckily for me they have a Cheap Bastard Plan (their words, not mine!) which swaps a bit of advertising for a free unlimited license. Sounds like a fair swap, and means it’s available for a lot of projects where $600/yr is better spent elsewhere.

Devo Firmware Updating

Does anybody have a Devo RC transmitter I can borrow for a few weeks? I need model 6, 6S, 7E, 8, 8S, 10, 12, 12S, F7 or F12E — it doesn’t actually have to work, I just need the firmware upload feature for testing various things. Please reshare/repost if you’re in any UK RC groups that could help. Thanks!

Fedora 24 Release Party in Singapore

As you might know, Fedora released its 24th version at the end of June! Recently, the Fedorans in Singapore had a party to celebrate the release. The release party was not only to celebrate the release itself, but also to commemorate Fedora’s open source journey so far. We invited people from diverse backgrounds to join us for a night of fun and open conversations (Singapore is a cosmopolitan country!).

Fedora 24 Release Party in Singapore: Fedora 24 DVDs

Some of the Fedora 24 DVDs and stickers for the party

We had an RSVP of over 50 folks and expected more to join in. We set up the Fedora banners and were ready to give out DVDs and stickers. However, on the day itself, there was a dropout rate of 60%, and only around fifteen folks turned up. Most of the folks who turned up were students interested in learning more about Fedora. Nevertheless, it was a cozy and warm party that everyone felt pretty comfortable with.

Getting Singapore party started

Fedora 24 Release Party in Singapore: Fedora 24 release party food

Kicking off the party with some pizza and drinks

Everyone had some pizza, chips and drinks before the talks began. Strange thing that happened: one of the pizza delivery men got into a traffic accident and badly injured his arm. The pizza company had to redo the pizzas and send another guy down. Hope he’s okay now. As for the talks, we had two talks lined up for the party: a Fedora release talk and a Tails talk. The release talk was done by one of the ambassadors in Singapore, Huiren Woo.

Fedora 24 Release Party in Singapore: Singapore release party talk

Huiren Woo delivering the talk on the new Fedora release

Interactive learning about Fedora

Our release party talk was focused on community and collaboration whilst still covering the new features and improvements in Fedora 24. At the end of the talk, we had a fun quiz to test out the knowledge of our audience and also to see if they were engaged in listening. We made use of an educational quiz tool, Kahoot, and allowed everyone to participate and win prizes. The top prize was a copy of The Open Organization with a signature from Jim Whitehurst.

Fedora 24 Release Party in Singapore: Fedora 24 quiz winners

Winners of the Fedora 24 release party quiz

These people were very smart and paid good attention to our talk! The other players also had lots of fun playing the quiz; some used nicknames such as “Bill Gates” and MSP (acronym for Microsoft Student Partner). At first, “Bill Gates” was leading the game but fortunately (and unfortunately), he lost to the tougher questions and got the 3rd prize, which was a copy of the 2015 yearbook and Catalyst-In-Chief.

Fedora 24 Release Party in Singapore: Fedora 24 small conversations

Discussing topics of open source and questions about the new Fedora 24 release

At the end of all the talks, most of the folks stayed behind and continued having conversations in small groups. We talked about B-tree filesystem (BTRFS), encryption, Wayland, Tor, and many more! Overall, it was quite a successful event with many engaged Fedorans. Thank you everyone for joining us for our release party!

Special thanks

Not to forget, thank you Amazon Web Services (AWS) for helping to sponsor our food at the event! PS: You can actually spin up Fedora Cloud instances and use tools on AWS EC2.

The post Fedora 24 Release Party in Singapore appeared first on Fedora Community Blog.

Building Fedora Rawhide images with Imagefactory


ImageFactory is a tool, built on Oz, that is suitable for generating various types of operating system and cloud images. These images can be generated in a variety of different formats. Those include the Docker image format and the qcow2 image format.

This article shows you how to build a Fedora Rawhide image in Imagefactory, then run it in a container via Docker. In the next article, you’ll see how to do the same thing, except with building qcow2 images for VMs.

Why Build a Fedora Rawhide Image?

Fedora Rawhide is, as you know, a constantly evolving version of the Fedora operating system that has nightly updates. While you can technically upgrade your OS to Rawhide, most users prefer to stick to the latest Fedora release because Rawhide is not 100% guaranteed to be stable. Still, you may want to test various tools and software on Rawhide, or even play around with Rawhide to see what it has to offer. In these cases, you will likely find container images and VMs useful and appealing.

About ImageFactory

This section gives a brief overview of ImageFactory and how it works. This is not a comprehensive overview of the tool. Instead, the main purpose of this section is to get you comfortable with how this tool works on a high level.


Before we begin, make sure that you have a working installation of ImageFactory. If you are unsure how to install and set it up, the following command installs ImageFactory along with the plugins used in this article:

$ sudo dnf install imagefactory imagefactory-plugins-TinMan imagefactory-plugins-Docker

What do I need to build an image?

You will need two files: a kickstart file and a template.

The Kickstart file is used to tell ImageFactory what to install and how to install it. Below is the official Kickstart file for creating a (minimal) Fedora docker base image:

To download the file to your current directory, run:

$ wget

The template is used to tell ImageFactory which OS you want to install (version, architecture, etc.) and where to install it from. It essentially describes what to build. An example template can be seen below:

        <install type='url'>

Templates can have either a .tdl or a .xml extension. (Note: You may want to change the root password to something more secure.)
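Since only one line of the example template survives above, here is a hedged sketch of what a complete Oz TDL for Rawhide might look like. Every name, version number, URL, and password below is a placeholder, not a value from the original article:

```xml
<template>
  <name>fedora-rawhide</name>
  <os>
    <name>Fedora</name>
    <!-- Rawhide tracks the next release; this version is a placeholder -->
    <version>25</version>
    <arch>x86_64</arch>
    <install type='url'>
      <!-- Replace with a real Rawhide repository URL -->
      <url>http://mirror.example.com/fedora/development/rawhide/x86_64/os/</url>
    </install>
    <!-- As noted above, change this to something more secure -->
    <rootpw>changeme</rootpw>
  </os>
  <description>Fedora Rawhide Docker base image</description>
</template>
```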

Modifying Oz to work with ImageFactory

Since ImageFactory is built on Oz, and Oz's default configuration may not allocate enough RAM to generate an image, you may need to modify your Oz configuration, which is located at /etc/oz/oz.cfg.

Here is the default oz.cfg file:

output_dir = /var/lib/libvirt/images
data_dir = /var/lib/oz
screenshot_dir = /var/lib/oz/screenshots
# sshprivkey = /etc/oz/id_rsa-icicle-gen

uri = qemu:///system
image_type = raw
# type = kvm
# bridge_name = virbr0
# cpus = 1
# memory = 1024

original_media = yes
modified_media = no
jeos = no

safe_generation = no

Edit the line which says # memory = 1024: uncomment it and change the value to 2048. That should be sufficient to build an image. Alternatively, you can run this command to make the change:

$ sudo sed -i -e 's/# memory = 1024/memory = 2048/' /etc/oz/oz.cfg
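If you would like to sanity-check the substitution before touching the real file, you can run the same sed expression against a scratch copy first (a sketch; the /tmp path is arbitrary):

```shell
# Write the stock line to a scratch file, apply the same sed edit,
# and confirm the result before editing /etc/oz/oz.cfg itself.
printf '# memory = 1024\n' > /tmp/oz-test.cfg
sed -i -e 's/# memory = 1024/memory = 2048/' /tmp/oz-test.cfg
cat /tmp/oz-test.cfg
```

The output should read memory = 2048; the same expression can then be applied to the real config with sudo.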

Building a Docker container image

The general command for building a base image is:

# imagefactory --debug base_image --file-parameter install_script <kickstart> <template> --parameter offline_icicle true
  • The install_script parameter tells ImageFactory which kickstart file to use when building your image.
  • The offline_icicle parameter tells Oz (via ImageFactory) to use features of libguestfs to mount the image offline, chroot into it, then execute an RPM command. (Normally, Oz derives this information by launching a throwaway version of your image, ssh-ing into it, then running an RPM command. However, because we are building a container image, the actual output is not something that can be booted as a VM, which is why we must derive the ICICLE “offline”.)

This process will take more than 10 minutes to run if you do not have a cached copy of the RPMs in ImageFactory. Otherwise, it should take anywhere from 5 to 10 minutes to complete.

Once the process is finished, it prints the ID of the image you just created. The final output will look something like this:

============ Final Image Details ============
UUID: 17206f41-5bd8-4578-84b9-a3fffc1cd168
Type: base_image
Image filename: /var/lib/imagefactory/storage/17206f41-5bd8-4578-84b9-a3fffc1cd168.body
Image build completed SUCCESSFULLY!

Note: If the image build fails, then most likely Rawhide is broken, in which case you may want to run the tutorial using the latest Fedora release instead. At the time of this article, that’s Fedora 24; substitute the URL of the F24 repository for the URL in the template above. (Be sure to change the Fedora release number, etc. so your image information is accurate!) On the other hand, if you receive an SELinux error, try running setenforce 0 in your terminal.

Preparing the image for Docker

Next, we need to “docker compress” our image in order to prepare it for loading into Docker:

# imagefactory --debug target_image --id <UUID> docker --parameter compress xz

This time, replace <UUID> with the UUID printed in the final output, not the image filename.

Once this process completes, you now have your Docker base image!

To load your new image into Docker:

docker load -i full/path/to/compressed/image/filename


August 23, 2016

Install Audacity with MP3 support - Fedora 24
Hello Fedora People,
To install Audacity with MP3 support on Fedora 24, type the following command in the terminal:

dnf install audacity-freeworld

(don't forget to enable the "free" and "non-free" repositories first; see the linked guide to learn how).
FOSS Wave: Delhi, India

FOSS Wave in Delhi, India: Getting started for the day

Open source is the new trend. With major corporations moving towards open architecture, adopting open source tools, and even releasing their internal projects as open source, your contributions are especially worthy. But before starting to contribute, many people face the same common set of questions: How can they start? How should they introduce themselves to the community? And where can they contribute? To answer these questions, I planned a session on free and open source software (FOSS) and Fedora at the Northern India Engineering College in Delhi, India.

During the planning phase, I got in touch with Sumantro, who is himself an open source enthusiast and a contributor to various open source projects, including the Fedora Project. With his help, we planned the agenda and gathered the resources to conduct the session. On August 12, 2016, this session on FOSS and Fedora was conducted to:

  • Answer these questions
  • Bring up new people in the open source arena
  • Show where they can contribute, learn and make an impact

Starting the day in Delhi

The session started with small questions. People who had tried their hand at contributing to open source shared what problems they faced during their journey. The answers ranged from issues working with project codebases to problems figuring out where to start.

The main focus of the session was to make the participants aware of what FOSS is and how it benefits the community. We also cleared up the common misconception that freeware and FOSS are the same thing. Participants were given a brief overview of how to start their open source journey: we walked them through everything from identifying the project they want to contribute to, to sending their introduction mails to the project's mailing lists. The session then moved on to where participants can contribute and what areas of contribution they can work in, both technical and non-technical. We emphasized that a contributor doesn’t need to know how to code to get started with contributing to a project.

FOSS Wave in Delhi, India: Audience asking questions about open source

Introducing Fedora

After the introductory session on FOSS, we went ahead with our agenda and introduced the Fedora Project and the community behind it: what the Fedora Project is, what its mission is, and how the participants can get started with Fedora. The participants were guided on how to create their identity in the Fedora Project by signing up on FAS, which they can then use to access various Fedora applications and resources. The session moved on to how contributors can join the project's mailing list and introduce themselves to the community, where they can get help starting their contributions. The main focus of the Fedora session was to introduce the participants to the Fedora Quality Assurance (QA) team and release validation testing.

Extending on the basic idea, we introduced Bodhi and package testing. Through a live demonstration, participants learned how to start with package testing. The demo covered how to log into Bodhi using FAS and how to enable the updates-testing repository on Fedora to get the packages in testing. Participants were given an overview of the karma system: what karma points mean and how they should give karma. The session proceeded with an overview of release validation and why it is important. Participants were introduced to Fedora's different development channels, like Rawhide and branched, and what they mean. A demo of release validation using relval was also provided.

Introducing Git and more questions

The next session of the event focused on getting started with Git. Git is a version control system used by many individuals and corporations to manage their source code; it also keeps track of the changes made by other developers. During this session, participants were introduced to a basic Git workflow: how to initialize a Git repository, add a remote repository, and pull the project source code. Participants were then guided through making their first Git commits and shown how it all works, including the associated benefits of this type of system. Moving on, participants saw how various open source projects use version control systems like Git to manage their source code and accept contributions.
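The workflow covered in the session can be sketched in a few commands (the directory, identity, and remote URL below are placeholders):

```shell
# A minimal first-commit workflow: initialize a repository,
# stage a file, commit it, and view the history.
mkdir -p /tmp/git-demo && cd /tmp/git-demo
git init -q
git config user.email "you@example.com"  # placeholder identity
git config user.name "Your Name"
echo "Hello, open source" > README
git add README
git commit -q -m "Initial commit"
git log --oneline
# Working with a remote project would add steps like:
#   git remote add origin <project-url>
#   git pull origin <branch>
```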

The event ended with an open question-and-answer session. Participants asked a variety of questions about open source projects, covering things like the availability of paid opportunities in open source, competitions in open source, and more. In answering these questions, participants learned about programs like Google Summer of Code, various conferences organized by open source projects, and the recognition models used by these projects.

FOSS Wave in Delhi, India: Covering using Git version control system

Author credits: Saurabh Badhwar

Image courtesy Andrew Illarionov – originally posted to Unsplash as Untitled.

The post FOSS Wave: Delhi, India appeared first on Fedora Community Blog.

Linux Day [Panamá 2016]
First steps: On August 6th, the Floss-Pa and Fedora Panamá teams celebrated Linux Day. The event was held at the Universidad Interamericana de Panamá [UIP]. Here is a summary of everything that happened at the event. In early May…

August 22, 2016

All systems go
Service 'COPR Build System' now has status: good: Everything seems to be working.
There are scheduled downtimes in progress
Service 'COPR Build System' now has status: scheduled: scheduled outage of backend
Setting up a home music system with Raspberry Pi3 and MPD

I had a Raspberry Pi3 in my home office (actually, it had been missing for a few months). I found it two nights back and decided to put it to use. The best part is the on-board WiFi, which means I can put it anywhere in the house and still access it. I generally use Raspbian on my Pi(s), so I did the same this time too. After booting from a fresh SD card, I did the following to install Music Player Daemon.

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install mpd
$ sudo systemctl enable mpd
$ sudo systemctl start mpd

This got MPD running on the system. The default location of the music directory is /var/lib/mpd/music; you can change the location in the /etc/mpd.conf file. But this time, whenever I changed a song, the service stopped and I had to restart it. After poking around for some time, I found that I had to uncomment the following in the mpd.conf file.

device "hw:0,0"

I also changed the value of mixer_type to software, which enables software volume control. After a restart, everything worked as planned. I have MPD clients on my phone (and also on Anwesha’s phone and on my mother-in-law’s tablet), as well as on my laptop.
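For reference, the relevant audio_output block in /etc/mpd.conf ends up looking roughly like this (a sketch; the output name is arbitrary and the ALSA device name depends on your hardware):

```
audio_output {
    type        "alsa"
    name        "ALSA output"
    # the device line is the one that needed uncommenting
    device      "hw:0,0"
    # "software" enables volume control from MPD clients
    mixer_type  "software"
}
```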

Above is a screenshot of the client on my phone. On my Fedora laptop, I installed a new client called Cantata.

$ sudo dnf install cantata

If you have any tips on MPD, or on similar setups, feel free to comment, or drop a note on twitter/mail. Happy music everyone.

DNF 1.1.10 and DNF-PLUGINS-CORE 0.1.21-3 Released

Another stability release of DNF and DNF-PLUGINS-CORE has been made. This release should eliminate the most critical bugs; in particular, the Unicode tracebacks are fixed and the COPR plugin should work in Korora again. More information about the release can be found in the DNF and plugins release notes.

New release: usbguard-0.5.14

Looks like a fast release cycle. Issue #119 was important enough to force another quick bugfix release. This should be the last release in the 0.5.x milestone. New IPC and CentOS/RHEL 7 compatibility are coming in the next!


  • Fixed unknown descriptor type handling


Many thanks to the following people for contributions to this release and to the USBGuard project:


If you are using Fedora or the USBGuard Copr repository, run:

$ sudo dnf update usbguard


Signed release tarball can be downloaded from the USBGuard release page at GitHub:

SHA256(usbguard-0.5.14.tar.gz)= e8f150539c4b2a7b487193a63d61074063919f8396bf844a049b77c18356e3de
New role as Fedora Magazine editor in chief
Picture from Flock 2016 with Justin W. Flory and Ryan Lerch for Fedora Magazine

Me (left) with Ryan Lerch (right) at Flock 2016

Today, I am pleased to announce my new role as the Fedora Magazine editor-in-chief. I am receiving the torch from Ryan Lerch, who has decided to shift focus to other areas of the Fedora Project. Ryan has helped lead the Magazine, edit pieces from other contributors, contribute his own pieces, and decide strategic direction for the Magazine.

He leaves big shoes to fill, but I hope to offer my own leadership, creativity, and direction in the coming years as well. I’d like to thank Ryan, Paul Frields, and Remy DeCausemaker for their mentorship and guidance as I became involved with Fedora and the Magazine. I’m excited to have the opportunity to help guide the Fedora Magazine in how it fits with the rest of Fedora.

History of the Magazine

The Fedora Magazine began in late 2013, replacing the former publication, Fedora Weekly News. The Magazine delivers all official news and announcements for the Fedora Project, covers how-to guides on using software available in Fedora, and other general tips and tricks for using Fedora. The Magazine has had a large number of contributors assist in getting it off the ground and to where it is now.

A special thanks goes out to Ryan, Paul, and Chris Roberts for helping bring the Magazine to where it is today. There are surely many more names than these too worth mentioning, but it would be impossible for me to cover them all here.

Write for the Magazine!

Want to write your own article for the Magazine? Know a useful piece of software you want to share with other Fedora users? Want to write about how using Fedora made something easier for you? Or maybe your own “top 5” list of tools for doing an everyday task? Come write for us! If you’re interested, check out this guide about writing an article, and then how to write a pitch for the Magazine team to review.

Book by Sergey Demushkin from the Noun Project.

The post New role as Fedora Magazine editor in chief appeared first on Justin W. Flory's Blog.

Flock 2016 in Krakow – Recap

The fourth annual Flock conference for Fedora contributors took place from August 2nd-5th in Krakow, Poland. Over 200 developers and enthusiasts from different continents met to learn, present, debate, plan, and celebrate. Although Fedora is the innovation source for a major Red Hat product (Red Hat Enterprise Linux), this event received “gold” level sponsorship from a sister community — openSUSE. openSUSE serves the same function for SuSE Linux Enterprise as Fedora does for RHEL. SUSE showed the fellowship that rules in the open source world, which is why we love it!

The first two days of Flock centered around presentations. The first step was to present numbers showing the past year’s accomplishments. This was day one’s opening keynote by Fedora Project Leader Matthew Miller. The number of contributors has reached 2000 and is growing. Miller also shared his goals for the upcoming year: convert to Python 3, start implementing modularity, and develop Fedora Hubs.

Then the day turned into dozens of breakout sessions. The hottest topics were containers, Project Atomic, security, outreach efforts, community building, and team updates and plans from around the Fedora community. In the evening, people could stay in the Flock hotel to play board games, or travel to the city center for guided tours. The guides were funny and taught us a lot about Polish culture and history. You’d never believe why they decided to keep a big city tower at a time when citizens were so poor they were tearing down buildings to sell the bricks.

On day two, a second keynote was delivered by Radosław Krowiak about teaching programming to children at Akademia Programowania in Krakow. The talk was about much more than just programming, though; it was about creativity. Krowiak stated that scientific subjects like engineering, math, or physics aren’t taught in creative environments and aren’t considered cool. Therefore, kids don’t realize how important these subjects are. Still, what was the most awe-inspiring “aha” moment of this session? The results of a creativity test designed for NASA by George Land. According to Land’s testing, by the time we reach adulthood, we retain only 2% of the creativity we had as five-year-olds.

Krowiak explained most educational systems strip us of inventive thinking and implant a conventional “This is how it’s always been done” mindset. The setting for this talk was perfect, since Flock offers a solution to this problem. The conference helps you join a community that creates original hacking solutions for collaboration. That community is also part of the driving force in the major industry of the century. Once again, dozens of sessions rounded out the day. The evening program of day two was a boat trip on a river in Krakow. Could it get more awesome?

The last two days shifted from theory to practice, and from ideas to planning. The workshop schedule started with the largest group of lightning talks yet at Flock. On days three and four, you could build your first module, a kernel, an official OpenShift Origin instance on Fedora, a Fedora badge, or a Fedora containers library. Alternatively, you could help plan Fedora security, documentation, budget, marketing, or Fedora infrastructure, or meet the Fedora Ambassadors Steering Committee (FAMSCo). The sequence of three evening events ended with class at a local Krakow brewery.

Flock organizers went to great effort to land a first class event. The community showed they go the extra mile to make attendees feel welcome and help you get involved. Are you creative? Do you like learning new technology and meeting new people? Are you good with languages? If you answered at least one yes, Fedora can be for you.

Here are some parting comments from attendees:

It was amazing event with a lot of information. Thanks to everybody and remember that #fedoralovespython! — Lumír Frenzy Balhar

Feeling the post-#flocktofedora mix of energized and exhausted. Thanks to everyone who makes this conference so amazing every year! — Matthew Miller

It was amazing [sic] to be part of it! — Redon Skikuli

For more details, check out the videos, a list of blog posts written about Flock, pictures, or the Twitter feed. You can explore contributor options, read a contributor guide (or a shorter PDF guide), or try Fedora itself. You can also contact the Marketing team.


The cost of mentoring, or why we need heroes
Earlier this week I had a chat with David A. Wheeler about mentoring. The conversation was fascinating and covered many things, but the topic of mentoring really got me thinking. David pointed out that nobody will mentor if they're not getting paid. My first thought was that it can't be true! But upon reflection, I'm pretty sure it is.

I can't think of anyone I mentored where a paycheck wasn't involved. There are people in the community I've given advice to, sometimes for an extended period of time, but I would hesitate to claim I was a mentor. Now, equating this purely to a paycheck would be incorrect and inaccurate. There are plenty of mentors in other organizations that aren't necessarily getting a paycheck, but I would say they're getting paid in some sense of the word. If you're working with at-risk youth, for example, you may not get paid money, but you do have the satisfaction of knowing you're making a difference in someone's life. If you mentor kids as part of a sports team, you're doing it because you're getting value out of the relationship. If you're not getting value, you're going to quit.

So this brings me to the idea of mentoring in the community.

The whole conversation started because of some talk of mentoring on Twitter, but now I suspect this isn't something that would work quite like we think. The basic idea would be you have new young people who are looking for someone to help them cut their teeth. Some of these relationships could work out, but probably only when you're talking about a really gifted new person and a very patient mentor. If you've ever helped the new person, you know how terribly annoying they become, especially when they start to peak on the Dunning-Kruger graph. If I don't have a great reason to stick around, I'm almost certainly going to bail out of that. So the question really is can a mentoring program like this work? Will it ever be possible to have a collection of community mentors helping a collection of new people?

Let's assume the answer is no. I think the current evidence somewhat backs this up. There aren't a lot of young people getting into things like security, or open source in general. We all like to think we got where we are through brilliance and hard work, but we all probably had someone who helped us out. I can't speak for everyone, but I had some security heroes back in the day: groups like the L0pht, Cult of the Dead Cow, Legion of Doom, and 2600, people like Mitnick, as well as a handful of local folks. Who are the new heroes?

Do it for the heroes!

We may never have security heroes like we did; it's become a proper industry, and I don't think many mature industries have new and exciting heroes. We know who Chuck Yeager is, but I bet nobody could name five test pilots anymore. That's OK though. You know what happens when there is a solid body of knowledge that needs to be moved from the old to the young? You go to a university. That's right: our future rests with the universities.

Of course, it's really easy to say this is the future; making it happen will be a whole different story. I don't have any idea where we start, though I imagine people like David Wheeler have ideas. All I do know is that if nothing changes, we're not going to like what happens.

Also, if you're part of an open source project, get your badge

If you have thoughts or ideas, let me know: @joshbressers

August 21, 2016

GSoC 2016: That’s a wrap!

Tomorrow, August 22, 2016, marks the end of the Google Summer of Code 2016 program. This year, I participated as a student for the Fedora Project working on my proposal, “Ansible and the Community (or automation improving innovation)“. You can read my original project proposal on the Fedora wiki. Over the summer, I spent time learning more about Ansible, applying the knowledge to real-world applications, and then taking that experience and writing my final deliverable. The last deliverable items, closing plans, and thoughts on the journey are detailed as follows.

Deliverable items

The last deliverable items from my project are two (2) git patches, one (1) git repository, and seven (7) blog posts (including this one).

Closing plans

At the end of the summer, I was using a private cloud instance in Fedora’s infrastructure for testing my playbooks and other resources. One of the challenges towards the end of my project was moving my changes from my local development instance into a more permanent part of Fedora’s infrastructure. Because I am not a sponsored member of the Fedora system administration group, I had some issues running my playbooks in a context and workflow specific to Fedora’s infrastructure and set-up.

My current two patches were submitted to my mentor, Patrick. Together, we worked through some small problems with running my playbook in the context of Fedora’s infrastructure. There may still be some small remaining hoops to jump through for running it in production, but any remaining changes to be made should be minor. The majority of the work and preparation for moving to production is complete. This is also something I plan to follow up on past the end of the GSoC 2016 program as a member of the Fedora Infrastructure Apprentice program.

My patches should be merged into the ansible.git and infra-docs.git repositories soon.

Reflection on GSoC 2016

As the program comes to a close, there’s a lot of valuable lessons I’ve learned and opportunities I’m thankful to have received. I want to share some of my own personal observations and thoughts in the hopes that future students or mentors might find it useful for later years.

Planning your timeline

In my case, I spent a large amount of time planning my project timeline before the summer. Once the summer began, though, I found that the timeline on my student application was too broad and general: while it covered the big points, it lacked smaller milestones to work towards, which made it difficult to make progress at first. Creating smaller milestones and goals for the bigger tasks makes them easier to work through on a day-by-day basis and adds a sense of accomplishment to the work you are doing. It also helps shape the direction of your work in the short term, not just the long term.

For an incoming Google Summer of Code student for Fedora (or any project), I would recommend creating the general, “big picture” timeline for your project before the summer. Then, if you are accepted and beginning your proposal, spend a full day creating small milestones for the bigger items. Try to map out accomplishments every week and break down how you want to reach those milestones throughout the week. I started using TaskWarrior with an Inthe.AM Taskserver to help me manage weekly tasks going into my project, but it’s important to find a tool that works for you. Reach out to your mentor about ideas for tools. If possible, your mentor should also have a way to view your agenda and weekly tasks. This will help make sure your goals are aligned with the work you are doing, so you finish on time.

I think this kind of short-term planning or task management is essential for hitting the big milestones and being timely with your progress.

Regular communication

Consistent and frequent communication is also essential for your success in Google Summer of Code. This can be different depending on the context of how you are contributing to the project. For a normal student, this might just be communicating about your proposal with your mentor regularly. If you’re already an active contributor and working in other areas of the project, this might be spending extra time on communicating your progress on the GSoC project (but more on that specifically in the next section).

Regardless of the type of contributor you are, one thing is common and universal – be noisy! Ultimately, project mentors and GSoC program administrators want to be sure that you are spending the time on your project and making progress towards accomplishing your goals. If you are not communicating, you will run the highest risk of failing. How to communicate can vary from project to project, but for Fedora, here’s my personal recommendations.

Blog posts

Even for someone like me who spends a lot of time writing already, this can be a difficult thing to do. But no matter how hard it is to do it, this is the cornerstone for communicating your progress and leaving a trail for future students to learn from you as well. Even if you’ve had a difficult week or haven’t had much progress, take the time to sit down and write a post. If you’re stuck, share your challenges and share what you’re stuck on. Focus on any success or breakthroughs you’ve made, but also reflect on issues or frustrations you have had.

Taking the time to reflect on triumphs and failures is important not only for Google Summer of Code, but also for the real world beyond it. Not everything will go your way, and there will be times when you will face challenges that you don’t know how to resolve. Don’t burn yourself out trying to solve those kinds of problems alone! Communicate about them, ask for help from your mentors and peers, and make it an open process.

IRC check-ins

Whether in a public channel, a meeting, or a private one-on-one chat with your mentor, make sure you are both active and present in IRC. Make sure you are talking and communicating with your mentor on a regular basis (at a minimum, weekly). Taking the time to talk with your mentor about your challenges or progress is helpful for them so they know what you’re up to or where you are in the project. It also provides a chance for them to offer advice and oversight into your direction and potentially steer you away from making a mistake or going into the wrong direction. It is demotivating when you’ve spent a lot of time on something and then later discovered it either wasn’t necessary or had a simpler solution than you realized.

Make sure you are communicating often with your mentor over IRC to make your progress transparent and to also offer the chance for you to avoid any pitfalls or traps that can be avoided.

Hang out in the development channels

As a Fedora Google Summer of Code student, there are a few channels that you should be present in on a regular basis (a daily presence is best).

  • #fedora-admin
  • #fedora-apps
  • #fedora-summer-coding
  • Any specific channel for your project, e.g. #fedora-hubs

A lot of development action happens in these channels, and people who can help you with problems are available there. This also gives you the opportunity to see what communication in an active open source project looks like. You should at least be present and reading the activity in these channels during the summer; participation is definitely encouraged as well.

Balancing project with open source contributions

I think my single most difficult challenge with Google Summer of Code was balancing my proposal-specific contributions with the rest of my contributions and work in the Fedora Project. I believe I was in the minority of Google Summer of Code students, having applied to the program as an active member of the project almost a full year before it began. Additionally, my areas of contribution in Fedora before GSoC were mostly unrelated to my project proposal, which instead aligned with the degree I am pursuing. A lot of the technology I would be working with was new to me, and I had minimal knowledge of it before the summer. As a result, this presented a unique set of challenges and problems I would face throughout my project.

The consequence was that I had to spend a lot more time researching and becoming familiar with the technology before advancing to creating the deliverable items. A great resource for learning about Ansible was Ansible for DevOps by Jeff Geerling. But I spent more time learning and “trying out the tech” than I had anticipated.

This extra time spent on research and experimentation came in tandem with my ongoing contributions in other areas of the project, like Community Operations, Marketing, Ambassadors, the Diversity Team, and, as of recently, the Games SIG. Balancing my time between these different areas, including GSoC, was my biggest challenge over the summer (along with a separate, part-time job on weekends). Dividing my time between different areas of Fedora became essential for making progress on my project. What worked well for me was setting short-term goals (by the hour or day) that I wanted to hit and carry out. Until those goals were reached, I wouldn’t focus on anything other than those tasks.

Special thanks

I'm grateful to those who offered their mentorship, time, and guidance to make me a member of the GSoC Class of 2016. Special thanks go to Patrick Uiterwijk, my mentor for the program. I've learned a lot from Patrick over these past few months and enjoyed our conversations. Even though we were both running around the entire week, I'm glad I had the chance to meet him at Flock 2016 (and hope to see him soon at FOSDEM or DevConf)! Another thanks goes to one of my former supporting mentors and program administrator, Remy DeCausemaker.

I’m looking forward to another year and beyond of Fedora contributions, and can’t wait to see what’s next!

The post GSoC 2016: That’s a wrap! appeared first on Justin W. Flory's Blog.

End of the first Campus Party Weekend Recife. Twenty-six hours...

End of the first Campus Party Weekend Recife. Twenty-six hours of content and many different topics.

This edition had nothing specific about our community, but contact with people from other groups is always very worthwhile.

#CPWeekend #CPRecife5

Good morning with GNU. PH Santana talks about GNU, GPL, free software...

Good morning with GNU.

PH Santana talks about GNU, GPL, free software, and open source.

#CPWeekend #CPRecife5

A talk on women's leading role in their own empowerment in...

A talk on women's leading role in their own empowerment in technology and programming: success stories, resources and applications of the language, how to include and retain women in technology, and why everyone should learn Python.

#CPWeekend #CPRecife5

Demystifying Joomla!

A talk about Joomla at Campus Party Weekend - Recife.

#CPWeekend #CPRecife5

Streamed by PH Santana

Free software applied to design, architecture, and games
<figure class="tmblr-full" data-orig-height="960" data-orig-width="1280"></figure>

A talk by Allan Brito at Campus Party Weekend - Recife.

The use of free software in game design and architecture:

Streamed by PH Santana

#CPWeekend #CPRecife5

August 20, 2016

Fedora 25: Wayland becomes the default display server for Workstation

On Thursday, FESCo (the Fedora Engineering Steering Committee) decided that Wayland will be the default display server for the Workstation variant of Fedora 25. It will, however, still remain possible to use X as the display server. The other Fedora desktop spins are free to choose which display server they use by default.

FESCo does, however, reserve the option of reversing this decision should bugs turn up in Wayland during the further development of Fedora 25 that would no longer justify its use as the default display server.

Update 2017-08-21 10:07: The link to the FESCo ticket in the article has been corrected.

The published news items are compiled to the best of our knowledge and belief. No guarantee is given for their completeness and/or accuracy.

Fedora Meetup Pune August 2016

The Fedora Pune Meetup for the month of August 2016 happened today at our usual location. In total, 12 people turned out for the meetup.

The event started with introductions, and we had two newcomers joining us this time, Trishna and Prathamesh.

This time the event was mostly focused on re-writing the GNU C Library Manual using reStructuredText and Sphinx. This task was decided during the release event we had last month. We created an Etherpad link to track the status of the task1.

The aim is to build a modern, good-looking version of the GNU C Library Manual.

In today's meetup, we sat down and tried to complete the chapters we picked. A couple of us sent PRs to the docs repo we are maintaining on GitHub. The generated Read the Docs output can be seen here.
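Since the conversion uses Sphinx, the skeleton of such a project is small. A minimal `conf.py` sketch follows; the project metadata and theme below are illustrative placeholders, not the actual repo's settings:

```python
# conf.py -- minimal Sphinx configuration for a reST rebuild of the
# manual. All values below are illustrative; the real repo may differ.
project = "The GNU C Library"
author = "Free Software Foundation"
master_doc = "index"        # top-level document holding the toctree
extensions = []             # add e.g. "sphinx.ext.todo" while porting
html_theme = "alabaster"    # Sphinx default; Read the Docs applies its own

# Build locally with: sphinx-build -b html . _build/html
```

Read the Docs picks this file up automatically, so the same source tree builds both locally and on the hosted docs.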

If you are planning to contribute, ping /me (sayan) or kushal in #dgplug channel on Freenode.

GSoC: Final Submission

This summer has been really amazing: I learnt a lot and worked crazy hours, and it has been a crazy yet amazing ride. I am not going to stop working on open source projects, and Pagure is something really close to my heart.

There are a few things left, but I can conclude that I was able to achieve what I set out to at the beginning of this program. Still, there is never a feeling of full satisfaction; you always want the best possible, most beautiful solution.

Pagure now has CI integration, which was one of my major goals, and with the coming release it will be out and usable. It gives me immense pleasure to say that the foundation of CI was laid by me. Pingou wrote quite a lot on top of it afterwards, which taught me the depth of thinking you need when working on a feature like this.

I also worked on the private repo feature, which took more time than expected and was pretty challenging to achieve. This feature is in a feature branch and may get merged after it has been checked on staging first.

It was so challenging that I got stuck on a data-fetching problem with the database (we use SQLAlchemy as the ORM in Pagure). I went through a lot of ups and downs, and at times I was about to give up, but then I would crack some small part of it. Pingou has been an amazing mentor: he never spoon-fed me; instead he asked the right questions, and the moment he asked, the idea bulb would light up.

I still remember struggling with Xapian and Whoosh. This was, and still is, a very big task; it requires a lot of time to optimize to a level where it doesn't slow the site down. I spent a lot of time on it, but since I had a few other goals and various issues to solve, I eventually moved on to those, planning to come back.

Pagure Pages is one of the last goals I worked on recently, and there are discussions pending on it.

At a glance, I was able to achieve a lot of the big goals in my proposal, though work still has to be done, and I will continue working towards the other goals. A few links I want to share:

Commits made to the master branch

Commits on private-repo branch on pagure 

Pull-request for static page hosting

It kind of makes me feel happy that I have around 102 commits on the master branch now, and I believe I will be working a lot more on Pagure to bring many cool and useful features to it. In case you have any suggestions, feel free to file issues on Pagure.

To be really frank, I am not at all sad that GSoC is ending, because I have received so much love and inspiration from the Fedora community that contributing to projects has become my daily routine; the day I don't commit code, review patches, or comment on issues, I start feeling something is missing.

And, as some of my fellow GSoCers said: That's all folks! ;)

Happy Hacking!


Fedora/RISC-V, steady progress

davidlt has done an amazing job building RISC-V RPMs:

I also managed to boot our “stage 3” filesystem on the real FPGA hardware. Unfortunately it’s extremely crashy:

# ldconfig /usr/lib64 /usr/lib /lib64 /lib
disk cannot read 4096 bytes @1544056832!

This is in the HTIF / SD-card access layer, which we have full source for, so at least it can be fixed.

More Open Source is Good Open Source
A few days ago, Microsoft announced that it has released PowerShell under the MIT license for Linux (and Mac OS X). Perhaps surprisingly, this has brought a number of folks out of the woodwork to gripe about Microsoft… releasing something as open source. Microsoft isn’t perfect, not by a long shot, but this is not […]
What's my next badge?

I love Fedora Badges. I'm not saying all I do is to get more badges, but they are a great motivator. One thing that's somewhat missing is guidance on what options I have: what should I do to get another badge, and how much activity will it need?

The Fedora Project is not the only community that awards badges to its members; Stack Overflow, for example, has badges as well. In your Stack Overflow profile you can see which badge you are likely to get next and how much progress you have made towards it.

<figure> Badges on Stack Overflow<figcaption>Badges on Stack Overflow</figcaption> </figure>

Is it possible to do something like this for Fedora badges? It turns out it kind of is. There is actually a related issue filed for the awesome Fedora Hubs project to show options for next badges.

All of this actually relies on having information about badge paths, but until that is available in production, it can be reasonably hacked together based on badge names and a short list of exceptions.

Baby steps

The first thing that comes to mind is to simply look at badge statistics: the ones that are awarded more often are most likely the easiest to get. Let's start by finding the 5 most common badges that I don't have yet.

However, just taking that list is not a particularly good suggestion: in my case, 3 of these badges turn out to be "have a FAS account for at least X years". I'm slowly getting there, but there's not much I can do to speed that up. It makes sense to only show the first badge from each series.
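The filtering above can be sketched in a few lines of Python. The roman-numeral suffix heuristic for detecting a badge series is my own stand-in for real badge-path data, and the award counts are invented sample data:

```python
import re

# Badges in a series carry roman-numeral suffixes like foo-i, foo-ii.
ROMAN_SUFFIX = re.compile(r"-(?:i|ii|iii|iv|v|vi|vii)$")

def suggest_next_badges(award_counts, my_badges, n=5):
    """Pick the n most-awarded badges the user lacks, showing only the
    first unowned badge of each roman-numeral series.

    award_counts maps badge name -> times awarded across all users;
    more awards is taken as a proxy for 'easier to get'."""
    seen_series = set()
    suggestions = []
    for name, _count in sorted(award_counts.items(), key=lambda kv: -kv[1]):
        if name in my_badges:
            continue
        series = ROMAN_SUFFIX.sub("", name)   # strip the series suffix
        if series in seen_series:
            continue                          # already suggesting a lower badge
        seen_series.add(series)
        suggestions.append(name)
        if len(suggestions) == n:
            break
    return suggestions
```

With counts like `{"egg": 1000, "tadpole-i": 800, "tadpole-ii": 300, "crypto-panda": 200}` and `{"egg"}` already owned, this suggests `tadpole-i` and `crypto-panda`, skipping `tadpole-ii` because it is a later step of the same series.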

Suggested badges: dont-call-it-a-comeback, egg, curious-penguin-ask-fedora-i, crypto-panda, speak-up!

Progress towards next badge

Before we can estimate progress on getting a badge, it is important to understand how the badges are awarded. The system is based on the messaging bus1. The fedbadges service listens to the bus and every time it sees a message, it checks it against the rules it has defined.

The process starts with a simple check on the message content to make sure the message is connected to some badge. If it is, more complex checks are performed. These checks communicate either with pkgdb or with datanommer, the service that archives old messages.
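A toy version of that two-stage flow looks like this. The rule format (a trigger topic plus a count threshold) is my own simplification for illustration; real fedbadges rules are YAML and considerably richer:

```python
def rule_matches(message, rule, count_matching):
    """Two-stage badge check, cheapest test first.

    message        -- one bus message as a dict with a 'topic' key
    rule           -- {'trigger_topic': ..., 'threshold': ...} (simplified)
    count_matching -- callable standing in for a datanommer query that
                      counts the user's past messages on a topic
    """
    # Stage 1: is this message even relevant to the badge?
    if not message.get("topic", "").endswith(rule["trigger_topic"]):
        return False
    # Stage 2: only now run the expensive history check.
    return count_matching(rule["trigger_topic"]) >= rule["threshold"]
```

The point of the ordering is that stage 1 is a string comparison done for every message on the bus, while stage 2 hits the archive and only runs for the few messages that pass.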

Now, obviously I'm not keen on reimplementing the whole rule engine. Fortunately, it is possible to reuse the code from fedbadges. All the badges I care about for this part only need the datanommer integration, so that is a big help.

The biggest issue I faced here is that fedbadges connects directly to datanommer's database, which I can't do. My workaround was to write a script that downloads all messages related to me from datagrepper and stores them locally. This works reasonably well for me personally, but fetching the messages of someone who has been active for longer is going to be an issue.
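The download itself is just paging through datagrepper's `/raw` endpoint. A sketch of the pagination logic follows; the endpoint and its `user`/`page`/`rows_per_page` parameters are datagrepper's, but double-check them against the current API documentation:

```python
import json
import urllib.request

DATAGREPPER = "https://apps.fedoraproject.org/datagrepper/raw"

def fetch_all(fetch_page):
    """Collect every message from a paged datagrepper-style API.
    fetch_page(page) must return a dict with 'raw_messages' (the
    messages on that page) and 'pages' (the total page count)."""
    messages = []
    page = 1
    while True:
        data = fetch_page(page)
        messages.extend(data["raw_messages"])
        if page >= data["pages"]:
            break
        page += 1
    return messages

def datagrepper_page(user, page, rows=100):
    """Real fetcher: one page of a user's messages over the network."""
    url = f"{DATAGREPPER}?user={user}&rows_per_page={rows}&page={page}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

Separating the paging loop from the network fetcher also makes the loop easy to test with canned pages, without touching the real service.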

Unfortunately, the list of messages related to a particular user is not enough for all badges: Bodhi has badges for other people voting on your updates. Therefore we also need all messages related to the updates a person creates.
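With all messages cached locally, progress towards a counting badge reduces to a filter and a ratio. A minimal sketch, where the topic suffix and threshold are illustrative rather than real badge definitions:

```python
def badge_progress(messages, topic_suffix, threshold):
    """Progress towards a counting badge: how many cached messages
    match the badge's topic, out of the awarding threshold."""
    done = sum(1 for m in messages
               if m.get("topic", "").endswith(topic_suffix))
    return min(done, threshold), threshold
```

For a hypothetical "pushed 50 commits" style badge this yields tuples like `(47, 50)`, which is exactly the shape of the progress numbers shown below.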

Progress bars towards my next badges: 47/50, 8/10, 6/10, 11/20, 25/50, 438/999, 2/5, 146/400, 87/250, 34/100, 24/100, 1/5.

Another problematic badge is the Cookie one: the number of cookies you have gets reset every release cycle, so despite the number 8 above, I actually only have 1 right now.

I want it too

If you want to experiment with this code, I put it on Pagure as my-next-badge; there are instructions in the README. I didn't try to optimize it in any way (yet), so it needs a lot of memory, as all the messages must fit in it. In my case, that is about 40 MiB. For other people it might be significantly more; by "more" I mean it can easily be several gigabytes.

If you decide to try this, please note that the script is a hack and may not always be correct. It may try to convince you that you satisfy the conditions for some badge even though you don't have it. Take it with a grain of salt.

<section class="footnotes">
  1. Well, almost. Some badges are awarded manually. We can ignore that here.