Fedora summer-coding Planet

Project Idea: PI Sw1tch

Posted by Mo Morsi on February 18, 2017 05:02 PM

While gaming is not high on my agenda anymore (... or rather at all), I have recently been mulling buying a new console to act as much a home entertainment center as a gaming system.

Having owned several generations of PlayStation and Sega products, I found a few new consoles catching my eye. While the most "open" solution, the Steambox, sort of fizzled out, Nintendo's latest console, the Switch, does seem to stand out from the crowd. The balance between power and portability looks like a good fit, and given Nintendo's previous successes, it wouldn't be surprising if it became a hit.

In addition to serving the separate home and mobile gaming markets, new entertainment mechanisms need to provide seamless integration between the two environments, as well as offer comprehensive data and information access capabilities. After all, what would be the point of a gaming tablet if you couldn't watch YouTube on it! Neal Stephenson recently touched on this in his latest TechCrunch talk, expressing a vision of technology that is more integrated and synergized with our immediate environment. While mobile solutions these days offer a lot in terms of processing power, nothing quite offers the comfort or immersion that a console / home entertainment solution provides (not to mention that mobile phones are horrendous interfaces for gaming purposes!)

Being the geek that I am, this naturally led me to thinking about developing a hybrid mechanism of my own, based on open / existing solutions so that it could be prototyped and demonstrated quickly. Having recently bought a Raspberry Pi (after putting my Arduino to use in my last microcontroller project), and a few other odds and ends, I whipped up the following:

The idea is simple: the Raspberry Pi would act as the 'console', with a plethora of games and 'apps' available (via open repositories, Steam, emulators, and many more... not to mention Nethack!). It would be anchorable to a wall, desk, or any other surface using a 3D-printed mount, and made portable via a cheap wireless controller / LCD display / battery pack setup (tied together through another custom 3D-printed bracket). The entire rig would be quick to assemble and easy to use: simply snap the Pi into the wall mount to play on your TV; remove it and snap it into the controller bracket to take it on the go.

I suspect the power component is going to be the most difficult to nail down; finding an affordable USB power source that is lightweight but offers sufficient juice to drive the Raspberry Pi with an LCD might be tricky. But if this is done correctly, all components will be interchangeable, and one can easily plug in a lower-power microcontroller and/or custom hardware component for a tailored experience.

If there is any interest, let me know via email. If 3 or so people commit, this could be done in a weekend! (stay tuned for updates!)


2016 – My Year in Review

Posted by Justin W. Flory on February 17, 2017 08:30 AM

Before looking too far ahead to the future, it’s important to spend time reflecting on the past year’s events, identifying successes and failures, and devising ways to improve. Finding the right words to describe my 2016 is a challenge. This post continues a habit I started last year with my 2015 Year in Review. One thing I discover nearly every day is that I’m always learning new things from various people and circumstances. Even though 2017 is already underway, I want to reflect back on some of these experiences and opportunities of the past year.


When I started writing this in January, I read freenode’s “Happy New Year!” announcement. Even though their recollection of the year began as a negative reflection, the freenode team did not fail to find some of the positives of the year as well. The attitude in their blog post mirrors that of many others today. 2016 brought more than its share of sadness, fear, and a bleak unknown, but the colors of radiance, happiness, and hope have not faded either. Even though some of us celebrated the end of 2016 and its tragedies, two thoughts stay in my mind.

First, it is fundamentally important for all of us to stay vigilant and aware of what is happening in the world around us. The changing political atmosphere of the world has brought a shroud of unknowing, and the changing of a number does not and will not signify the end of these doubts and fears. 2017 will bring its own series of unexpected events. I don’t consider this a negative, but in order for it not to become one, we must remain constantly active and aware.

Second, despite the bleaker moments of this year, there has never been a more important time to embrace the positives of the past year. For every hardship faced, there is an equal and opposite reaction. Love is all around us, sometimes where we least expect it. Spend extra time this new year remembering the things that brought you happiness in the past year. Hold them close, but share that light of happiness with others too. You might not know how much it’s needed.

First year of university: complete!

Many things have changed since I decided to pack up my life and go to a school a thousand miles away from my hometown. In May, I officially finished my first year at the Rochester Institute of Technology, completing the full year on dean’s list. Even though it was only a single year, the changes stemming from my decision to make the move are incomparable. Rochester exposed me to amazing, brilliant people. I’m connected to organizations and groups based on my interests like I never imagined. My courses are challenging, but interesting. If there is anything I am appreciative of in 2016, it is the opportunities that have presented themselves to me in Rochester.

Adventures into FOSS@MAGIC

On 2016 Dec. 10th, the “FOSS Family” went to dinner at a local restaurant to celebrate the semester

My involvement with the Free and Open Source Software (FOSS) community at RIT has grown exponentially since I began participating in 2015. I took my first course in the FOSS minor, Humanitarian Free and Open Source Software Development, in spring 2016. In the following fall 2016 semester, I became the teaching assistant for the course. I helped show our community’s projects at Imagine RIT. I helped carry the RIT FOSS flag in California (more on that later). The FOSS@MAGIC initiative was an influencing factor in my decision to attend RIT and continues to have an impact on my life as a student.

I eagerly look forward to future opportunities for the FOSS projects and initiatives at RIT to grow and expand. Bringing open source into more students’ hands excites me!

I <3 WiC

With a new schedule, the fall 2016 semester marked the beginning of my active involvement with the Women in Computing (WiC) program at RIT, as part of the Allies committee. Together with other members of the RIT community, we identify issues in our community, discuss them and share experiences, and find ways to advance the WiC mission: to promote the success and advancement of women in their academic and professional careers.

WiCHacks 2016 Opening Ceremony

In spring 2016, I participated as a volunteer for WiCHacks, the annual all-female hackathon hosted at RIT. My first experience with WiCHacks left me impressed by all the hard work of the organizers and the entire atmosphere of the event. After participating as a volunteer, I knew I wanted to become more involved with the organization. Fortunately, fall 2016 enabled me to become more active and engaged with the community. Even though I will be unable to attend WiCHacks 2017, I hope to help support the event in any way I can.

Also, hey! If you’re a female high school or university student in the Rochester area (or willing to do some travel), you should seriously check this out!

Google Summer of Code

Google Summer of Code (GSoC) is an annual program run by Google, which works with open source projects to offer stipends that pay students to work on them over the summer. In a last-minute decision to apply, I was accepted as a contributing student to the Fedora Project. My proposal was to work within the Fedora Infrastructure team to help automate the WordPress platforms with Ansible. My mentor, Patrick Uiterwijk, provided much of the motivation for the proposal and worked with me throughout the summer as I began learning Ansible for the first time. Over the course of the summer, my knowledge began to turn into practical experience.

It would be unfair for a reflection to count successes but not failures. GSoC was one of the most challenging and stressful activities I’ve ever participated in, and a complete learning experience for me. One area I noted I needed to improve on was communication. My failing point was not regularly communicating what I was working through, or stuck on, with my mentor and the rest of the Fedora GSoC community. GSoC taught me the value of asking questions often when you’re stuck, especially in an online contribution format.

On the positive side, GSoC helped formally introduce me to Ansible, and to a lesser extent, the value of automation in operations work. My work in GSoC helped enable me to become a sponsored sysadmin of Fedora, where I mostly focus my time contributing to the Badges site. Additionally, my experience in GSoC helped me when interviewing for summer internships (also more on this later).

Google Summer of Code came with many ups and downs, but I made it through and passed the program. I’m happy and fortunate to have received this opportunity from the Fedora Project and Google. I learned several valuable lessons that have impacted me and will continue to do so going forward in my career. I look forward to participating as a mentor or organizer for GSoC 2017 with the Fedora Project this year.

Flock 2016

Group photo of all Flock 2016 attendees outside of the conference venue (Photo courtesy of Joe Brockmeier)

Towards the end of summer, in the beginning of August, I was accepted as a speaker to the annual Fedora Project contributor conference, Flock. As a speaker, my travel and accommodation were sponsored to the event venue in Kraków, Poland.

Months after Flock, I am still incredibly grateful for receiving the opportunity to attend the conference. I am appreciative and thankful to Red Hat for helping cover my costs to attend, which is something I would never be able to do on my own. Outside of the real work and productivity that happened during the conference, I am happy to have mapped names to faces. I met incredible people from all corners of the world and have made new lifelong friends (who I was fortunate to see again in 2017)! Flock introduced me in-person to the diverse and brilliant community behind the Fedora Project. It is an experience that will stay with me forever.

To read a more in-depth analysis of my time in Poland, you can read my full write-up of Flock 2016.

On a bus to the Kraków city center with Bee Padalkar, Amita Sharma, Jona Azizaj, and Giannis Konstantinidis (left to right).

Maryland (Bitcamp), Massachusetts (HackMIT), California (MINECON)

The Fedora Ambassadors at Bitcamp 2016. Left to right: Chaoyi Zha (cydrobolt), Justin W. Flory (jflory7), Mike DePaulo (mikedep333), Corey Sheldon (linuxmodder)

2016 provided me the opportunity to explore various parts of my country. Throughout the year, I attended several conferences to represent the Fedora Project, the SpigotMC project, and the RIT open source community.

There are three distinct events that stand out in my memory. For the first time, I visited the University of Maryland for Bitcamp as a Fedora Ambassador, which also provided me an opportunity to see my nation’s capital for the first time. I visited Boston for the first time this year as well, for HackMIT, MIT’s annual hackathon, where I also participated as a Fedora Ambassador and met brilliant students from around the country (and even the world, with one student I met flying in from India for the weekend).

“Team Ubuntu” shows off their project to Charles Profitt before the project deadline for HackMIT 2016

Lastly, I also took my first journey to the US west coast for MINECON 2016, the annual Minecraft convention. I attended as a staff member of the SpigotMC project and a representative of the open source community at RIT.

All three of these events have their own event reports to go with them. More info and plenty of pictures are in the full reports.

Vermont 2016 with Matt

Shortly after I arrived, Matt took me around to see the sights and find coffee.

Some trips happen without prior arrangements and planning. Sometimes, the best memories are made by not saying no. I remember the phone call with one of my closest friends, Matt Coutu, at some point in October. On a sudden whim, we planned my first visit to Vermont to visit him. Some of the things he told me to expect made me excited to explore Vermont! And then in the pre-dawn hours of November 4th, I made the trek out to Vermont to see him.

50 feet up into the air atop Spruce Mountain was colder than we expected.

The instant I crossed the state border, I knew this was one of the most beautiful states I had ever visited. During the weekend, the two of us did things that I think only the two of us would enjoy. We climbed a snowy mountain to reach an abandoned fire watchtower, where we endured a mini blizzard. We walked through a city without a specific destination in mind, going wherever the moment took us.

We visited a quiet dirt road that led to a meditation house and cavern maintained by monks, where we meditated and drank in the experience. I wouldn’t classify the trip as high-energy or engaging, but for me, it was one of the most enjoyable trips I’ve embarked on yet. There are many things from that weekend that I still hold on to, to remember and reflect back on.

A big shout-out to Matt for always supporting me with everything I do and always being there when we need each other.

Martin Bridge may not be one of your top places to visit in Vermont, but if you keep going, you’ll find a one-of-a-kind view.

Finally seeing NYC with Nolski

Mike Nolan and I venture through New York City early on a Sunday evening

Not long after the Vermont trip, I purchased tickets to see my favorite band, El Ten Eleven, in New York City on November 12th. What began as a one-day trip to see the band turned into an all-weekend trip to see the band, see New York City, and spend some time catching up with two of my favorite people, Mike Nolan (nolski) and Remy DeCausemaker (decause). During the weekend, I saw the World Trade Center memorial site for the first time, tried some amazing bagels, explored virtual reality at Samsung’s HQ, and got an exclusive inside look at the Giphy office.

This was my third time in New York City, but my first time to explore the city. Another shout-out goes to Mike for letting me crash on his couch and stealing his Sunday to walk through his metaphorical backyard. Hopefully it isn’t my last time to visit the city either!

Finalizing study abroad

This may be cheating since it was taken in 2017, but this is one of my favorite photos from Dubrovnik, Croatia so far. You can find more like this on my 500px gallery!

At the end of 2016, I finalized a plan that was more than a year in the making. I applied and was accepted to study abroad at the Rochester Institute of Technology campus in Dubrovnik, Croatia. RIT has a few satellite campuses across the world: two in Croatia (Zagreb and Dubrovnik) and one in Dubai, UAE. In addition to being accepted, the university provided me a grant to further my education abroad. I am fortunate to have received this opportunity and can’t wait to spend the next few months of my life in Croatia. I am studying in Dubrovnik from January until the end of May.

During my time here, I will be taking 12 credit hours of courses. I am taking ISTE-230 (Introduction to Database and Data Modeling), ENGL-361 (Technical Writing), ENVS-150 (Ecology of the Dalmatian Coast), and lastly, FOOD-161 (Wines of the World). The last one was a fun one that I took for myself to try broadening my experiences while abroad.

Additionally, one of my personal goals for 2017 is to practice my photography skills. During my time abroad, I have created a gallery on 500px where I upload my top photos from every week. I welcome feedback and opinions about my pictures, and if you have criticism for how I can improve, I’d love to hear about it!

Accepting my first co-op

The last big break I had in 2016 was accepting my first co-op position. Starting in June, I will be a Production Engineering Intern at Jump Trading, LLC. I started interviewing with Jump Trading in October and even had an on-site interview that brought me to their headquarters in Chicago at the beginning of December. After meeting the people and getting a sense of the company’s culture, I was happy to accept a place on the team. I look forward to learning from some of the best in the industry and hope to contribute to some of the fascinating projects going on there.

From June until late August, I will be working full-time at their Chicago office. If you are in the area or ever want to say hello, let me know and I’d be happy to grab coffee, once I figure out where all the best coffee shops in Chicago are!

In summary

2015 felt like a difficult year to follow, but 2016 exceeded my expectations. I acknowledge and am grateful for the opportunities this year presented to me. Most importantly, I am thankful for the people who have touched my life in a unique way. I met many new people and strengthened my friendships and bonds with many old faces too. All of the great things from the past year would not be possible without the influence, mentorship, guidance, friendship, and camaraderie these people have given me. My mission is to always pay it forward to others in any way that I can, so that others are able to experience the same opportunities (or better).

2017 is starting off hot and moving quickly, so I hope I can keep up! I can’t wait to see what this year brings and hope that I have the chance to meet more amazing people, and also meet many of my old friends again, wherever that may be.

Keep the FOSS flag high.

The post 2016 – My Year in Review appeared first on Justin W. Flory's Blog.

Search and Replace The VIM Way

Posted by Mo Morsi on January 26, 2017 06:02 PM

Did you know that it is 2017 and the Vim editor still does not have a decent multi-file search and replace mechanism?! While you can always roll your own, it's rather cumbersome, and even though some would say this isn't in the spirit of an editor like Vim, a large community has emerged around extending it to behave more like a traditional IDE.

Having written about doing something similar via the command line a while back, and having recently refactored a large amount of code that involved lots of renaming, I figured it was time to write a plugin to do just that: rename strings across source files using grep and sed.
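For reference, the manual command-line workflow the plugin wraps can be sketched roughly like this (file names and identifiers are purely illustrative; the in-place `sed -i` form assumes GNU sed):

```shell
# Set up a tiny demo project (stand-in for a real source tree)
mkdir -p demo/src
printf 'int old_name = 1;\nreturn old_name;\n' > demo/src/a.c

# Preview the matches before touching anything
grep -rn 'old_name' demo/src

# Replace across every file that contains the identifier
grep -rl 'old_name' demo/src | xargs sed -i 's/old_name/new_name/g'

grep -rn 'new_name' demo/src   # verify the rename took effect
```

Run from the project root, this is exactly the kind of search-then-replace pipeline that becomes tedious to type by hand, which is the motivation for automating it inside Vim.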

Before we begin, it should be noted that this is of most use with a 'rooting' plugin like vim-rooter. By using this, you ensure Vim is always running in the root directory of the project you are working on, regardless of the file being modified. Thus all search & replace commands will be run relative to the top project directory.

To install vsearch, we use Vundle. Setup & installation of that is out of scope for this article, but I highly recommend familiarizing yourself with Vundle as it's the best Vim plugin management system (in my opinion).

Once Vundle is installed, using vsearch is as simple as adding the following to your ~/.vim/vimrc:

Plugin 'movitto/vim-vsearch'

Restart Vim and run :PluginInstall to install vsearch from GitHub. Now you're good to go!

vsearch provides two commands: :VSearch and :VReplace.

VSearch simply runs grep and displays the results, without interrupting the buffer you are currently editing.

VReplace runs a search in a similar manner to VSearch but also performs an in-memory string replacement using the specified args. This is displayed to the user, who is prompted for confirmation. Upon receiving it, the plugin then executes sed and reports the results.
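Based on the description above, a typical session presumably looks like the following (the exact argument syntax is an assumption on my part; check the plugin's README for the authoritative usage):

```vim
" Search for a string across the project; results are displayed
" without disturbing the buffer you are editing:
:VSearch old_name

" Search, preview the in-memory replacement, confirm, and let the
" plugin run sed across the matching files:
:VReplace old_name new_name
```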


Lessons on Aikido and Life via Splix

Posted by Mo Morsi on November 08, 2016 02:42 AM

Recently, I've stumbled upon splix, my new gaming obsession: a game with simple mechanics that unfold into a complex competitive challenge requiring fast reflexes and dynamic tactics.

At its core, the rule set is very simple:
- surround territory to claim it
- do not allow other players to hit your tail (you lose... game over)

While in your territory you have no tail, rendering you invulnerable. But during battles territory is always changing, and you don't want to get caught deep in an attack only to be surrounded by an enemy who swaps the territory alignment to his!

The simple dynamic yields an unbelievable amount of strategy & tactics to excel at, while at the same time requiring quick calculation and planning. A foolhardy player will just rush into enemy territory to attempt to capture squares and attack his opponent, but a smart player will bait his opponent into his sphere of influence through tactful strikes and misdirections.

Furthermore, we see age-old adages such as "better to run and fight another day" and the wisdom of pitting opponents against each other. Alliances are always shifting in splix; it takes only a single tap from any other player to end your game. So while you may be momentarily coordinating with another player to surround and obliterate a third, watch your back, as the alliance may dissolve at the first opportunity (not to mention the possibility of outside players appearing at any time!)

All in all, I've found careful observation and quick action to yield the most successful results on the battlefield. The ideal kill is from behind, on an opponent who has perilously invaded your territory deeply. Beyond this, lurking at the border so as to goad the enemy into a foolhardy / reckless attack is a robust tactic, provided you have built up the reflexes and coordination to quickly move in and out of territory which is constantly changing. Make sure you don't fall victim to your own trick and overpenetrate the enemy border!

Another tactic to deal with an overly aggressive opponent is to fall back slightly into your safe zone, then quickly return to the front afterwards, perhaps at a different angle or via a different route. Often a novice opponent will see the retreat as a sign of fear or weakness and become overconfident, penetrating deep into your territory in the hopes of securing a large portion quickly. By returning to the front at an unexpected moment, you will catch the opponent off guard and be able to destroy them before they have a chance to retreat to their safe zone.

Of course, if the opponent employs the same strategy, a player can take a calculated risk and drive a distance into the enemy territory before returning to the safe zone. By paying attention to the percentage of visible territory which the player's vulnerability zone occupies, and the relative position of the opponent, they should be able to gauge the distance to which they can extend while still ensuring a safe return. Taking large amounts of territory quickly is psychologically damaging to an opponent, especially one undergoing attacks on multiple fronts.

If all else fails against a strong opponent, a reasonable retreat followed by an alternate attack vector may result in success. Since in splix we know that a safe zone corresponds to only one enemy, if we can gauge or guess where they are, we can attempt to alter the dynamics of the battle accordingly. If we see that an opponent has stretched far beyond the mass of his safe zone via a single, thin channel, we can attempt to cut them off, preventing any retreat that does not cross our sphere of influence.

This dynamic becomes even more pronounced if we can encircle an opponent and start slowly reducing his control of the board. By methodically and gradually taking enemy territory, we can drive an opponent in a desired direction, perhaps towards a wall or another player.

Regardless of the situation, the true strategist will always be shuffling his tactics and actions to adapt to the board and set up the conditions for guaranteed victory. At no point should another player be underestimated or trusted. Even a new player with little territory can pose a threat to the top of the leaderboard given the right conditions and timing. The victorious will stay calm in the heat of the battle, and use careful observation, timing, and quick reflexes to win the game.

(Endnote: the game *requires* a keyboard. It can be played on a smartphone (by swiping), but the arrow keys yield the fastest feedback.)


How Minecraft got me involved in the open source community

Posted by Justin W. Flory on October 10, 2016 09:30 AM

This post was originally published on OpenSource.com.

When people first think of “open source”, their minds probably go straight to code: something technical that requires an intermediate understanding of computers or programming languages. But open source is a broad concept that goes beyond binary bits and bytes. Open source projects hold community participation in great regard; the community is a fundamental piece of a successful open source project. In my experience getting involved with open source, I began in the community and worked my way around from there. At the age of fifteen, I was beginning my open source journey and I didn’t even know it.

Gaming introduces open source

One of my strongest memories of a “gaming addiction” was when I was fifteen and a younger cousin introduced me to the game Minecraft. The game was in beta then, but I remember the sandbox style of the game entertaining the two of us for hours. What I discovered, though, was that playing the game alone became boring. Playing and mining with others made the experience more fun and meaningful. To do this, I learned I would have to host a server for my friends to connect to and play with me.

I originally used the “vanilla” Minecraft server software, but it was limited in what it could do and didn’t compare to other multiplayer servers in existence. They all seemed to be using something that offered more, so players could play games, cast spells, or do other unique things that would normally not be possible in the game. After digging, I discovered Bukkit, an open source Minecraft server software with an extensible API that lets developers change the multiplayer experience. I soon became wrapped up with Bukkit like a child with a new toy, except this toy had me digging through my computer to set up “port forwarding”, “NAT records”, and “static IP addresses”. I was teaching myself the basics of computer networking under the guise of creating a game server for my friends.

Over time, my Minecraft server hobby began to take up more and more time. More people began playing on my server, and I began searching for ways to improve its performance. After doing some digging, I discovered the SpigotMC project, shortened to just Spigot. Spigot was a fork of the Bukkit project that made specific enhancements to performance. After trialing it on my server, I found the performance gains were measurable, and I committed to using Spigot from then on.

Participating in SpigotMC

Before long, I began running into new challenges with managing my Minecraft server community, whether it was finding ways to scale or the best ways to build a community up. In October 2013, I registered an account on the Spigot forums to talk with other server owners and seek advice on ways I could improve. I found the community welcoming and eager to help me learn and improve. Several people in the community were owners of larger servers or developers of unique plugins for Spigot. In response to my detailed inquiries, they responded with genuine and helpful feedback and support. Within a week, I was already in love with the people and helpfulness of the Spigot community.

I became an active participant in the Spigot forum community. Through the project, I was introduced to IRC and how to use it to communicate with other server owners and developers. What I didn’t realize was a trend in my behavior: over time, I began shifting away from asking all the questions. Almost as if in a role reversal, I became the one answering questions and helping support other new server owners and developers. I became the one in an advisory role instead of the one always asking.

SpigotMC team at annual Minecraft convention, MINECON, in 2015

In April 2014, the project lead of Spigot reached out to me asking if I would consider a role as a community staff member. Part of my responsibilities would be responding to reports, encouraging a helpful and friendly community, and maintaining the atmosphere of the community. With as much prestige and honor as my sixteen-year-old self could muster, I accepted and began serving as a community moderator. I remember feeling privileged to serve the position – I would finally get to help the community that had done so much to help me.

Expanding the open source horizon

Through 2014 and 2015, I actively served as a moderator of the community, both in the forums and the IRC network for Spigot. I remained in the Spigot community as the project steadily grew. It was incredible to see how the project was attracting more and more users.

However, my open source journey did not end there. After receiving my high school diploma in May 2015, I had set my sights on the Rochester Institute of Technology, a school I noted as having the country’s only Free and Open Source Software minor. By coincidence, I also noticed that my preferred Linux distribution, Fedora, was holding its annual contributor conference in Rochester, a week before I would move in for classes. I decided I would make the move up early to see what it was all about.

Flock 2015 introduces Fedora

The summer passed, and before I knew it, I was packing up from my home outside of Atlanta, Georgia to leave for Rochester, New York. After fourteen hours of driving, I finally arrived and began moving into my new home. A day after I arrived, Flock was slated to begin, marking my first journey in Rochester.

Group photo of Fedora Flock 2015 attendees at the Strong Museum of Play

At Flock, I entered as an outsider. I was in an unfamiliar city with unfamiliar people and an open source project I was only mildly familiar with. It was all new to me. But during that week, I discovered a community of people who were united around four common ideals. Freedom, Friends, Features, First: the Four Foundations of the Fedora Project were made clear to me. The community members at Flock worked passionately towards advancing their project during the talks and workshops. And after the talks finished, they gathered together for hallway discussions, sharing drinks, and enjoying the presence of their (usually) internationally dispersed team. Without having ever attended a Fedora event before, I knew that the Four Foundations and the community behind Fedora were the real deal. Leaving Flock that year, I vowed to pursue becoming a part of this incredible community.

Pen to paper, keyboard to post

The first major step I took towards contributing to the Fedora Project was in September 2015, during Software Freedom Day. Then-Fedora Community Action and Impact Coordinator Remy DeCausemaker was in attendance representing Fedora. During the event, I reached out to the Fedora Magazine editorial team asking to become involved as a writer. By the end of September, I penned my first article for the Fedora Magazine, tying my experience in the Spigot community into Fedora: how to run a Minecraft server using Spigot.

My first step getting involved with the Fedora community was an exciting one. I remember feeling proud and excited to see my first article published on the front page, not only helping Fedora, but also helping Spigot. I realized then that it was relatively straightforward to contribute this kind of content, and I would keep writing about software I was familiar with for the Magazine.

As I continued writing posts for the Fedora Magazine, I became aware of another team forming up in Fedora: the Community Operations, or CommOps, team. I subscribed to their mailing list, joined the IRC channel, and attended the first meetings. Over time, I became wrapped up and involved with the community efforts within Fedora. I slowly found one thing leading to another.

Today in Fedora, I am a leading member of the Community Operations (CommOps) team, the editor-in-chief of the Fedora Magazine, a Marketing team member, an Ambassador for North America, a leading member of the Diversity Team, and a few other things.

Advice for other students

When you’re first getting started, it can sometimes be tough and a little confusing. As students getting involved with FOSS, there are a few challenges we might have to face, many of them around making the first steps into a new project. There are countless open source projects of various sizes, and they all do things a bit differently, so the process changes from project to project.

One of the most obvious challenges with getting involved is your personal experience level. Especially when getting started, it is easy to look at a large or well-known project and see all the work devoted there. There are smart and active people working on these projects, and many of their contributions are quite impressive! One concern I’ve seen other students face (including myself at first) is wondering how someone with beginner to moderate experience or knowledge can get involved, compared to some of these contributions from active contributors. If it’s a large project, like Fedora, it can be intimidating to think about where to start when there are so many things to do and areas to get involved with. And if you think of it all as one big project, it is intimidating and difficult to make that first step.

Break a bigger project into smaller pieces. Start small and look for something you can help with. A healthy open source project usually will have things like easyfix bugs that are good ones to start with if it’s your first time contributing. Keep an eye out for those if you’re getting started.

Another challenge you might face as a student or beginner to open source is something called imposter syndrome. For me, this was something I had identified with before I knew what it was. For a definition, I’ll pull straight from Wikipedia: a term referring to “high-achieving individuals marked by an inability to internalize their accomplishments and a persistent fear of being exposed as a ‘fraud’”.

Imposter syndrome can be a common feeling as you get involved with open source, especially if you compare yourself to some of the active and smart contributors you meet as you become involved. But you should also remember you are a student – comparing yourself or your contributions to a professional or someone with years of experience isn’t fair to yourself! It’s not apples-to-apples. Your contributions as you get involved with open source are worthy and valuable to a project regardless of how deep they go, how many there are, or how much time you spend on the project. Even if it’s a couple of hours in the week, that’s saving others those couple of hours and adding something to the project. A contribution is a contribution – it’s a bad idea to rate the worth of your contributions against others’.

Those are some of the challenges that are useful to know and understand as you become more involved with FOSS. If you know the challenges you are up against, it makes it easier to handle them as they come.

There are also benefits to contributing to open source as a student. Contributing to open source is a great way to take knowledge you have learned in classes and begin applying it to real-world projects, gaining experience along the way. That alone takes you to the next level as a student. And contributing to a project in the real world is unique experience that helps your future career outlook as well.

It’s also a great networking opportunity. In open source, you meet many incredible and smart people. In my time in Fedora, I’ve met many contributors and had various mentors help me get involved. I’ve made new friends and met people who I normally would never have had the opportunity to meet.

River boat cruise dinner with Fedora friends at Flock 2016

There are also opportunities for leadership in open source projects. Whether it’s just one task, one bug, or even a role, you might find that sometimes all it takes is someone willing to say, “I’ll do this!” to have leadership on something. It might be challenging or difficult at first, but it’s a great way for you to understand working in team environments, how to work effectively even if you’re remote, and how to break down a task and work on finding solutions for complex problems.

Lastly, it’s important for younger people to become more involved with open source communities. As students and younger community members, we add unique perspectives and ideas to open source projects. That is important to a healthy community, and any open source project worth contributing to should be welcoming and accepting of students who are willing to spend time working on the project and helping solve problems, whether they’re bugs, tasks, or other things. In short, there is absolutely a role for students in open source!

The post How Minecraft got me involved in the open source community appeared first on Justin W. Flory's Blog.

GSoC 2016: That’s a wrap!

Posted by Justin W. Flory on August 21, 2016 08:35 PM

Tomorrow, August 22, 2016, marks the end of the Google Summer of Code 2016 program. This year, I participated as a student for the Fedora Project working on my proposal, “Ansible and the Community (or automation improving innovation)“. You can read my original project proposal on the Fedora wiki. Over the summer, I spent time learning more about Ansible, applying the knowledge to real-world applications, and then taking that experience and writing my final deliverable. The last deliverable items, closing plans, and thoughts on the journey are detailed as follows.

Deliverable items

The last deliverable items from my project are two (2) git patches, one (1) git repository, and seven (7) blog posts (including this one).

Closing plans

At the end of the summer, I was using a private cloud instance in Fedora’s infrastructure for testing my playbooks and other resources. One of the challenges towards the end of my project was moving my changes from my local development instance into a more permanent part of Fedora’s infrastructure. Because of this, I had some issues with running them in a context and workflow specific to Fedora’s infrastructure and set-up (since I am not a sponsored member of the Fedora system administration group).

My current two patches were submitted to my mentor, Patrick. Together, we worked through some small problems with running my playbook in the context of Fedora’s infrastructure. There may still be some small remaining hoops to jump through for running it in production, but any remaining changes to be made should be minor. The majority of the work and preparation for moving to production is complete. This is also something I plan to follow up on past the end of the GSoC 2016 program as a member of the Fedora Infrastructure Apprentice program.

My patches should be merged into the ansible.git and infra-docs.git repositories soon.

Reflection on GSoC 2016

As the program comes to a close, there are a lot of valuable lessons I’ve learned and opportunities I’m thankful to have received. I want to share some of my own personal observations and thoughts in the hope that future students or mentors might find them useful in later years.

Planning your timeline

In my case, I spent a large amount of time planning my timeline for the project before the summer. Once the summer began, though, my original timeline proved too broad to provide smaller milestones to work towards. The timeline on my student application covered the big points, but it was difficult to work towards them at first. Creating smaller milestones and goals for the bigger tasks makes them easier to work through on a day-by-day basis and adds a sense of accomplishment to the work you are doing. It also helps shape direction for your work in the short term, not just the long term.

For an incoming Google Summer of Code student for Fedora (or any project), I would recommend creating the general, “big picture” timeline for your project before the summer. Then, if you are accepted and beginning your proposal, spend a full day creating small milestones for the bigger items. Try to map out accomplishments every week and break down how you want to reach those milestones throughout the week. I started using TaskWarrior with an Inthe.AM Taskserver to help me manage weekly tasks going into my project. But it’s important to find a tool that works for you. You should reach out to your mentor about ideas for tools. If possible, your mentor should also have a way to view your agenda and weekly tasks. This will help make sure your goals are aligned to the right kind of work you are doing for an on-time completion.

I think this kind of short-term planning or task management is essential for hitting the big milestones and being timely with your progress.

Regular communication

Consistent and frequent communication is also essential for your success in Google Summer of Code. This can be different depending on the context of how you are contributing to the project. For a normal student, this might just be communicating about your proposal with your mentor regularly. If you’re already an active contributor and working in other areas of the project, this might be spending extra time on communicating your progress on the GSoC project (but more on that specifically in the next section).

Regardless of the type of contributor you are, one thing is common and universal – be noisy! Ultimately, project mentors and GSoC program administrators want to be sure that you are spending the time on your project and making progress towards accomplishing your goals. If you are not communicating, you will run the highest risk of failing. How to communicate can vary from project to project, but for Fedora, here’s my personal recommendations.

Blog posts

Even for someone like me who spends a lot of time writing already, this can be a difficult thing to do. But no matter how hard it is to do it, this is the cornerstone for communicating your progress and leaving a trail for future students to learn from you as well. Even if you’ve had a difficult week or haven’t had much progress, take the time to sit down and write a post. If you’re stuck, share your challenges and share what you’re stuck on. Focus on any success or breakthroughs you’ve made, but also reflect on issues or frustrations you have had.

Taking the time to reflect on triumphs and failures is important not only for Google Summer of Code, but also looking past that into the real world. Not everything will go your way, and there will be times when you will face challenges that you don’t know how to resolve. Don’t burn yourself out trying to solve those kinds of problems alone! Communicate about them, ask for help from your mentors and peers, and make it an open process.

IRC check-ins

Whether in a public channel, a meeting, or a private one-on-one chat with your mentor, make sure you are both active and present in IRC. Make sure you are talking and communicating with your mentor on a regular basis (at a minimum, weekly). Taking the time to talk with your mentor about your challenges or progress helps them know what you’re up to and where you are in the project. It also gives them a chance to offer advice and oversight on your direction, and potentially steer you away from making a mistake or going in the wrong direction. It is demotivating to spend a lot of time on something and later discover it either wasn’t necessary or had a simpler solution than you realized.

Make sure you are communicating often with your mentor over IRC to make your progress transparent and to also offer the chance for you to avoid any pitfalls or traps that can be avoided.

Hang out in the development channels

As a Fedora Google Summer of Code student, there are a few channels that you should be present in on a regular basis (a daily presence is best).

  • #fedora-admin
  • #fedora-apps
  • #fedora-summer-coding
  • Any specific channel for your project, e.g. #fedora-hubs

A lot of development action happens in these channels, and people who can help you with problems are available there. This also provides you the opportunity to gain insight into what communication in an active open source project looks like. You should at least be present and reading the activity in these channels during the summer. Participation is definitely encouraged as well.

Balancing project with open source contributions

I think my single most difficult challenge with Google Summer of Code was balancing my proposal-specific contributions with the rest of my contributions and work in the Fedora Project. I believe I was in the minority of Google Summer of Code students, having applied to the program as an active member of the project almost a full year before the program began. Additionally, my areas of contribution in Fedora before GSoC were mostly unrelated to my project proposal, which aligned more with the degree and education I am pursuing. A lot of the technology I would be working with was new to me, and I had minimal knowledge about it before beginning the summer. As a result, this presented a unique set of challenges and problems I would face throughout my project.

The consequences of this were that I had to spend a lot more time researching and becoming familiar with the technology before advancing with creating the deliverable items. A great resource for me to learn about Ansible was Ansible for DevOps by Jeff Geerling. But I spent more time on learning and “trying out the tech” than I had anticipated.

This extra time spent on research and experimentation was in tandem with my ongoing contributions in other areas of the project, like Community Operations, Marketing, Ambassadors, the Diversity Team, and as of recently, the Games SIG. Balancing my time between these different areas, including GSoC, was my biggest challenge over the summer (along with a separate, part-time job on weekends). Separating out time for different areas of Fedora became essential for making progress on my project. What worked well for me was setting short-term goals (by the hour or day) that I wanted to hit and carry out. Until those goals were reached, I wouldn’t focus on anything other than those tasks.

Special thanks

I’m both thankful and grateful to those who have offered their mentorship, time, and guidance for me to be a member of the GSoC Class of 2016. Special thanks go to Patrick Uiterwijk, my mentor for the program. I’ve learned a lot from Patrick through these past few months and enjoyed our conversations. Even though we were both running around the entire week, I’m glad I had the chance to meet him at Flock 2016 (and hope to see him soon at FOSDEM or DevConf)! Another thanks goes to one of my former supporting mentors and program administrator Remy DeCausemaker.

I’m looking forward to another year and beyond of Fedora contributions, and can’t wait to see what’s next!

The post GSoC 2016: That’s a wrap! appeared first on Justin W. Flory's Blog.

The final week - GSoC Wrap Up

Posted by Sachin S. Kamath on August 15, 2016 09:05 AM

Happy Independence day, India!

Also, today marks the beginning of the GSoC deadline week. This post will wrap up what I have done during my internship period.

Community Bonding period

  • Figure out how fedmsg works

    fedmsg (FEDerated MeSsaGe bus) is a python package and API defining a brokerless messaging architecture to send and receive messages to and from applications.

fedmsg was used to gather messages for statistics generation.


  • Figure out how datagrepper works.

    Datagrepper is a web-app to retrieve historical information about messages on the fedmsg bus. It is a JSON API for the datanommer message store.

Datagrepper queries were made to retrieve messages for users; the results were later compiled into one bigger JSON file and rendered into other forms of output.
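As a rough sketch of the kind of query involved, the following builds a datagrepper request URL for one user's messages in a time window. The endpoint and parameter names follow the public datagrepper API; the `build_query` helper itself is hypothetical, not part of the tool.

```python
from urllib.parse import urlencode

# Public datagrepper endpoint (see the datagrepper API docs).
DATAGREPPER = "https://apps.fedoraproject.org/datagrepper/raw"

def build_query(user, start, end, rows_per_page=100):
    """Build a datagrepper URL for one user's messages in a time window.

    start and end are Unix timestamps. Datagrepper pages its JSON
    results, so a caller follows the 'pages' field in each response.
    """
    params = urlencode({
        "user": user,
        "start": int(start),
        "end": int(end),
        "rows_per_page": rows_per_page,
    })
    return "%s?%s" % (DATAGREPPER, params)

# Each response's 'raw_messages' list can then be merged across users
# into one larger JSON document and rendered into other formats.
url = build_query("jflory7", 1451606400, 1483228800)
```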


  • Familiarize myself with all the tools in the Toolbox.

    CommOps Toolbox is a set of tools that aims at automating tedious tasks. 

I had to deliver a tool which could be combined with the existing tools for the CommOps storytelling and metrics process.


Coding Period - Mid Term

1st Quarter:

  • Onboarding Series - Badge Identification

    • Onboarding is really important for large communities like Fedora. Until Fedora Hubs arrives, badges were chosen as an ideal way to track progress.

    • Started digging into information on badges and how they work.

  • Automated GSoC Reports

    • A tool was to be delivered that could initially give statistics for all the Fedora / Red Hat / Outreachy interns and would automatically generate CSVs and graphs based on a user's activity. Scaling the tool was pushed back for later.

    • Repo for data : https://github.com/sachinkamath/fedstats-data/

  • Badges .yaml definitions

    • This was pushed back, as the tool had to be added into the toolbox before the mid-term.
  • Automate Events Report Analysis

    • Bee's Script (to be uploaded) as a start

    • Parse a CSV and give rudimentary stats about users/fedmsgs using stats-tool: PyCon Data (PyCon US statistics were generated using this tool)

2nd Quarter:

  • Work on adding more features to the tool

    • More output options, such as markdown, gource and csv, were added during this period
  • Generate mid-term reports

Blog Posts for this period

Summer with Fedora

Let the Coding Begin

Getting fedstats production ready

Digging deep into datagrepper

Mid term Overview

Mid-term to Finals

Blog Posts for this period

Journey So Far

Understanding statscache

Improving statistics using python-fedora API

Final touches and road ahead

Identifying Fedora Contributors


Working with the amazing people over at Fedora was indeed a really good experience. In these three months, I collected around 51 badges.

One badge which I am really proud of is the black and white cookie badge, given to users who have helped 25 Fedorans. It has been awarded only 31 times so far. Cookies! \o/

Current badges rank, Gotta Badge 'Em All!

Current Repository statistics :

Identifying Fedora Contributors - Stats for Flock

Posted by Sachin S. Kamath on August 06, 2016 01:06 PM

Quoting the Fedora Wiki :

Flock is an annual conference for Fedora contributors to come together, discuss new ideas, work to make those ideas a reality, and continue to promote the core values of the Fedora Community: Freedom, Friends, Features, and First.

I was working on generating statistics for Flock this week. Bhagyashree (bee2502), my GSoC mentor, had delivered a talk on Fedora Contributors and Newcomers Onboarding, and I was assigned the task of generating statistics for the whole Fedora Community. At first thought, this seemed a pretty hectic thing to do. To accomplish it, I would need data on all the contributors from the beginning of fedmsg, i.e. from 2012. And I would have to find out when each user signed up for a FAS account and track his/her activity. Phew!

Now let's crunch the numbers :

Estimating Users :

Fedora Badges Statistics

It was pretty simple: the Fedora Badges front-end (Tahrir) suggested that there were around 41,000 FAS accounts which fedmsg was tracking and which had logged into the badges website. I assumed that for an account to be a contributor's, he/she should have logged into the badges system. Okay, so I have the count. What now?

Making sense out of the mess :

I fired up my tool and added an extra element to it: the topic field of the requests (org.fedoraproject.prod.fas.user.create), setting --start and --end to match the starting and ending dates of each year.

In simple terms, I am pulling the usernames of all those people who made their FAS account between the years 201x and 201(x+1) (from 2012 to 2016), one year at a time. This gives me the total count of FAS accounts made every year. I could have just taken the count value from the JSON for this, but I needed the usernames for later. Along with this, I also dumped the usernames into a file in the format {username : timestamp_of_creation}.

It looked something like this. I did this for all the years until I had 2012.json to 2016.json.
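A minimal sketch of that per-year pull, assuming the {username : timestamp} dump format described above (the helper names are illustrative, not the tool's actual code):

```python
import calendar
import json

# Topic used to filter FAS account-creation events (from the post).
TOPIC = "org.fedoraproject.prod.fas.user.create"

def year_window(year):
    """Return (start, end) Unix timestamps covering one calendar year, UTC."""
    start = calendar.timegm((year, 1, 1, 0, 0, 0))
    end = calendar.timegm((year + 1, 1, 1, 0, 0, 0))
    return start, end

def dump_year(year, messages, path=None):
    """Collect {username: timestamp_of_creation} for one year's messages
    and optionally write it out as e.g. 2012.json."""
    users = {m["username"]: m["timestamp"] for m in messages}
    if path:
        with open(path, "w") as f:
            json.dump(users, f)
    return users
```

The length of each yearly username dictionary then gives the per-year account count directly.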

This gave me the count of FAS accounts being made every year - and with some pygal magic, I got this :

Right click and select View Image for an interactive graph

Yearwise FAS Accounts

Pull, pull, pull :

Now that we have the usernames and the timestamp of creation, we can check if the user was active for a certain period or not. I did this by pulling the data of a user with the --start and the --end arguments in 3 different ways.

1) Check if the user was active immediately; for that, --start was set as T1 = (time of account creation) and --end was set as T2 = (time of account creation + timedelta of 2 weeks).

If the user had a count > 10, the user was then checked for activity between T2 and T2 + a timedelta of one month. If the user did not have any, a variable called slow_start was set to True for that user, who was subsequently checked for 6+ months of activity. Why? Because there are a lot of people who created a FAS account early and started contributing after a year or so. If the count was less than 10, the user was marked inactive. If the user had activity during this period, he/she was marked as a slow starter. And this is what I got after running the script:
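In code, those probe windows and the rough classification look something like this. This is a simplified sketch: the thresholds come from the description above, while the function names and the exact slow-starter follow-up are mine.

```python
from datetime import datetime, timedelta

TWO_WEEKS = timedelta(weeks=2)
ONE_MONTH = timedelta(days=30)

def initial_windows(created):
    """Windows used to probe a new account's early activity: the first
    two weeks after creation (T1..T2), then the following month."""
    t1, t2 = created, created + TWO_WEEKS
    return (t1, t2), (t2, t2 + ONE_MONTH)

def early_status(count_first_window, count_followup):
    """Simplified classification of a new account: active, slow starter
    (checked again later for 6+ months of activity), or inactive."""
    if count_first_window > 10:
        return "active" if count_followup > 0 else "slow_start"
    return "inactive"
```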

<script src="https://gist.github.com/sachinkamath/95cdd1f5587d5581f25938ead5a8ceeb.js"></script>

Identifying long-term and short-term contributors :

The following set of rules was followed for differentiating users:

Users considered inactive :

1) Users who have less than 10 fedmsg activity count
2) Users who have only created FAS
3) Users who made very few wiki edits + created a FAS account
4) not_category was set as fedbadges for such messages - so that the fedmsg activity won't exceed 10.

Users considered short term :

1) Users who have activity < 3 months.
2) People who have a considerable amount of fedmsg activity but none after a month.
3) not_category is again set as fedbadges here.

Users considered long term:

1) Users who have 3+ months of activity
2) Users who had no contributions for 6 months after creating a FAS account but then show a considerable amount of fedmsg activity after another 6 months or a year
3) The "Don't Call It a Comeback" badge is also considered: https://badges.fedoraproject.org/badge/dont-call-it-a-comeback
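Under stated assumptions (a message count and an activity span are already computed per user), the rules above reduce to a small function. This sketch leaves out the fedbadges not_category exclusion and the comeback-badge case:

```python
from datetime import timedelta

THREE_MONTHS = timedelta(days=90)

def classify(activity_span, message_count, wiki_edits_only=False):
    """Bucket a contributor using the rules listed above (simplified).

    activity_span is the time between the user's first and last fedmsg
    activity; message_count is their total fedmsg activity count.
    """
    if message_count < 10 or wiki_edits_only:
        return "inactive"
    if activity_span < THREE_MONTHS:
        return "short-term"
    return "long-term"
```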

After running this, I ended up with the following graph:

Right click and select View Image for an interactive graph

All of this ended up at Flock in the form of a presentation and, not to mention, I had a good sleep after crunching the stats :)

fedstats - Final touches and road ahead

Posted by Sachin S. Kamath on July 30, 2016 02:26 AM

GSoC Deadline is coming!

This week was meant for adding the final touches to the tool and tweaking it to get the statistics for Flock ready.

Final Touches

The only thing remaining was categorization of the output files. This had to be done because the files generated earlier were cluttering the main folder when too many users were pulled. A very elegant solution was to categorize them into folders by username, with a .gitignore entry for all the outputs. Although I had .gitignore entries earlier, I only organized the files this week.

This is how it basically works :

If the tool is called with the --group argument, the output is stored in <group>/<username>/<output_filename>; if the tool is called with the --user argument, the output goes to <username>/<output_filename>. Also, to avoid confusion and the overwriting of files, the default filename of every file is now <username>_main.<extension>. If a duplicate entry is found, a numeric suffix is automatically appended to the end.
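The path and de-duplication scheme can be sketched as follows (a hypothetical helper for illustration, not the tool's actual code):

```python
import os

def output_path(username, extension, group=None, existing=None):
    """Build the categorized output path described above.

    Output goes under <group>/<username>/ when --group is used, else
    under <username>/. The default name is <username>_main.<extension>;
    a numeric suffix is appended when that name is already taken.
    """
    existing = set(existing or ())
    base = "%s_main" % username
    name = "%s.%s" % (base, extension)
    counter = 1
    while name in existing:
        name = "%s_%d.%s" % (base, counter, extension)
        counter += 1
    parts = ([group] if group else []) + [username, name]
    return os.path.join(*parts)
```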

I also started prettifying and scrubbing the code. There were performance issues while grabbing group members, but now the JSON can be locally cached using the --mode json argument.

As of now, the develop branch stands at around 48 commits.

Road Ahead

The tool is currently written as a single script, and this needs to be addressed. I have started working on packaging the tool and am currently splitting the script into modules to get it ready for packaging. Although it is not on my GSoC timeline, I am going ahead with it anyway.

Post-GSoC Goals :

  • More powerful stats

    Comparison graphs, multi-threading, caching and more..

  • Package the tool

    The tool needs to get ready for PyPI and needs to be modularized.

  • Implement missing features in statscache

    Statscache does not have the graph features (yet). Also, it'll be great to combine it with FAS features for more powerful analytics of data like Count by group and so forth. There was a discussion on whether the tool should be migrated to statscache or not but considering the target audience, initial plan and the timeline of GSoC, it was scheduled for after GSoC.

  • Continue work with Onboarding

    Onboarding is a really long process, and I'm looking forward to improving the onboarding and join process of Fedora.

GSoC 2016: Moving towards staging

Posted by Justin W. Flory on July 29, 2016 03:50 PM

This week wraps up for July and the last period of Google Summer of Code (GSoC 2016) is almost here. As the summer comes to a close, I’m working on the last steps for preparing my project for deployment into Fedora’s Ansible infrastructure. Once it checks out in a staging instance, it can make the move to production.

Next steps for GSoC 2016

My last steps for the project are moving closer to production. Earlier this summer, the best plan of action was to use my development cloud instance for quick, experimental testing. Once a point of stability is reached, it would be tested on a staging instance of the real Fedora Magazine or Community Blog. Once reviewed and tested, it would work its way to production for managing future installations and upgrades for any WordPress platform in Fedora.

When the time comes to move it to production, I will file a ticket in the Infrastructure Trac with my patch file to the Ansible repository.

One last correction

One sudden difficulty I’ve found is using the synchronize module in my upgrade playbook. Originally, I was copying and replacing the files using the copy module, but I found synchronize offers a better solution, using rsync. However, after switching, I ran into a small error that had me hung up.

When running the upgrade playbook, it would trigger an issue with rsync requiring a TTY session to work as a privileged user. I found a filed bug for this in the Ansible repository. Fixing it required setting a specific flag in the server configuration when using rsync. To avoid doing this, I altered my upgrade playbook to avoid depending on a root user, instead using user and group permissions for the wordpress user. I’m working through smoothing out a few minor hiccups with the synchronize module today, mostly dealing with the directory not being found when executing the module, even though it exists.
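For illustration, a task following that approach might look roughly like this. This is a hypothetical sketch, not the actual playbook: the task name and paths are made up, and only the synchronize/become pattern reflects the approach described above.

```yaml
# Hypothetical sketch: run synchronize as the wordpress user instead of
# root, relying on user/group permissions (paths are illustrative).
- name: Sync new WordPress release into the web root
  synchronize:
    src: /srv/wordpress-upgrade/
    dest: /var/www/wordpress/
    recursive: yes
  become: yes
  become_user: wordpress
```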

Flock 2016

On Sunday, I’ll be flying out to Poland for Flock 2016, Fedora’s annual contributor conference. During Flock, I’ll meet several other Fedora contributors in person, including my mentor. We plan to set up the staging instance either later tonight or during Flock, depending on how time ends up going.

I’ll also be delivering a talk and hosting a workshop during the week! One of the workshops I’m hoping to attend is the Ansible best practices working session. I’ll be seeing if there’s anything I can glean there to build into the last week of the project.

The post GSoC 2016: Moving towards staging appeared first on Justin W. Flory's Blog.

Improving statistics using python-fedora API

Posted by Sachin S. Kamath on July 24, 2016 12:09 PM

I was working on adding the group scraping feature this week. This is something that was proposed in a recent CommOps meeting, originally for the CommOps retrospective wiki.

I was initially thinking of using statscache for this, but came across a few things that stopped me from doing so. Firstly, statscache is not deployed anywhere, which basically meant that I would have to pull the historic fedmsg data using fedmsg-hub for the first run, and anyone who wanted to use the tool would have to do the same. This tool is meant to be used by anyone as-is, and not everyone will have the resources or bandwidth to download all the fedmsg messages. Also, statscache lacks a feature for grouping users; I could only find a by_user count of messages. It made more sense to run the tool on Fedora Infracloud and grab data from it.

Since I could not use statscache, I initially tried scraping using Selenium and requests. After spending some quality time on it, I realized that I was getting bad responses from the server (sigh, CSRF token issues). After some research and IRC discussions, I came across the python-fedora API. It is an amazing API that can do almost anything. Using it, one can log into FAS and perform a lot of actions, like editing a profile, getting user info, etc.

In python-fedora, the FAS modules can handle logins, session caching, and user handling. I wrote a function that pulls all the users from a specific group, which looks something like this:

<script src="https://gist.github.com/sachinkamath/7f5a458a8793aaecc6fd472f40fa999d.js"></script>

And guess what, it worked like a charm. Okay, now for the login part. I had two choices: either prompt for the password or get it from a config file. I chose the latter because it makes automation easier. I ended up using ConfigParser to pull data from a cfg file.
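A minimal sketch of that approach with the standard library follows. The section and option names here are made up, not the tool's actual ones, and note that in Python 2 (current at the time of the post) the module is spelled ConfigParser rather than configparser.

```python
import configparser

# Illustrative config-file contents; a real deployment would keep this
# in something like fedstats.cfg, outside version control.
SAMPLE = """\
[fas]
username = commops-bot
password = not-a-real-password
"""

def load_credentials(text):
    """Parse FAS credentials out of config-file text."""
    parser = configparser.ConfigParser()
    parser.read_string(text)  # use parser.read("fedstats.cfg") for a real file
    return parser["fas"]["username"], parser["fas"]["password"]

user, password = load_credentials(SAMPLE)
```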

While doing this, I noticed a very interesting thing: the next time I ran the script, I modified my password a bit (the hacker in me prompted me to :p) and, surprisingly, it still worked. Session caching is amazing, isn't it? Shoutout to #fedora-admin for helping me understand that :)

And finally, I integrated it into the main script and added the argparse argument --group / -g to specify a group for which the data has to be generated. Of course, this should not be paired with --user, or it will throw an error. Also, all the internal errors, like a bad group name or incorrect credentials, are handled by python-fedora itself! Hurray. The script looks much better now! :)

Fig : Data being pulled for the CommOps group

And right now, the develop branch has about 45 commits.

Current repo commit count

I am looking forward to working on the Onboarding Series badges and YAML files next week, and to cleaning up and organizing the script files. By next week, the tool should be pycodestyle-ready :)

GSoC 2016 Weekly Rundown: Documentation and upgrades

Posted by Justin W. Flory on July 18, 2016 03:37 AM

This week and the last were busy, but I’ve made more progress towards creating the final, idempotent product for managing WordPress installations in Fedora’s Infrastructure for GSoC 2016. The past two weeks had me mostly writing the standard operating procedure / documentation for my final product, as well as diving deeper into handling upgrades with WordPress. My primary playbook for installing WordPress is mostly complete, pending one last annoyance.


The first complete draft of my documentation for managing WordPress installations in Fedora’s infrastructure is available on my Pagure repository. The guide covers deployment, including upgrades, as well as more notes about working with the playbooks. As my project work begins to finish, the documented procedure is an outline for the final work. It will also be expanded as I close out the project.

Installing new WordPress site

After testing on my development instance in the Fedora cloud, my playbook is able to successfully install multiple WordPress sites to various hosts (pending one caveat for automatically setting up MySQL databases). I was able to spin up multiple sites quickly and easily to a point where I was satisfied with how it worked.

One challenge I faced in this part was figuring out how to template the right information into the WordPress configuration file. I was originally going to use a variable file, but due to the issue of storing private information, I tried using external variables. After revisiting the idea with Patrick, I’m going to use a variables file with the information for each hypothetical installation. This file will then be stored in the private Ansible repository that holds server and application credentials.
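For illustration, such a variables file might look roughly like this (every name below is invented; the real file and its layout live in the private Ansible repository):

```yaml
# vars/communityblog.yml -- hypothetical sketch, not Fedora's actual layout
wordpress_site_url: communityblog.fedoraproject.org
wordpress_db_name: communityblog
wordpress_db_user: cblog
wordpress_db_password: "{{ cblog_db_password }}"  # kept in the private repo
```

A template task could then render these values into a wp-config.php template per installation.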

Determining SELinux flags and contexts was also challenging. I had to learn which ones to apply to WordPress for basic functionality to still work (particularly for things like uploading media files to the server and letting WordPress cron work as expected). I’m not wholly satisfied with how I implemented it yet, as I want to dig more into setting the contexts with different parts of modules like unarchive and file, if possible.
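One way to express the uploads context in Ansible looks something like the sketch below. The `sefcontext` module only appeared around Ansible 2.2, so availability depends on the version in use, and the path is an assumption:

```yaml
# Sketch: let WordPress write media uploads while SELinux stays enforcing
- name: Label the uploads directory writable for httpd
  sefcontext:
    target: '/srv/wordpress/wp-content/uploads(/.*)?'
    setype: httpd_sys_rw_content_t
    state: present

- name: Apply the context to existing files
  command: restorecon -Rv /srv/wordpress/wp-content/uploads
```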

Upgrading and master

The last significant task to handle is writing the playbook for handling upgrades for WordPress installations. There were two options originally available. The first option would be to allow upgrading via the WordPress admin panel. The second option would be writing a playbook to handle the upgrade. We opted for the second method as this will allow the files on the web server to be read-only, which will serve as an extra measure of hardened security.

I hope to have a playbook created in the next week to tackle upgrading an existing WordPress installation to a newer version. This will be the last significant task of my proposal, before I begin taking what I have so far and finding ways to integrate it into Fedora’s infrastructure.

One of these smaller but important tasks will be writing a “master” playbook to orchestrate the entire process of setting up a machine to run it (and referring to the necessary roles). Some of these roles I’ll be referring to are the httpd and mariadb roles.
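A "master" playbook of that shape might look roughly like this; the host group and the `wordpress` role name are placeholders, while `httpd` and `mariadb_server` follow the Infrastructure role names mentioned in these posts:

```yaml
# Hypothetical orchestration playbook
- name: Provision a WordPress host end to end
  hosts: wordpress_servers
  become: true
  roles:
    - httpd           # web server role from Fedora Infrastructure
    - mariadb_server  # database role from Fedora Infrastructure
    - wordpress       # the role/playbook developed in this project
```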

Moving towards Flock

With Flock fast approaching, I’m hoping to have the majority of my project work finished before then. Anything past Flock should mostly be tidying up or fully documenting any changes made in the last stretch. This is my target goal at the moment! I’m looking forward to being a part of Flock again this year and meeting many members of the Fedora community.

The post GSoC 2016 Weekly Rundown: Documentation and upgrades appeared first on Justin W. Flory's Blog.

Understanding the statscache daemon

Posted by Sachin S. Kamath on July 17, 2016 11:17 AM

The last two weeks were pretty hectic. I had to read a lot of documentation and code, fight spam, and recover from a failed Fedora upgrade. Phew, glad to finally be back up.

To start with, I worked less on the stats tool this month and concentrated more on the new things on my list. If you have been following my GSoC posts, you probably know that I have been working on a statistics tool for the summer interns. During the last CommOps meeting, we had a crazy idea: scaling the tool to an entire group/team, and later to community-wide stats. That really sounds ambitious, and it is. The tool currently uses datagrepper, from which HTTP requests can be made to retrieve historic fedmsg data. This method worked fine for the interns, as the weekly/monthly data of each of them did not cross 10 pages. However, it would be really slow to pull data for more than, say, 50 people from datagrepper (especially for those who have been doing a lot on koji and copr).
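For context, a datagrepper query is just an HTTP request against its /raw endpoint; a sketch of building one (the FAS username and page size are only examples):

```python
# Build a datagrepper query URL for one user's messages. Fetching more
# pages means repeating the request with an incremented "page" value,
# which is what makes large groups slow to pull.
from urllib.parse import urlencode

BASE = "https://apps.fedoraproject.org/datagrepper/raw"
params = {
    "user": "sachinkamath",  # example FAS username
    "rows_per_page": 100,
    "page": 1,
}
url = BASE + "?" + urlencode(params)
print(url)
```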

To solve this issue, statscache was built. Statscache is a daemon that builds and keeps fedmsg statistics. It basically listens on the fedmsg hub waiting for messages. As soon as it encounters a message, the message is passed on to the plugins, which evaluate the topic and store the statistics and the relevant parts of the raw_message locally. For statscache to function as intended, it requires statscache-plugins; it is the plugins that do all the hard work of maintaining statistics. You could say statscache and statscache-plugins are made for each other :)

Deploying statscache locally is fairly simple:

$ git clone https://github.com/fedora-infra/statscache
$ cd statscache
$ python setup.py develop

And the plugins likewise:

$ git clone https://github.com/fedora-infra/statscache_plugins
$ cd statscache_plugins
$ python setup.py develop

After this is done, we need to gather the fedmsg messages. To do that, run fedmsg-hub in the main statscache repo (to install it, run sudo dnf install fedmsg-hub). You can stop fedmsg-hub at any time, and statscache will keep the statistics for all the data gathered before you exited. Once messages have been collected, the Flask web server can easily be started by running python statscache/app.py. This should fire up the web front-end on http://localhost:5000. If everything was done correctly, something like this should be on your screen:

You can now head over to the dashboard and see the plugins in action. For instance, you can see the volume of data each category received using the volume-by-category plugin, which looks something like this:

A message's category is identified using its fedmsg topic name. Every category of fedmsg has a unique topic name assigned to it. For example, if someone opens a new issue on Pagure, the topic will be org.fedoraproject.prod.pagure.issue.new, where org.fedoraproject.prod is common to all fedmsg topics, pagure indicates the interaction was made on Pagure, and the rest is self-explanatory. You can see all the topics here.
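The topic layout above can be illustrated with a few lines of Python (a toy helper for this post, not part of fedmsg itself):

```python
def parse_topic(topic):
    """Split a fedmsg topic into (prefix, environment, category, action)."""
    parts = topic.split(".")
    prefix = ".".join(parts[:2])   # "org.fedoraproject" -- common to all topics
    env = parts[2]                 # "prod", "stg", ...
    category = parts[3]            # the app, e.g. "pagure"
    action = ".".join(parts[4:])   # what happened, e.g. "issue.new"
    return prefix, env, category, action

print(parse_topic("org.fedoraproject.prod.pagure.issue.new"))
```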

Now, I am currently working on devising a way to auto-generate statistics of all users of a FAS group. I'll make a new post as soon as I make progress here. Till then, Happy Hacking to me :)

GSoC 2016 Weekly Rundown: Breaking down WordPress networks

Posted by Justin W. Flory on July 02, 2016 08:27 AM

This week, with an initial playbook for creating a WordPress installation created (albeit needing polish), my next focus was to look at the idea of creating a WordPress multi-site network. Creating a multi-site network would offer the benefits of only having to keep up a single base installation, with new sites extending from the same core of WordPress. Before making further refinements to the playbook, I wanted to investigate whether a WordPress network would be the best fit for Fedora.

Background for Fedora

Understanding the background context for how WordPress fits into the needs for Fedora is important. There are two sites powered by WordPress within Fedora: the Community Blog and the Fedora Magazine. Each site uses a different domain (communityblog.fedoraproject.org and fedoramagazine.org, respectively).

At the moment, there are not any plans to set up or offer a blog-hosting service to contributors (and for good reason). The only two websites that would receive the benefits of a multi-site network would be the Community Blog and the Magazine. For now, the intended scale of expanding WordPress into Fedora is to these two platforms.

Setting up the WordPress network

To test the possibilities of using a network for our needs, I used a development CentOS 7 machine for my project testing purposes. There are some guidelines on creating networks to read first before proceeding. After reading these, it was clear the approach to take was the domain method. I moved on to the installation guide on the development machine.

GSoC 2016 - Adding sites to WordPress network

I wanted to document the process I was following for the multi-site network, so I created a short log file of my observations and information I found as I proceeded.

One of the time burners of this section was picking up Apache again. A few years ago, I switched my own personal web servers to nginx from Apache. Fedora’s infrastructure uses Apache for its web servers. It took me a little longer than I had hoped to get familiar with it again, mostly with virtual hosts and SELinux contexts for WordPress media uploads. Despite the extra time it took with Apache, I feel like this will save me time later when I am working on polishing the final deliverable or working with the Apache roles available.

In addition to this, I also picked out the dependencies for WordPress, such as the PHP packages needed and setting up a MariaDB database. After a while, I was able to get the WordPress network established and running on the development machine. It was convenient having a testable interface at my fingertips to work with.

WordPress network: Conclusion?

At the end of my testing and poking around, it appeared to me that there would not be an easy solution to using a WordPress network for Fedora. The network had the best ability when set up to use wildcard sub-domains, which wouldn’t be a plausible solution for us because of the two different domains. There were more manual ways of doing it (i.e. not in the WordPress interface) with Apache virtual hosts. However, I felt like it would be easier to write one playbook that handles a single WordPress installation, and can be run for both sites separately (or new sites).

Given that the scale is two websites, I think maintaining two separate WordPress installations will be the easier method, saving time and keeping things efficient.

This week’s challenges

This week had a late start for me on Wednesday due to traveling on a short vacation with my family from Sunday to Tuesday. Coming back from the trip, I also have a new palette of responsibilities that I am assisting with in Community Operations and Marketing, following decause’s departure from Red Hat. I’m still working on finding a healthy balance of time and focus between other important tasks I am responsible for and my project work.

I’m hoping that having a full week will allow me to make further progress and continue to overcome some of the challenges that have arisen in the past few weeks.

Next week’s goals

For next week, I’m planning on focusing on my existing product and making it feel and run more like a “Fedora playbook”. I mostly want to work on saving unnecessary effort and being consistent by tapping into the existing Ansible roles in Fedora Infrastructure. This would make setting up an Apache web server, MySQL database, and a few other tasks more automated. It keeps the tasks and organization in a consistent manner as well since they are across Fedora’s infrastructure already.

By next Friday, the plan is to have a more idempotent product that runs effectively and as expected in my development server. Beyond that, the next step would be to work on getting my site into a staging instance.

The post GSoC 2016 Weekly Rundown: Breaking down WordPress networks appeared first on Justin W. Flory's Blog.

GSoC - Journey So Far ( Badges, Milestones and more..)

Posted by Sachin S. Kamath on June 29, 2016 07:37 AM

Two days ago, I woke up to a mail from Google saying that I had passed the mid-term evaluations of GSoC and could continue working towards my final evaluation. "What a wonderful way to kick-start a day," I thought.

Google Summer of Code Mid Term E-Mail

Image : E-mail from Google Summer of Code

Working on the statistics tool was an amazing experience. You can browse my previous posts for a very detailed idea of what I've been working on. Apart from all the code written, I also got the opportunity to communicate with a lot of amazing people in the Fedora community, as well as get bootstrapped into the fedora-infrastructure team (and got an awesome badge for it).

Getting sponsored into the fi-apprentice group allows one to access the Fedora Infrastructure machines and their setup in read-only mode (via SSH) and understand how things are done. However, write access is only given to those in the sysadmin group, who are generally the l33t people in #fedora-admin on Freenode ;)

Apart from that, I got the opportunity to attend a lot of CommOps Hack sessions and IRC Meetings where we discussed and tackled various pending tickets from the CommOps Trac. We are currently working on Onboarding Badges Series for various Fedora groups. Onboarding badges are generally awarded to all those who complete a set of specific tasks (pre-requisites) and get sponsored into the respective Fedora group. One such badge is to be born very soon - take a look at the CommOps Onboarding ticket here.

Life-Cycle of a badge :

Getting a new badge ready is not an easy task. A badge is born when someone files a badge idea in the Fedora Badges Trac. The person who files the ticket is expected to describe the badge as accurately as possible, including the description, name and particulars. After that is done, someone needs to triage the badge and suggest changes to it (if required). The triager is also expected to fill in additional details about the badge so that the YAML can be written to automate the badge awarding process. The next step is to write the YAML definitions for the badge and attach initial concept artwork. This is reviewed by the fedora-design team and is either approved or hacked on further. After approval, the badge is all set to be shipped. QR codes may be printed to manually award the badge, especially when it is an event badge.

Having talked about how badges are made, here are the badges I was awarded during my GSoC period:

Image : Coding period badges (and counting ..)

Badges are a great way to track your progress in the community. They are also super-useful for new contributors, who can treat badges as goals to work towards. Read bee's blog post about it here.

To keep a check on myself, I have compiled all my data over here. This repo has all the things I have done inside the community, along with SVG graphs that hold the metrics. Hoping to have a great summer ahead.

Useful Links for new contributors :

You can also find me hanging out in #fedora-commops on Freenode. Feel free to drop-in and say hello :)

GSoC Mid Term Evaluation

Posted by Tummala Dhanvi on June 29, 2016 05:16 AM

tldr; I have failed the midterm evaluation of GSoC but I am continuing to complete the project

I am sorry to let you guys know that I have failed the midterm evaluation. I reached my goals, but I should have been further ahead and doing more. It was my mistake: I should have done more work, and I obviously didn't communicate well. IMHO, though, it shouldn't have made me fail, as I did at least some good work (I thought of contacting Google but put the idea down because the mistake was on my side!).

Here is the review of my project given to me by my mentor, Zach.

Tummala, it has been recommend that you fail at the midterm because of your lack of communication with myself as your mentor and the rest of the community. When we started this I gave you a list of requirements and goals, you did not follow thru on the requirements even after we talked about it several times, while you managed to meet the goals of the first half, you wasted a lot of time and should be much further along. We talked during the bonding phase and I explained to you that packaging the source code required was your minimum goal, but that it was only the goal to make sure you got up to speed and did not have any issues with this complex part of the process. From your reports to me, and your communication with others it is clear that you spent very little time working over the first half and had you applied yourself you should have been able to make much more progress. In order to be a successful member of an open source community you need to learn and appreciate the importance of communication with the wider community. You lack of communication left many people questioning what you did during this period, and had you communicated better with myself and others we could have identified issues and helped you to stay engaged. I appreciate the work you have done, and have enjoyed working with you. Please do not let this deter you from continuing to engage in open source communities including Fedora, but as you do keep in mind how important open communication is to the success of a project.

As Zach mentioned, I am taking this as a learning process and am continuing the project.

I could take this as an example of "FAIL FAST, FAIL OFTEN": it's better that I failed as a student, where I can learn a lot from my mistakes, than failing at my first job or not doing anything!

But I will continue to work on the project.


Filed under: fedora, GSOC, gsoc2016, Uncategorized

GSoC 2016 Weekly Rundown: Assembling the orchestra

Posted by Justin W. Flory on June 24, 2016 04:34 PM

This week is the Google Summer of Code 2016 midterm evaluation week. Over the past month since the program started, I’ve learned more about the technology I’m working with, implementing it within my infrastructure, and moving closer to completing my proposal. My original project proposal details how I am working with Ansible to bring improved automation for WordPress platforms within Fedora, particularly to the Fedora Community Blog and the Fedora Magazine.

Understanding background

My project proposal originated from a discussion based on an observation about managing the Fedora Magazine. Fedora’s infrastructure is entirely automated in some form, often times using Ansible playbooks to “conduct” the Fedora orchestra of services, applications, and servers. However, all the WordPress platforms within Fedora are absent from this automated setup. This has to do with the original context of setting up the platforms.

However, now that automation is present in so much of the Infrastructure through a variety of tasks and roles, it makes sense to merge the two existing WordPress platforms in Fedora into the automation. This was the grounds for my proposal back in March, and I’ve made progress towards learning a completely new technology and learning it by example.

Initial research

GSoC 2016: "Ansible For DevOps" as a learning resource

From the beginning, I’ve used two resources as guides and instructions for GSoC 2016. “Ansible For DevOps“, a book by Jeff Geerling, has played a significant part in bootstrapping me with Ansible and its ins and outs. I’m about halfway through the book so far, and it has helped profoundly with learning the technology. Special thanks to Alex Wacker for introducing me to the book!

The second resource is, as one would expect, the Ansible documentation. The documentation for Ansible is complete and fully explanatory. Usually if there is an Ansible-specific concept I am struggling with learning, or finding a module for accomplishing a task, the Ansible documentation helps point me in the right direction quickly.

Research into practice

After making some strides through the book and the documentation, I began turning the different concepts into practical playbooks for my own personal infrastructure. I run a handful of machines for different purposes, ranging from my Minecraft server, a ZNC bouncer, some PHP forum websites, and more. Ever since I began using headless Linux servers, I’ve never explored automation too deeply. Every time I set up a new machine or a service, I would configure it all manually, file by file.

First playbook

After reading more about Ansible, I began seeing ways I could try automating things in my “normal” setup. This helped give a way to ease myself into Ansible without overwhelming myself with too large of tasks. I created repositories on Pagure for my personal playbooks and Minecraft playbooks. The very first one I wrote was my “first 30 minutes” on a new machine. This playbook sets up a RHEL / CentOS 7 machine with basic security measures and a few personal preferences ready to go. It’s nothing fancy, but it was a satisfying moment to run it in my Vagrant machine and see it do all of my usual tasks on a new machine instantly.
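As a rough illustration, a "first 30 minutes" playbook of that kind might contain tasks like these (the specific tasks, paths and names below are examples of this style of hardening, not the author's actual playbook):

```yaml
# Sketch of a "first 30 minutes" playbook for a fresh RHEL/CentOS 7 host
- name: Bootstrap a new machine
  hosts: new_servers
  become: true
  tasks:
    - name: Apply all pending updates
      yum:
        name: '*'
        state: latest

    - name: Create an unprivileged admin user
      user:
        name: admin
        groups: wheel
        append: yes

    - name: Forbid root logins over SSH
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
```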

For more information on using Ansible in a Vagrant testing environment, check out my blog post about it below.

Setting up Vagrant for testing Ansible


Moving to Minecraft

After writing the first playbook, I moved on to other areas I could try automating to improve my “Ansible chops”. Managing my Minecraft server network is one place where I recognized I could improve automation. I spend a lot of time repeating the same sort of tasks, and having an automated way to do them would make sense.

I started writing playbooks for adding and restarting Minecraft servers based on the popular open source server software, Spigot. Writing these playbooks helped introduce me to different core modules in Ansible, like lineinfile, template, copy, get_url, and more.
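For example, two of those modules in hypothetical Minecraft tasks (the variable name and paths are invented for illustration):

```yaml
# Illustrative task fragments using get_url and lineinfile
- name: Download the server jar
  get_url:
    url: "{{ server_jar_url }}"   # placeholder variable
    dest: /srv/minecraft/server.jar

- name: Accept the Minecraft EULA
  lineinfile:
    dest: /srv/minecraft/eula.txt
    line: eula=true
    create: yes
```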

I have also been using sites like ServerFault to find answers for any starting questions I have. Some of the changes between Ansible 1.x and 2.x caused some hiccups in one case for me.

Using Infrastructure resources

After getting a better feel for the basics, I started focusing less on my infrastructure and more on the project proposal. One of the key differences from me writing playbooks, roles, and tasks for my infrastructure is that there are already countless Ansible resources available from Fedora Infrastructure. For example, to create a WordPress playbook for Fedora Infrastructure, I would want to use the mariadb_server role for setting up a database for the site. Doing that in my playbook (or writing a separate role for it just for WordPress) would increase the difficulty of maintaining the playbooks and make it inconvenient for other members of Fedora Infrastructure.

Creating a deliverable

In my personal Ansible repository, I have begun constructing the deliverable product for the end of the summer. So far, I have a playbook that creates a basic, single-site WordPress installation. The intention for the final deliverable is to have a playbook for creating a “base” installation of a WordPress network, and then any other tasks for creating extra sites added to the network. This will make sure that any WordPress sites in Fedora are running the same core version, receive the same updates, and are consistent in administration.

I also intend to write documentation for standing up a WordPress site in Fedora based on my deliverable product. Fortunately, there is already a guide on writing a new SOP, so after talking with my mentor, Patrick Uiterwijk, on documentation expectations and needs next week, I will be referring back to this document as a guide for writing my own.

Reflection on GSoC 2016 so far

I was hoping to have advanced further by this point, but due to learning bumps and other tasks, I wasn’t able to move at the pace I had hoped. However, since starting GSoC 2016, I’ve made some personal observations about the project and how I can improve.

  • Despite being behind from where I wanted to be, I feel I am at a point where I am mostly on track and able to work towards completing my project proposal on schedule.
  • I recognize communication on my progress has not been handled well, and I am making plans to make sure shorter, more frequent updates are happening at a consistent and regular basis. This includes a consistent, weekly (if not twice every week) blog post about my findings, progress, commits, and more.
  • After talking with Patrick this week, we are going to begin doing more frequent check-ins about where I am in the project and making sure I am on track for where I should be.

Excerpt from GSoC 2016 evaluation form

As one last bit, I thought it would be helpful to share my answers from Google’s official midterm evaluation form from the experience section.

“What is your favorite part of participating in GSoC?”

“Participating in GSoC gave me a means to continue contributing to an open source community I was still getting involved in. I began contributing to Fedora in September 2015, and up until the point when I applied for GSoC, I had anticipated having to give up my activity levels of contributing to open source while I maintained a job over the summer. GSoC enabled me to remain active and engaged with the Fedora Project community and it has kept me involved with Fedora.

The Fedora Project is also a strong user of Ansible, which is what my project proposal mostly deals with. My proposal gives me a lot of experience and the opportunity to learn new technology that not only allows me to complete my proposal, but also understand different levels and depths of contributing to the project far beyond the end of the summer. With the skills I am learning, I am being enabled as a contributor for the present and the future. To me, this is exciting as the area that I am contributing in has always been one that’s interested to me, and this project is jump-starting me with the skills and abilities needed to be a successful contributor in the future.

GSoC is also actively teaching me lessons about time management and overcoming challenges of working remote (which I will detail in the next question). I believe the experience I am getting now from participating in GSoC allows me to improve on myself as an open source developer and contributor and learn important skills about working remotely with others on shared projects.”

“What is the most challenging part of participating in GSoC?”

“The hardest part for me was (is) learning how to work remotely. In the past, when I was contributing at school, I had resources available to me where I could reach out to others nearby for assistance, places I could leave to focus, and a more consistent schedule. Working from home has required me to reach out for help either by improving how well I can search for something or reaching out to others in the project community about how to accomplish an objective.

There are also different responsibilities at home, and creating a focused, constructive space for me to focus on project work is an extremely important part of helping me accomplish my work. Learning to be consistent in my own work and setting my own deadlines is a large part of what I’m working on doing now. Learning the ability to follow and set personal goals for working on the project was a hard lesson to learn at first, but finding that balance quickly and swiftly is something that is helping me move forward.”

The post GSoC 2016 Weekly Rundown: Assembling the orchestra appeared first on Justin W. Flory's Blog.

Example of newer document writing for Fedora-docs

Posted by Tummala Dhanvi on June 23, 2016 07:30 AM

tldr; How to build documentation using asciidoc and pintail

If you have read the FAD report, you will know that we are moving away from the Publican flow and using pintail to build the asciidoc sources. In this post we will build an example doc.

I assume that you are using Fedora; if you are using any other GNU/Linux distribution, you will need to install most of these from source.

So let’s get an overview of how this works.

We write docs in AsciiDoc (.adoc) format, convert them into mallard (.page) format using the asciidoctor-mallard tool, and then convert (build) them into HTML using pintail, which makes it a two-step process. We can make it a one-step process using the pintail-asciidoc plugin, which does the same thing in the background.

So let’s build docs the first way, i.e. without using the plugin. For this we need two tools installed, asciidoctor-mallard and pintail, which are available in my COPR. You can install them like this:

sudo dnf copr enable dhanvi/asciidoctor-mallard

sudo dnf install rubygem-asciidoctor-mallard

sudo dnf copr enable dhanvi/pintail

sudo dnf install pintail

Then we can get started with writing. Let’s create a new directory and write an example doc:

mkdir asciidoc-example

cd asciidoc-example

touch example.adoc

Open example.adoc (note the extension: .adoc -> asciidoc) and paste these simple AsciiDoc lines. The Fedora-docs team and my mentor Zach are working on the guidelines for writing in asciidoc; more details here: https://pagure.io/documentation-guide/issues

= Document Title
Doc Writer <doc@example.com>
:doctype: book
:source-highlighter: coderay
:listing-caption: Listing

A simple http://asciidoc.org[AsciiDoc] document.

== Introduction

A paragraph followed by a simple list with square bullets.

* item 1
* item 2

Now let’s convert it into mallard (.page) using asciidoctor-mallard:

asciidoctor-mallard example.adoc

Ignore any warnings; asciidoctor-mallard still needs some fixes.

You will find an example.page file created when you run the above command; here are its contents:

<?xml version="1.0" encoding="UTF-8"?>
<page xmlns="http://projectmallard.org/1.0/" xmlns:its="http://www.w3.org/2005/11/its" xml:lang="en" type="topic" id="example">
<info>
<credit type="author">
<name>Doc Writer</name>
</credit>
</info>
<title>Document Title</title>
<p>A simple <link href="http://asciidoc.org">AsciiDoc</link> document.</p>
<section id="_introduction">
<title>Introduction</title>
<p>A paragraph followed by a simple list with square bullets.</p>
<list>
<item><p>item 1</p></item>
<item><p>item 2</p></item>
</list>
</section>
</page>

Now we convert the mallard (.page) file to HTML using pintail:

pintail init

pintail build

pintail init creates the config file, which controls how pintail builds the docs, lets you add plugins, etc.

pintail build creates the HTML files inside the pintail/build directory. Here is a screenshot of how the files look:

Screenshot from 2016-06-22 23:33:53

I know it needs a good template; we will work on that after we finish the continuous deployment. So this is how a basic asciidoc document is built using pintail.

Let’s do the same thing using the pintail-asciidoc plugin. You need to install it first, in addition to the packages mentioned above:

sudo dnf copr enable dhanvi/pintail-asciidoc

sudo dnf install python-pintail-asciidoc

We create a new directory and repeat the steps above, but this time the two-step conversion is no longer needed; you just run pintail build every time.

mkdir asciidoc-example2

cd asciidoc-example2

touch example.adoc

Open example.adoc in your favorite editor, add the same lines as above, and do these steps:

pintail init

echo "plugins = pintail.asciidoc" >> pintail.cfg

pintail build

You need to run the init and echo steps only once; after every change to example.adoc you just need to run pintail build. The echo command tells pintail to use the plugin we installed to build the AsciiDoc (.adoc) files.

That's all for this post; my next goal is to hack on git hooks and work on continuous deployment 🙂

Filed under: fedora, GSOC, gsoc2016

fedstats - A final overview

Posted by Sachin S. Kamath on June 21, 2016 06:01 AM

Mid-term evaluations of GSoC start today. It's been a month since it all started and I'd like to blog (brag) about what I've done so far.

To start with, here is the list of things I was assigned to work on until the mid-term, and the current status of each:

  1. Statistics Tool

    • Automate event report visualization
      STATUS : Check. The tool was used to generate user statistics for PyCon US and can be used for any future events. [Link]

    • Support for multiple output formats
      STATUS : Check. The tool is feature-rich, with support for SVG, PNG, text, Markdown, CSV and Gource.

    • Pull data of individual users / multiple users for reporting
      STATUS : Check. The tool uses datagrepper to generate statistics. If it's on datagrepper, it should be covered in the stats. Since the tool is argument-driven, stats can easily be generated for multiple users.

    • Document code to help future contributors
      STATUS : Check. The tool has a README which explains the features and workings of the tool in a concise and clear way. [Link to README]

  2. Onboarding Series

    • Identify onboarding badges
      STATUS : Check. The CommOps onboarding badge is now a ticket in the Design Trac.

    • Identify steps to make onboarding better
      STATUS : Check. The discussion is scattered across all the CommOps meetings that we have had this month.

  Stats-Tool : Overview

    Features :

    • Gather data from datagrepper for analyzing and visualizing data.

    • Support for SVG, PNG, CSV, Markdown, text and Gource-style outputs. (Working on PDF and HTML - HTML is currently halted due to a bug in the python-grip module. Waiting for the developer's response on it.)

    • Generate category-wise reports (pie / donut charts).

    This week's work:

    • Added support for Gource-style outputs (inspiration - fedmsg2gource).

    • The text output mode now branches into two. The default text mode only prints the statistics to a text file, and when combined with --log, it dumps the category-wise activities into the text file.

    • Extended the scope of the tool by adding --start and --end arguments, which take dates in the MM/DD/YYYY format. Useful for generating event reports.

    • Added a --user=ALL argument. This is useful for pulling ALL the messages from datagrepper. To be combined with either --delta (which defaults to a delta of one week) or the --start and --end arguments.

    • Added more options to interactive mode and set defaults.

    • Code cleanup and minor enhancements.
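The flags described above can be sketched as an argparse interface. This is an illustrative sketch only, not the tool's actual code: the flag names follow the post, while the defaults, types, and function names are my assumptions.

```python
import argparse
from datetime import datetime

def parse_date(s):
    """Dates are taken in MM/DD/YYYY format, as --start/--end expect."""
    return datetime.strptime(s, "%m/%d/%Y")

def build_parser():
    # Hypothetical parser mirroring the flags described in the post.
    p = argparse.ArgumentParser(description="Generate contributor statistics")
    p.add_argument("--user", required=True, help="FAS username, or ALL")
    p.add_argument("--mode", default="text",
                   choices=["text", "svg", "png", "csv", "markdown", "gource"])
    p.add_argument("--category", default=None, help="e.g. pagure")
    p.add_argument("--delta", type=int, default=7,
                   help="look-back window (defaults to one week; unit is my assumption)")
    p.add_argument("--start", type=parse_date)
    p.add_argument("--end", type=parse_date)
    p.add_argument("--log", action="store_true",
                   help="dump category-wise activities to the text file")
    return p
```

Running build_parser().parse_args(["--user", "skamath", "--mode", "svg"]) would then hand the rest of the tool a ready-made namespace.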

    EDIT : I have generated statistics of all the Google Summer of Code interns for the mid-term. If you are interested, take a look here and here.

    Repository Statistics :

    Total Issues : 13
    Total Fixed : 11
    Open Issues : 2 (External Bugs)

    Sortable !

    Posted by Devyani Kota on June 19, 2016 06:44 AM

    I wanted to finish working on this feature by the end of the previous week but university exams were keeping me busy. As they say, Better late than never ! eh? 🙂

    Quick Recap : I was working on the bookmark-modal window, which would list the hub pages bookmarked by the user that the bookmarks bar cannot hold (overflowing bookmarks).

    Week 1:
    I hacked on the master.html file to get the edit_bookmarks modal window to pop up on hitting the *edit* button. I added a few hacks to style.css as well to bring it closer to Mizmo's mockup.
    Screenshot from 2016-06-19 11-52-28.png

    Screenshot from 2016-06-08 23-08-49.png

    Week 2:
    Making the bookmarks toggle !
    During the Hubs meeting, Sayan pointed out that the mockup required extra styling that wasn't preferred. We wanted to use fedora-bootstrap alone, so that the modal window looked bootstrappy, and thus dropped the idea of adding custom styling.
    The user should be able to re-order the bookmarks at will, so Pierre suggested using something like jQuery's sortable(), but with Bootstrap. I experimented with jQuery UI, and after several attempts it finally worked ! 🙂 Though there are still more experiments to be done…

    Screenshot from 2016-06-19 12-06-00.png

    Screenshot from 2016-06-18 23-59-41.png

    Preview : This week I plan to work on suggesting to the user to re-order hub bookmarks depending on how frequently he/she visits each hub page.

    Till next week ! Good day to you 🙂

    GSoC Week 3 Update

    Posted by Tummala Dhanvi on June 11, 2016 11:09 AM

    I have started working on the packaging of pintail-asciidoc (https://github.com/projectmallard/pintail-asciidoc), which is based on Python.

    Like the previous project, it didn't have any documentation on how to build and install it, and it wasn't a Python egg, which would have made packaging easier.

    So I had to create the spec file from scratch, which is a difficult task, and I couldn't find an existing one to start from (maybe I didn't search well).

    Update : I have finished the packaging and the package is under review.


    Filed under: fedora, GSOC, gsoc2016

    Setting up Vagrant for testing Ansible

    Posted by Justin W. Flory on June 10, 2016 08:45 AM

    As part of my Google Summer of Code project proposal for the Fedora Project, I’ve spent a lot of time learning about the ins and outs of Ansible. Ansible is a handy task and configuration automation utility. In the Fedora Project, Ansible is used extensively in Fedora’s infrastructure. But if you’re first starting to learn Ansible, it might be tricky to test and play with it if you don’t have production or development servers you can use. This is where Vagrant comes in.

    What is Vagrant?

    Together, Vagrant and Ansible are a powerful combination.

    Many people in the tech industry are already familiar with virtual machines (VMs) and using them for testing. If using a virtual machine is useful for testing and experimentation, Vagrant takes that idea and makes it a thousand times more powerful. Vagrant creates and configures a single virtual machine, or several groups of inter-connected virtual machines. For someone trying to learn configuration management software like Ansible (or Puppet, or Chef, or Salt…), it offers tight integration for provisioning virtual machines from playbooks.

    Using Vagrant allows you to make quick, simple, and easy changes in a safe, local environment. Vagrant is also incredibly easy to set up, and in my experience, it runs well on a laptop. My trusty Toshiba Satellite with 8GB of RAM and an Intel i3 chip was able to handle three CentOS 7 virtual machines at once and still manage other regular tasks.

    Installing Vagrant for Fedora

    Since I’m working with Fedora on my hardware while working on the Fedora Project over the summer, it would make sense for this guide to cover how to install and set up Vagrant inside of Fedora. However, I imagine it’s similar for most other distributions, so try adapting these commands for your own distribution.

    The Fedora repositories have a Vagrant package available. To install it, run the following command.

    $ sudo dnf install vagrant

    This will pull down Vagrant and all the dependencies it needs to run. However, what it won’t do is pull down some of the many providers that it might need to use a virtual machine.

    Vagrant and providers

    For my testing, I used a centos7 box image from geerlingguy. This image creates a current, updated CentOS 7 virtual machine. In order to use it, you must have one of the two providers available: VMware or VirtualBox. Seeing as how VirtualBox is easier for me to install and use on my system, I chose to use VirtualBox as the “provider” for building and simulating the CentOS 7 box within Vagrant.

    It took a bit of figuring out at first, but I found a current and well-documented guide on how to install VirtualBox onto a Fedora 22 or 23 system. For a more detailed explanation of how to do it, you can read the instructions, but for simplicity, I have the commands here to show how to add it to your Fedora system quickly.

    $ sudo dnf upgrade
    $ cd /etc/yum.repos.d/
    $ sudo wget http://download.virtualbox.org/virtualbox/rpm/fedora/virtualbox.repo
    $ sudo dnf install VirtualBox-5.0 binutils gcc make patch libgomp glibc-headers glibc-devel kernel-headers kernel-devel dkms
    $ sudo /usr/lib/virtualbox/vboxdrv.sh setup
    $ sudo usermod -a -G vboxusers your_username

    From here, VirtualBox will be available as a provider within Vagrant.

    Running a CentOS 7 image

    Now that you have both Vagrant and VirtualBox installed, you can create a Vagrant virtual machine with this image. Navigate to a new directory you want to use for managing your virtual machines. Once there, you can use the following commands to start your CentOS 7 virtual machine.

    $ vagrant box add geerlingguy/centos7
    $ vagrant init geerlingguy/centos7
    $ vagrant up --provider virtualbox

    After a lot of downloading and then waiting for the first setup to finish, you should receive a notification that your virtual machine started! Huzzah! You can log in directly to it by typing vagrant ssh in the same directory you ran the above commands.

    There’s a lot of cool things you can do to set up your virtual machines and configure how they start. For example, you can choose to use the VirtualBox GUI for running your virtual machines if you don’t want to SSH into it. You can tweak several different flags to alter the environment for the virtual machine. However, that is out of the scope of this guide, and there is a fair amount of documentation already online.

    Provisioning with Ansible

    The fun part (and what was really cool for me) was provisioning new virtual machines with Ansible. You can instruct Vagrant to seek an Ansible playbook when creating a new virtual machine. It will use the instructions of the playbook to configure, install, or tweak whatever is in the playbook, as if it’s being run for the first time. Or maybe it’s the second, the third, the fourth time you’ve run it. In either case, the idempotent nature of Ansible should help make sure you avoid repeating anything that doesn’t need repeating.

    In order to tell Vagrant to search for an Ansible playbook, you will need to edit the Vagrantfile for wherever you initialized Vagrant. Open it up in your favorite text editor and add the following bits at the bottom, but before the final end statement.

    config.vm.provision "ansible" do |ansible|
      ansible.playbook = "playbook.yml"
    end

    These short but sweet instructions tell Vagrant to look for a playbook.yml file when starting this virtual machine. It will then be easy to provision (i.e. configure / modify / change) the machine with your playbook later on.

    Writing the playbook

    For this blog post, I’ll offer a simple but clear example of a playbook you can use to start a Vagrant machine. This snippet specifically comes from Ansible for DevOps by Jeff Geerling, which I have (and am) using as a guide as I continue to learn more about Ansible (I highly recommend the book, consider getting a copy).

    - hosts: all
      sudo: yes
      tasks:
        - name: Ensure NTP (for time synchronization) is installed.
          yum: name=ntp state=present
        - name: Ensure NTP is running.
          service: name=ntpd state=started enabled=yes

    All this example playbook does is install NTP if it is not present on the system, and then start and enable it if it is not already running. This is a very simple example, but it's good for getting started quickly.

    Running the playbook

    In the same directory as your Vagrantfile, create a playbook.yml with the above content. Once you have the YAML file there, running the following command will run the Ansible playbook and allow you to see how it runs.

    $ vagrant provision

    Now, Vagrant will take your playbook and instantly run it in your machine. If all goes right, your virtual machine will now have NTP installed and be syncing your clock to the Internet! While a simple task, it was a satisfying feeling for me to see this run, but also to imagine the other possibilities that this could be used for. It would be easy to run a playbook on one, two, ten, a hundred, a thousand servers, and have it do the same thing on all of them.

    The automation was fascinating to me and began giving me ideas of how I could automate my infrastructure, as well as to creating one for WordPress (for my GSoC project).

    Congratulations! By the end of this short but (hopefully) useful guide, you now have Vagrant virtual machines that are controlled and orchestrated by Ansible.

    The post Setting up Vagrant for testing Ansible appeared first on Justin W. Flory's Blog.

    Digging deep into datagrepper - More Statistics Features

    Posted by Sachin S. Kamath on June 06, 2016 08:06 PM

    This week was really exciting, as I worked further on my statistics tool for the Fedora and Red Hat summer interns. Last week, I had worked on some features like text reports and visualization.

    After the basic skeleton written last week, a lot of features were added to the tool including CSV and markdown output, detailed analytics of a category and category-based text reports. Time to dive into the tool.

    To gather statistics of a user, you basically run the tool either in interactive mode or with the necessary arguments. Let me generate my category-wise report as SVG for this week. The command to do that'd be python main.py --user=skamath --mode=svg. The number of weeks defaults to 1. Let's see what I've got.

    All these SVG's are interactive. Right click and select View Image to get a feel of it. Hover to view percentages and filter by clicking on the color legends.

    Awesome! I just got my activity donut. I have interacted a lot with Pagure this week. Hmm, let's take a closer look at that.

    To generate the detailed statistics of a particular category, the --category argument needs to be set when running the tool. By default, it is None.

    This time, the command will be python main.py --user=skamath --mode=svg --category=pagure, and BHAM!

    11 interactions with issues and 26 interactions with pull requests.

    Under issues, 73% were issue edits and the rest were comments on issues. Boy, fedmsg is powerful - isn't it?

    Under pull requests, around 46% were comments and the rest were equally distributed among new and closed PRs :)
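The donut percentages above are just each sub-category's share of the total interactions. A quick sketch of that arithmetic (the function name and the sample counts are mine, chosen so an 8/3 split over 11 issue interactions reproduces the ~73% above):

```python
def percentages(counts):
    """Each sub-category's share of the total, as shown on the donut chart."""
    total = sum(counts.values())
    return {name: round(100.0 * n / total, 1) for name, n in counts.items()}
```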

    Similarly, statistics in text mode and markdown can also be generated. Markdown can easily be converted to HTML pages and also can be used to update the blog with weekly reports.

    Here is the full text log generated by the command python main.py --user=skamath. I am not using --mode here because it defaults to text.
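Under the hood, pulling a user's messages means querying datagrepper's /raw endpoint. Here is a minimal sketch of building such a query URL with the standard library (written in Python 3 for illustration; the parameter values are illustrative and the real tool's request code may differ):

```python
from urllib.parse import urlencode

DATAGREPPER = "https://apps.fedoraproject.org/datagrepper/raw"

def query_url(user, delta_seconds=7 * 24 * 3600, rows=100):
    """Build a datagrepper query for one user's messages over a time window."""
    params = {"user": user, "delta": delta_seconds, "rows_per_page": rows}
    return DATAGREPPER + "?" + urlencode(params)
```

Fetching that URL returns JSON whose messages the tool can then bucket by category.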

    Apart from the metrics features, I also added an interactive mode for the tool which can be accessed by python main.py --interactive. All the arguments required will then be prompted by the tool (as the name suggests, interactively).

    There's a lot more to be improved. I have miles and miles to go, before I sleep :)

    GSoC Week 2 Update

    Posted by Tummala Dhanvi on June 04, 2016 10:04 AM

    From last week you might have seen me working on packaging. At the end of the week I came to know that you can directly build Ruby gems in Copr (http://frostyx.cz/posts/copr-rubygems), but it didn't have a feature to upload the gems, so I created a ticket for that (https://bugzilla.redhat.com/show_bug.cgi?id=1342829). It was closed with the suggestion to upload the SRPMs generated using gem2rpm, which was the same process I was already following. Even though it looks like my packaging was a waste of time, it wasn't: I take responsibility as the packager, and there are always a few things gem2rpm can't do that must be done manually.

    I have submitted the review request; it's still awaiting review, and I have now moved on to the second part of the project.

    Later I was looking at Buildbot and Jenkins for the next part of the project, and was thinking of first doing the setup on GitHub and then changing it to Pagure, as CI is still under construction in Pagure.

    I have also spent some time on packaging my package cross-platform using the openSUSE Build Service, but that didn't go well either (https://build.opensuse.org/package/show/home:dhanvi/asciidoctor-mallard), and I stopped working on it as my mentor mentioned that it wasn't our priority.

    I have just started cloning the second package and will be packaging it next week.

    That's all for the week. (I should also confess that the second part, working with Buildbot, didn't go well at all and was mostly unproductive.)


    Filed under: fedora, GSOC, gsoc2016

    Scroll ! Think ! Bookmark !

    Posted by Devyani Kota on June 01, 2016 11:58 AM

    Hi there,
    so we are in the second half of the year and it's been quite exciting so far 😛
    Looking forward to amazing times !

    It's been a week since the coding period began. The first feature that we plan to work on is the bookmark feature. Soon there will be quite a few hub pages for users and also for groups, such as Infrastructure, Design, Marketing, CommOps, etc.,
    so it might get difficult to keep track of the pages a user wishes to follow closely.
    Currently, the hub pages are just added to the bookmarks bar.
    We plan to provide the user with an *EDIT* bookmarks option, which would list the hub pages that were bookmarked by the user initially, followed by the pages most frequently opened by the user that exist in the overflowed list and not in the main bookmarks bar.
    Aye ! that’s confusing 😛

    Screenshot from 2016-05-31 16-59-55

    So, we see here that the bookmarks were just added to the main bar.
    Instead, an 'Edit bookmarks' option will list the hub pages that exist in the overflowed list, with those that the user bookmarked at the top of the list.


    Thanks to Mizmo, Sayan, and Pingou for providing a few pointers on this in yesterday's Hubs meeting ! The ideas discussed were that the bookmarks shouldn't change dynamically; rather, we'll provide the user a suggestion to move the frequently opened bookmarks further up in the bookmarks list.
    This is what I plan to accomplish this week. *fingers crossed* 🙂

    A small info : We have our weekly Hubs meeting in #fedora-hubs at 14:00 UTC on Tuesdays. Anyone interested can attend or if you have any suggestions; they are most welcome !

    That’s all folks ! See you next week 🙂

    GSoC Week 1 Update

    Posted by Tummala Dhanvi on May 28, 2016 04:05 PM

    This is the first week of my GSoC, and I am very excited about the project and full of energy; I hope it lasts till the end of the project.

    As the first part of my project, I have started working on packaging https://github.com/asciidoctor/asciidoctor-mallard.

    First, looking at the code, I tried installing it using the –pre flag, but that didn't work. I found out that it wasn't my issue: the Asciidoctor team hadn't published the gem. I thought of creating an issue but postponed it until later (update: I have created the issue at https://github.com/asciidoctor/asciidoctor-mallard/issues/17). So I had to build the Ruby gem myself, and after reading a lot of manuals I came to know that it's as simple as running gem build name.gemspec (I was afraid that I would have a lot of work to do, but later came to know that it was all ready).

    The manual that I referred to for creating the RPM is https://fedoraproject.org/wiki/How_to_create_an_RPM_package and I have also built a hello package in Copr (https://copr.fedorainfracloud.org/coprs/dhanvi/hello/) using this tutorial (https://fedoraproject.org/wiki/How_to_create_a_GNU_Hello_RPM_package).

    Later I got the idea that it would be better if there were an automatic tool to create the RPM spec file, as we have in the Debian world. I then came to know about gem2rpm (http://www.redpill-linpro.com/sysadvent//2015/12/07/building-rpms-from-gems.html) and gave it a shot with the gem I built manually, and it worked mostly fine.

    Later, after talking with my mentor Zach and reading the manual, I built the SRPMs and tested them on Copr. I tried doing the same locally using mock, but it had a steep learning curve, so I postponed that as well.

    Even though it looked like an easy thing to do, it took me more than a week to figure out how to do it.

    Update: I found that Ruby gems are automatically built using gem2rpm here: https://copr.fedorainfracloud.org/coprs/g/rubygems/rubygems/ ; but it doesn't help me much, as the gem I am working on doesn't exist on rubygems.org.


    Filed under: fedora, GSOC, gsoc2016

    Getting fedstats-gsoc production ready

    Posted by Sachin S. Kamath on May 28, 2016 06:57 AM
    Getting fedstats-gsoc production ready

    I have been working on a tool that generates the statistics of Fedora interns. While writing and running the code, I came across various errors. I thought blogging about them and keeping track of them would be a good idea for anyone who will be using the tool in the future. So here goes :

    Q. What is this anyway ?

    A : It is a CLI tool/script written in python that pulls data from datagrepper and generates graphs/output as per the users' requirement.

    Q. What are its features?

    A: Take a look at the project on Github or Pagure.

    Q. The program throws errors while running. What should I do?
    A: This program was tested on Fedora 23 and ran without any errors or warnings. However, each person has a different machine and errors might creep in. To begin with, make sure your default Python interpreter is 2.7 and not 3.x; this tool is not Python 3 compatible (yet!). Also make sure your environment variables are set correctly: python --version should say Python 2.7.*. If you face an issue, please open an issue in the project's issue tracker on Pagure.
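The Python-2.7-only check mentioned above can also be done from inside the tool at startup. A hedged sketch of that idea (not the tool's actual code; the function name and message are mine):

```python
import sys

def version_message(version_info=sys.version_info):
    """Warn when the running interpreter is not Python 2.7."""
    major, minor = version_info[0], version_info[1]
    if (major, minor) != (2, 7):
        return "Warning: this tool requires Python 2.7, found %d.%d" % (major, minor)
    return "OK"
```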

    Q. Why are the generated SVGs blank / completely dark?
    A: You are using your default image viewer, which probably doesn't support viewing clickable SVGs. Try opening the SVG in a web browser and check if the problem persists. File an issue if you can't get it to work.

    Q. The PNG image is black / blank.
    A: This is an issue with the installed packages. To overcome it, run pip install tinycss cssselect cairosvg, then run the tool again and check if the problem persists. If at any point the gcc compiler fails with an error saying ffi.h was not found, run sudo dnf install libffi-devel to solve the issue.

    Q. The text generated is blank.
    A: You do not have fedmsg-meta installed. The tool checks for this at startup and warns you. If you missed it, run rpm -q python2-fedmsg-meta-fedora-infrastructure to check whether fedmsg-meta is installed. If it says the package is not installed, run sudo dnf install python2-fedmsg-meta-fedora-infrastructure to install it. Run the tool again to check whether the problem persists. (Shoutout to pingou and Ralph for helping me identify the issue.)

    I'll try to add in more as I develop code further. Let me know what you think of the tool in the comments below.

    Suggestions/criticism welcome :)

    [GSoC '16] Let the Coding Begin!

    Posted by Sachin S. Kamath on May 25, 2016 05:57 PM
    [GSoC '16] Let the Coding Begin!

    The coding period of GSoC has finally started. It started on the 23rd of May, but to me, it just started today, as I had taken a two-day excuse (exams, sigh). As I mentioned in my earlier post, I will be working with the Fedora Project to build metrics tools in Python, and will also be helping the CommOps team refine the Fedora onboarding process.

    My internship this summer will be mostly Python-centric and will involve a lot of scraping, automation, analytics and data crunching. I am currently working on a gsoc-statistics tool for Fedora which will auto-magically generate weekly reports given a Fedora FAS username. Instead of pushing code to GitHub, which I do quite often, I have decided to work with Pagure, Fedora's own repository tracker. There's a GSoC project to improve Pagure as well ;)

    To start off with the work, I decided to draw a rough skeleton of what I'd like the stats-tool to look like. I usually plan everything on paper, but since I'm home, I had the luxury of using the whiteboard. Here's what my whiteboard looks like now :

    [GSoC '16] Let the Coding Begin!

    This is just the beginning, and I'm excited already. Hoping to have a great summer this year :)

    Oh, and if you are a FOSS enthusiast and would like to start contributing to Fedora, do take a look at WhatCanIDoForFedora. If you have any other Fedora related questions, feel free to ping me on IRC / e-mail. I go by the nick skamath on Freenode. You can find me in the #fedora-commops channel.

    Fedora-Hubs: Google Summer of Code 2016

    Posted by Devyani Kota on May 24, 2016 09:10 PM


    A warm hello to the Summer Coding World.

    Devyani is a CS Undergraduate who will be working on Fedora-Hubs as her Google Summer of Code 2016 Project.

    The Google Summer of Code 2016 results were declared on 22nd April 2016. The Community Bonding Period has been amazing: meeting like-minded people, making new friends and being a part of this awesome community !

    Initially in May, I started to hack around the Feed-Widget of the Project, Fedora-Hubs.
    The project is available on Pagure.

    I had to go through the documentation and the hubs/widgets code in more detail, so that week was pretty much a reading week rather than a coding week. Hubs is implemented using Flask, with Flask-SQLAlchemy. Flask-SQLAlchemy wasn't exactly my strength, especially the models and the relationships between them. Hubs are related to widgets by a many-to-many relationship. I went thoroughly through the Flask-SQLAlchemy documentation and googled more about establishing relationships between the fields. I tried writing code to establish the relationships between the tables, to understand the connections between the various entities: Hubs, Users, Widgets 😛
    This helped me a lot to understand the basic structure of Hubs.
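The relational shape behind that many-to-many relationship is an association table. Hubs itself uses Flask-SQLAlchemy models, but the idea can be sketched with stdlib sqlite3; the table, hub, and widget names below are mine, purely for illustration:

```python
import sqlite3

# Illustrative only: in Hubs this is expressed as Flask-SQLAlchemy models,
# but underneath, a many-to-many relation is an association table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hubs    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT);
    -- The association table links hubs and widgets many-to-many.
    CREATE TABLE hub_widgets (
        hub_id    INTEGER REFERENCES hubs(id),
        widget_id INTEGER REFERENCES widgets(id),
        PRIMARY KEY (hub_id, widget_id)
    );
""")
conn.executemany("INSERT INTO hubs VALUES (?, ?)", [(1, "commops"), (2, "design")])
conn.executemany("INSERT INTO widgets VALUES (?, ?)", [(1, "feed"), (2, "badges")])
# The feed widget appears on both hubs; badges only on commops.
conn.executemany("INSERT INTO hub_widgets VALUES (?, ?)", [(1, 1), (2, 1), (1, 2)])

def widgets_for(hub_name):
    """All widgets attached to a hub, resolved via the association table."""
    rows = conn.execute(
        """SELECT w.name FROM widgets w
           JOIN hub_widgets hw ON hw.widget_id = w.id
           JOIN hubs h ON h.id = hw.hub_id
           WHERE h.name = ?""", (hub_name,))
    return sorted(r[0] for r in rows)
```

The same shape is what Flask-SQLAlchemy's secondary-table relationship generates for you.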

    I also had to go through the Feed-Widget to work on getting similar threads of the mailing-list posts together based on Mizmo‘s mockups. That’s still a work in progress, though.
    Broadly, the summer goal is to integrate Hubs successfully, with all the widgets working efficiently. By the end of the summer, we plan to provide the user with an individual hubs page featuring the widgets; a cool bookmarks bar for the user to switch tabs at will; and info about the unread notifications from the hubs he/she follows or is subscribed to.
    We also plan to work on the badges-widget before Flock 2016, which displays a cool path of the badges that can be achieved. For the more interested folks, I have my project proposal updated on the wiki 🙂

    For more updates on Hubs, follow me @devyanikota. I go by the nick devyani7 on freenode, you can find me lurking on #dgplug, #fedora-apps, #fedora-hubs. My Github profile and Pagure profile.

    Until next time, Happy hacking !

    GSoC Community Bonding Period

    Posted by Tummala Dhanvi on May 21, 2016 09:48 PM

    I am sorry for the delay in the blog posts; I was neglecting to publish the posts sitting in my drafts.

    tldr; Participated in Community Bonding of Fedora and FAD.

    In my previous post I mentioned getting selected for GSoC 2016; this year's schedule is at https://summerofcode.withgoogle.com/how-it-works/ and https://developers.google.com/open-source/gsoc/timeline


    Here is the email that Google sends to the mentors; I got it via the internal mailing list of FOSS@Amrita:

    Community Bonding Period
    As part of the student’s acceptance into GSoC they are expected to actively participate in the Community Bonding period (April 22 – May 22). The Community Bonding period is intended to get students ready to start contributing to your organization full time in May.

    Unfortunately, some students think their acceptance into GSoC guarantees them the initial $500 payment. That is not the case. A student does not receive $500 just for writing a good proposal. They must be active in the Community Bonding period to earn that $500.

    Community Bonding activities may involve:

    • Becoming familiar with the community practices and processes. (This often involves a mix of observation and participation.)
    • Participating on Mailing Lists / IRC / etc. (Not just lurking.)
    • Setting up their development environment.
    • Small (or large) patches/bug fixes. (These do not need to be directly related to their GSoC project.)
    • Participating in code reviews for others. (Even someone who isn’t familiar with the project can contribute by pointing out potential inefficiencies, bad error handling, etc.)
    • Working with their mentor and other org members on refining their project plan. This might include finalizing deadlines and milestones, adding more detail, figuring out potential issues, etc.
    • If the student is already familiar with the organization, they could be helping others get involved in the community.
    • Reading (and updating!) documentation they will need to understand to complete their project.
    • Reporting or replicating bugs.

    Active means active. Students have committed to the program schedule, and we would like you to hold them to it. There is no simple standard, as every org is different, every student has different time constraints, and there are many different ways to interact. Some students may require coaxing and encouragement in order to get them to actively participate.

    If you do not see regular public interaction from the student, you should strongly encourage it. Public is important — it is a key principle of open source — work happens where everyone can see it. Similarly, all work done by the students should be shared in a publicly available repository.

    By May 16th if the student has not been active in Community Bonding please notify Google at gsoc-support@google.com to let us know. After a brief investigation, we may remove the student from the program. They will not receive any payments.

    Do not feel bad about “failing” a student.
    The past eleven years of GSoC have demonstrated that students who don’t interact early and often are more likely to fail later. Often they just disappear. We don’t want you to waste your time on students who don’t care about the project/organization and can’t even attempt to show interest these first few weeks of the program. Small contributions early on are often a very positive signal.

    Organizations will not be penalized for failing students who are not fulfilling their responsibilities during the Community Bonding period. We expect there to be students who fail this Community Bonding period, just like we expect there to be some students who fail the midterm and others that fail the final. This is completely normal.

    So it basically means actively participating in the community via IRC, mailing lists, etc.

    This is also exam time for me, so it was very tough to find time because of the exam fever and all that, but I did actively participate in the weekly IRC meeting (https://fedoraproject.org/wiki/Docs_Project_meetings). There is also a mailing list for fedora-docs (https://lists.fedoraproject.org/archives/list/docs@lists.fedoraproject.org/), but I wasn't very active on it. There are also office hours, but they follow the USA schedule; see fedocal, https://apps.fedoraproject.org/calendar/docs/ (one more awesome app from the fedora-apps team).

    I was also active in CommOps team’s mailing list and IRC 🙂


    FAD stands for Fedora Activity Day, similar to sprints (https://en.wikipedia.org/wiki/Sprint_(software_development)); here is the wiki page: https://fedoraproject.org/wiki/Fedora_Activity_Day_-_FAD

    The Fedora-docs team planned one for 2016 (more details: https://fedoraproject.org/wiki/FAD_Documentation_2016). It went well, and I took part in it remotely along with linuxmodder and others.

    I took part in it for some time but couldn't follow well because of bandwidth issues and problems with my microphone, so I communicated via IRC.

    Here is the complete report: https://communityblog.fedoraproject.org/event-report-fedora-docs-fad/ We decided to use asciidoc for the source and pintail for building the HTML.

    I also got a cool badge for attending this FAD remotely: https://badges.fedoraproject.org/badge/docs-fad-2016


    Goals for GSoC

    Since the tools were decided at the FAD, my mentor zolesby and I set the goals for my project: https://pagure.io/docs-fp-o/issues?assignee=dhanvi

    Packaging two packages is my first goal (I hope I can fulfill it), and continuous integration and continuous deployment is the second half of the project.

    That’s all for now; wait for my further blog posts for updates on GSoC.

    Filed under: fedora, GSOC, gsoc2016

    Nethack Encyclopedia Redux'd

    Posted by Mo Morsi on May 12, 2016 02:56 AM

    I've been working on way too many projects recently... Still, I was able to slip in some time to update the NetHack Encyclopedia app on the Android Marketplace (first released nearly 5 years ago!).

    Version 5.3 brings several new features, including useful tools. The first is the Message Searcher, which allows the user to quickly query the many cryptic game messages by substring & context. Additionally, the Game Tracker has been implemented, facilitating player, item, and level identification in a persistent manner. Simply enter entity attributes as they are discovered and the tracker will deduce the remaining missing information based on its internal algorithm. This is on top of many enhancements to the backend, including the incorporation of a searchable item database.

    The logic of the application has been heavily refactored & cleaned up; the code has come a long way since first being written. By and large, I feel pretty comfortable with the Android platform at this point. It has its nuances, but all platforms do, and it's pretty easy to go from concept to implementation.

    As far as the game itself, I have a ways to go before retrieving the Amulet! It's quite a challenge, but you learn with every replay, and thus you get closer. Ascension will be mine! (someday)

    read more

    [GSoC '16] Summer with Fedora

    Posted by Sachin S. Kamath on May 02, 2016 05:15 AM

    This summer, I am super excited to announce that I am participating in Google Summer of Code for the Fedora Project. As Google describes it:

    Google Summer of Code is a global program focused on bringing more student developers into open source software development. Students work with an open source organization on a 3 month programming project during their break from school.

    This year, I will be working with the awesome CommOps Team to improve their existing Toolbox. My project proposal can be found here. Fedora is not new to me. I have been contributing to Fedora, both actively and passively, for around 2 years now. To be specific, I have contributed to the Glittergallery project and have delivered a couple of talks on contributing to Fedora, the details of which can be found here.

    I would like to thank decause, who is my mentor for this project, and the awesome peeps at CommOps (special shout-out to jflory7 and linuxmodder for helping me get started, and bee2502 for the first Karma Badge :)

    Right now it is the community bonding period, and I am getting more involved with the CommOps team and understanding the place better. I am also getting familiar with the CommOps Toolbox, which I will be hacking on.

    Expect more posts as I make progress. Till then, Goodbye and stay silly :)

    Google Summer of Code, Fedora Class of 2016

    Posted by Justin W. Flory on April 27, 2016 08:50 AM

    This summer, I’m excited to say I will be trying on a new pair of socks for size.

    Bad puns aside, I am actually enormously excited to announce that I am participating in this year’s Google Summer of Code program for the Fedora Project. If you are unfamiliar with Google Summer of Code (or often shortened to GSoC), Google describes it as the following.

    Google Summer of Code is a global program focused on bringing more student developers into open source software development. Students work with an open source organization on a 3 month programming project during their break from school.

    I will work with the Fedora Project over the summer on the CommOps slot. As part of my proposal, I will assist with migrating key points of communication in Fedora, like the Fedora Magazine and Community Blog, to Ansible-based installations. I have a few more things planned up my sleeve too.

    Google Summer of Code proposal

    My proposal summary is on the GSoC 2016 website. The full proposal is available on the Fedora wiki.

    The What

    The Community Blog is becoming an important part of the Fedora Project. This site is a shared responsibility between CommOps and the Infrastructure team. Unlike most applications in the Fedora infrastructure, the Community Blog is not based off Ansible playbooks. Ansible is an open-source configuration management suite designed to make automation easier. Fedora already uses Ansible extensively across its infrastructure.

    My task would consist of migrating the Community Blog (and by extension, Fedora Magazine) to an Ansible-based setup and writing the documentation for any related SOPs.

    The Why

    Ansible is a useful tool for making automation and configuration easier. In their current setup, the Community Blog and Fedora Magazine are managed separately from each other, by a single member of the Infrastructure team. Moving them to Ansible-based installations and merging the WordPress bases together provides the following benefits:

    1. Makes it easier for other Infrastructure team members to fix, maintain, or apply updates to either site
    2. Prevents duplicate work by maintaining a single, Ansible-based WordPress install versus two independent WordPress sites
    3. Creates a standard operating procedure for hosting blog platforms within Fedora (can be used for other extensions in the future)

    Thanks to my mentors

    I would like to issue a special thanks to my mentors, Patrick Uiterwijk and Remy DeCausemaker. Patrick will be my primary mentor for the slot, as a member of the Fedora Infrastructure team. I will be working closest with him in the context of my proposal. I will also be working with Remy on the “usual” CommOps tasks that we work on week by week.

    Another thanks goes out to all of those in the Fedora community who have positively affected and influenced my contributions. Thanks to countless people, I am happy to consider Fedora my open source home for many years to come. There is so much to learn and the community is amazing.

    Getting started

    As of the time of publication, the Community Bonding period is currently happening. The official “coding” time hasn’t started yet. Without much delay, I will be meeting up with Patrick and Remy later today in a conference call to check in after the official announcement, make plans for what’s coming up in the near future, and become more acquainted with the Infrastructure community.

    In addition to our conference call, I’m also planning on (formally) attending the next Fedora Infrastructure meeting on Thursday. Shortly afterwards, I hope to begin my journey as an Infrastructure apprentice and learn more about the workflow of the team.

    Things are just getting started for the summer and I’m beyond excited that I will have a paid excuse to work on Fedora full-time. Expect more check-ins as the summer progresses!

    The post Google Summer of Code, Fedora Class of 2016 appeared first on Justin W. Flory's Blog.

    GSoC 2016

    Posted by Tummala Dhanvi on April 24, 2016 07:49 PM

    tl;dr I have got selected for GSoC 2016 with Fedora and I will be working with the Fedora Docs team this summer.

    Getting into GSoC (Google Summer of Code) has been one of my biggest dreams, for which I sacrificed (or was made to sacrifice, sorry sir!) some other things that are very important. From then on I decided that I choose what to do; well, I may not be able to do everything, but what's wrong with dreaming big? If you have followed my blog regularly (which I doubt anyone has), you might have found my old post saying GSoC was my target (I wrote it thinking that saying your goals out loud helps you achieve them, but I deleted the post before applying for GSoC, thinking it might give my mentor a bad impression!)

    I was so happy that I got into GSoC that I skipped classes and traveled home to share it as a surprise with my parents, but my sister told my mother in advance 😦 (though she didn’t know what GSoC is!). I spent a few days at home and went back to school, since I had exams to write. And FOSS@Amrita got more selections than ever before: https://www.quora.com/How-many-students-from-your-college-have-been-selected-for-Google-Summer-of-Code-2016

    For my friends who didn’t get through: I feel sorry for you guys, and there is always next year!

    Looking forward to this summer.

    The pic is taken from https://communityblog.fedoraproject.org/fedora-google-summer-of-code-2016/ ; it’s a combination of Project Atomic, GSoC, and Fedora.

    Filed under: fedora, GSOC, gsoc2016 Tagged: fedora, FOSS, gsoc

    GSoC Battle 2016

    Posted by Devyani Kota on April 24, 2016 08:27 AM

    Hello reader,
    The third attempt and yeah made it, this time. 😀
    No more regrets in life ! soon a GSoCer !

    I started contributing to Fedora infrastructure projects after I returned from PyCon India 2014, before which I was contributing to gnome-shell. One can say that this event, meeting people, and the environment as a whole motivated and inspired me a lot, and since then I never backed down !
    yeah, the trick was to not lose hope and keep working harder, guess that worked 🙂
    After the Outreachy debacle (which was the lowest phase, I guess), I am blessed to have such motivating people around me who kept pushing me.
    Huge thanks to Kushal da, Sayan, Subho, and Elita of course for helping me through it.

    The cliché result night: for two seconds my mind said, “That’s alright, maybe not this time either. I’m not losing hope anyway, will move on !” But then there was my name. “Phew!!”, “Yay! I finally made it”. I am so happy; I will be working on Fedora-hubs this summer.
    I should thank my mentor RalphBean for helping me with my naive doubts, and for guiding me to fix bugs all this time. Huge thanks to Pierre, Mizmo, Justin, and Sayan for reviewing my proposal and getting it done on time 😛
    Am glad I will be working with them this summer, hacking on Hubs !

    A little throwback: I am really glad I finished the #dgplug summer training successfully, which moulded me into what I am !

    For those interested, you can take a look at the proposal on the fedora-wiki.
    See ya all !

    Happy Coding 🙂



    LVM Internals

    Posted by Mo Morsi on March 30, 2016 01:35 AM

    This post is intended to detail the LVM internal disk layout, including the thin-volume metadata structure. While documentation of the LVM user space management utilities is abundant, very little exists in the realm of on-disk layout & structures. Having just added support for this to CloudForms, I figured this would be a good opportunity to expand on it for future reference.

    The LVM framework relies on the underlying 'device-mapper' library to map blocks on physical disks (called physical volumes) to custom constructs depending on the intent of the system administrator. Physical and other volumes can be sliced, mixed, and matched to form Logical Volumes, which are presented to the user as normal disks, but dispatch read / write operations to the underlying storage objects depending on configuration. The Physical and Logical Volumes are organized into Volume Groups for management purposes.

    To analyze an LVM instance, one could start from the bottom up, inspecting each physical volume for the on-disk metadata structures, reconstructing and using them to look up the blocks to read and write. Physical volumes may be any block device Linux normally presents; there are no special restrictions, and in this way LVM-managed volumes can be chained together. On a recent VM, /dev/sda2 was used for the LVM-managed / and /home partitions on installation, after which I extended the logical volume pool to include /dev/sdb using the recent thin pool provisioning features (more on this below).

    Examining /dev/sda2 we can find the LVM Disk Label which may reside on one of the first 4 512-byte sectors on the disk. The address of the Physical Volume Header is given from this, specifically:

      pv_header_address = label_header.sector.xl * SECTOR_SIZE (512 bytes) + label_header.offset_xl

    The Physical Volume Header gives us the base information about the physical volume, including disk data and metadata locations. This can all be read sequentially / incrementally from the addresses contained in the header. These data structures can be seen below:

    SECTOR_SIZE         = 512
    LVM_ID_LEN          = 8
    LVM_TYPE_LEN        = 8
    LVM_ID              = "LABELONE"
    PV_ID_LEN           = 32
    MDA_MAGIC_LEN       = 16
    FMTT_MAGIC          = "\040\114\126\115\062\040\170\133\065\101\045\162\060\116\052\076"

    # On disk label header.
    LABEL_HEADER = BinaryStruct.new([
      "A#{LVM_ID_LEN}",       'lvm_id',
      'Q',                    'sector_xl',
      'L',                    'crc_xl',
      'L',                    'offset_xl',
      "A#{LVM_TYPE_LEN}",     'lvm_type'
    ])

    # On disk physical volume header.
    PV_HEADER = BinaryStruct.new([
      "A#{PV_ID_LEN}",        'pv_uuid',
      "Q",                    'device_size_xl'
    ])

    # On disk disk location structure.
    DISK_LOCN = BinaryStruct.new([
      "Q",                    'offset',
      "Q",                    'size'
    ])

    # On disk metadata area header.
    MDA_HEADER = BinaryStruct.new([
      "L",                    'checksum_xl',
      "A#{MDA_MAGIC_LEN}",    'magic',
      "L",                    'version',
      "Q",                    'start',
      "Q",                    'size'
    ])

    # On disk raw location header, points to metadata.
    RAW_LOCN = BinaryStruct.new([
      "Q",                    'offset',
      "Q",                    'size',
      "L",                    'checksum',
      "L",                    'filler'
    ])
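    As a quick standalone illustration of the label scan described above, here is a hypothetical sketch using plain String#unpack instead of BinaryStruct; it assumes the start of the device image has been read into a byte string (the helper name is illustrative, not the CloudForms code):

```ruby
SECTOR_SIZE = 512

# Scan the first 4 sectors of a device image for the "LABELONE" magic and,
# from the label header fields (lvm_id A8, sector_xl Q, crc_xl L, offset_xl L,
# lvm_type A8), compute the physical volume header address per the formula above.
def find_pv_header_address(device_bytes)
  4.times do |sector|
    data = device_bytes[sector * SECTOR_SIZE, SECTOR_SIZE]
    break if data.nil?
    lvm_id, sector_xl, _crc_xl, offset_xl, _lvm_type = data.unpack('A8QLLA8')
    return sector_xl * SECTOR_SIZE + offset_xl if lvm_id == "LABELONE"
  end
  nil # no LVM label found in the first 4 sectors
end
```

    The returned address is then where the physical volume header would be unpacked from.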

    The raw LVM metadata contents area consists of a simple JSON-like key / value data structure where objects, arrays, and primitive values (including strings) may be encoded. The top level of each extracted metadata contents consists of a single key / value pair: the volume group name and its encoded properties. From there, logical and physical volumes are detailed. Sample metadata contents can be seen below:

    fedora {
        id = "sOIQC3-75Rq-SQnT-0lfj-fgni-cU0i-Bnbeao"
        seqno = 11
        format = "lvm2"
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {
            pv0 {
                id = "ZDOhNU-09hz-rsd6-MrJH-20sN-ajcg-opqhDf"
                device = "/dev/sda2"
                status = ["ALLOCATABLE"]
                flags = []
                dev_size = 19945472
                pe_start = 2048
                pe_count = 2434
            }

            pv1 {
                id = "QT6OH2-1eCc-CyxL-vYkj-RJn3-vuFO-Jg9Qu2"
                device = "/dev/sdb"
                status = ["ALLOCATABLE"]
                flags = []
                dev_size = 6291456
                pe_start = 2048
                pe_count = 767
            }
        }

        logical_volumes {
            swap {
                id = "iSNIOA-N4dh-qeYp-hAG9-mUGG-PFsL-MomHTO"
                status = ["READ", "WRITE", "VISIBLE"]
                flags = []
                creation_host = "localhost"
                creation_time = 1454442463
                segment_count = 1

                segment1 {
                    start_extent = 0
                    extent_count = 256
                    type = "striped"
                    stripe_count = 1
                    stripes = [
                        "pv0", 0
                    ]
                }
            }

            pool00 {
                id = "seRK1m-3AYe-0Y3N-TCvF-ABhh-7AKj-gX85eY"
                status = ["READ", "WRITE", "VISIBLE"]
                flags = []
                creation_host = "localhost"
                creation_time = 1454442464
                segment_count = 1

                segment1 {
                    start_extent = 0
                    extent_count = 1940
                    type = "thin-pool"
                    metadata = "pool00_tmeta"
                    pool = "pool00_tdata"
                    transaction_id = 1
                    chunk_size = 128
                    discards = "passdown"
                    zero_new_blocks = 1
                }
            }

            root {
                id = "w0IcgL-HHnY-ptff-wZmT-3uQx-KBuu-Ep0YFq"
                status = ["READ", "WRITE", "VISIBLE"]
                flags = []
                creation_host = "localhost"
                creation_time = 1454442464
                segment_count = 1

                segment1 {
                    start_extent = 0
                    extent_count = 1815
                    type = "thin"
                    thin_pool = "pool00"
                    transaction_id = 0
                    device_id = 1
                }
            }

            lvol0_pmspare {
                id = "Sm33QK-HzFZ-Vo6b-6qBf-DsE2-uufG-T5EAG7"
                status = ["READ", "WRITE"]
                flags = []
                creation_host = "localhost"
                creation_time = 1454442463
                segment_count = 1

                segment1 {
                    start_extent = 0
                    extent_count = 2
                    type = "striped"
                    stripe_count = 1
                    stripes = [
                        "pv0", 256
                    ]
                }
            }

            pool00_tmeta {
                id = "JdOGun-8vt0-UdUI-I3Ju-aNjN-NurO-Yd7kan"
                status = ["READ", "WRITE"]
                flags = []
                creation_host = "localhost"
                creation_time = 1454442464
                segment_count = 1

                segment1 {
                    start_extent = 0
                    extent_count = 2
                    type = "striped"
                    stripe_count = 1
                    stripes = [
                        "pv0", 2073
                    ]
                }
            }

            pool00_tdata {
                id = "acemRb-wAqV-Nvwh-LR2L-LHyT-Lhvm-g3Wl3F"
                status = ["READ", "WRITE"]
                flags = []
                creation_host = "localhost"
                creation_time = 1454442464
                segment_count = 2

                segment1 {
                    start_extent = 0
                    extent_count = 1815
                    type = "striped"
                    stripe_count = 1
                    stripes = [
                        "pv0", 258
                    ]
                }

                segment2 {
                    start_extent = 1815
                    extent_count = 125
                    type = "striped"
                    stripe_count = 1
                    stripes = [
                        "pv0", 2075
                    ]
                }
            }
        }
    }

    This is all the data necessary to map logical volume lookups to normal / striped physical volume sectors. Note that if there are multiple physical volumes in the volume group, be sure to extract the metadata from all of them, as mappings may result in blocks being cross-referenced.
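    As an aside, the JSON-like format is simple enough to parse by hand. The following is a standalone illustrative sketch (not the CloudForms implementation) handling only nested sections, assignments, lists, quoted strings, integers, and comments:

```ruby
# Minimal recursive-descent parser for the LVM metadata text format.
# Sections are "name { ... }", assignments are "key = value", values are
# quoted strings, integers, or lists; "#" starts a comment.
def parse_lvm_metadata(text)
  tokens = text.gsub(/#.*$/, '').scan(/\[|\]|\{|\}|=|,|"[^"]*"|[\w\.\/-]+/)
  parse_section(tokens)
end

def parse_section(tokens)
  section = {}
  while (tok = tokens.shift)
    return section if tok == '}'
    if tokens.first == '{'
      tokens.shift
      section[tok] = parse_section(tokens) # nested section
    elsif tokens.first == '='
      tokens.shift
      section[tok] = parse_value(tokens)   # assignment
    end
  end
  section
end

def parse_value(tokens)
  tok = tokens.shift
  if tok == '['
    list = []
    until tokens.first == ']'
      list << parse_value(tokens)
      tokens.shift if tokens.first == ','
    end
    tokens.shift # consume ']'
    list
  elsif tok.start_with?('"')
    tok[1..-2]                             # strip the quotes
  else
    tok =~ /\A\d+\z/ ? tok.to_i : tok
  end
end
```

    Running it over the sample above yields nested hashes, e.g. `parse_lvm_metadata(meta)["fedora"]["extent_size"]` would give `8192`.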

    To look up a logical address, first determine which logical volume segment range the address falls into. Segment boundaries are specified via extents, whose size is given in the volume group metadata (note this is represented in sectors, or 512-byte blocks). From there it's a simple matter of reading the blocks off the physical volume segments given by the specified stripe id and offset, being sure to correctly map positional start / stop offsets.
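    That lookup can be sketched as follows for simple single-stripe segments (illustrative names, not the CloudForms code; the extent size and segment values are taken from the sample metadata above):

```ruby
EXTENT_SIZE = 8192 # in 512-byte sectors, from the volume group metadata

# One "striped" segment with a single stripe: a run of logical extents
# mapped onto a run of physical extents on one physical volume.
Segment = Struct.new(:start_extent, :extent_count, :pv_name, :pv_start_extent)

# Map a logical-volume sector to a [physical_volume, sector] pair.
def resolve(segments, pe_start, logical_sector)
  extent = logical_sector / EXTENT_SIZE
  seg = segments.find { |s| extent >= s.start_extent && extent < s.start_extent + s.extent_count }
  return nil unless seg
  offset = logical_sector - seg.start_extent * EXTENT_SIZE
  [seg.pv_name, pe_start + seg.pv_start_extent * EXTENT_SIZE + offset]
end
```

    For example, pool00_tdata's two segments from the sample metadata (stripes at pv0 extents 258 and 2075, pe_start = 2048) resolve its first sector to pv0 sector 2048 + 258 * 8192.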

    The process gets a little more complicated for thinly provisioned volumes, a relatively new addition to the LVM framework, where logical volumes marked as 'thin' do not directly map to physical extents but rather are pooled together via a mapping structure and a shared data partition. This allows the centralized partition to grow / shrink on demand and decouples pool properties from the actual underlying data space availability.

    To implement this, each thin logical volume references a pool volume which in turn references metadata and data volumes (as can be seen above). Addresses to be accessed from the thin volume are first processed using the pool metadata volume, which contains an on-disk BTree structure mapping thin volume blocks to data volume blocks.

    The thin volume metadata superblock can be read off the metadata volume starting at address 0. This gives us the data & metadata space maps as well as the device details and data mapping trees, allowing us to perform the actual address resolution. Device Details is a one-level id -> device info BTree providing thin volume device information, while the Data Map is a two-level BTree mapping device id -> device blocks -> data blocks.

    Once this information is parsed from the pool metadata, determining which data volume blocks to read for a given thin volume address is simply a matter of looking up the corresponding blocks via the data map and offsetting the start / ending positions accordingly. The complete thin volume metadata structures can be seen below:

      SECTOR_SIZE         = 512
      THIN_MAGIC          = 27022010
      SPACE_MAP_ROOT_SIZE = 128 # bytes reserved for each packed space map root

      SUPERBLOCK = BinaryStruct.new([
       'L',                       'csum',
       'L',                       'flags_',
       'Q',                       'block',
       'A16',                     'uuid',
       'Q',                       'magic',
       'L',                       'version',
       'L',                       'time',
       'Q',                       'trans_id',
       'Q',                       'metadata_snap',
       "A#{SPACE_MAP_ROOT_SIZE}", 'data_space_map_root',
       "A#{SPACE_MAP_ROOT_SIZE}", 'metadata_space_map_root',
       'Q',                       'data_mapping_root',
       'Q',                       'device_details_root',
       'L',                       'data_block_size',     # in 512-byte sectors
       'L',                       'metadata_block_size', # in 512-byte sectors
       'Q',                       'metadata_nr_blocks',
       'L',                       'compat_flags',
       'L',                       'compat_ro_flags',
       'L',                       'incompat_flags'
      ])

      SPACE_MAP = BinaryStruct.new([
        'Q',                      'nr_blocks',
        'Q',                      'nr_allocated',
        'Q',                      'bitmap_root',
        'Q',                      'ref_count_root'
      ])

      DISK_NODE = BinaryStruct.new([
        'L',                      'csum',
        'L',                      'flags',
        'Q',                      'blocknr',
        'L',                      'nr_entries',
        'L',                      'max_entries',
        'L',                      'value_size',
        'L',                      'padding'
        #'Q',                      'keys'
      ])

      INDEX_ENTRY = BinaryStruct.new([
        'Q',                      'blocknr',
        'L',                      'nr_free',
        'L',                      'none_free_before'
      ])

      METADATA_INDEX = BinaryStruct.new([
        'L',                      'csum',
        'L',                      'padding',
        'Q',                      'blocknr'
      ])

      BITMAP_HEADER = BinaryStruct.new([
        'L',                      'csum',
        'L',                      'notused',
        'Q',                      'blocknr'
      ])

      DEVICE_DETAILS = BinaryStruct.new([
        'Q',                      'mapped_blocks',
        'Q',                      'transaction_id',
        'L',                      'creation_time',
        'L',                      'snapshotted_time'
      ])

      MAPPING_DETAILS = BinaryStruct.new([
        'Q',                       'value'
      ])
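    Putting the data map to use can be sketched as follows. Here the two-level BTree (device id -> thin block -> data block) is abstracted away as nested hashes to show just the address arithmetic; all names are illustrative, not the actual implementation:

```ruby
# Thin-volume address resolution sketch. The real on-disk structure is a
# two-level BTree; plain hashes stand in for it here.
DATA_BLOCK_SIZE = 128 # in 512-byte sectors, from the superblock's data_block_size

def thin_to_data_sector(data_map, device_id, thin_sector)
  thin_block = thin_sector / DATA_BLOCK_SIZE
  offset     = thin_sector % DATA_BLOCK_SIZE
  data_block = data_map.fetch(device_id, {})[thin_block]
  return nil if data_block.nil? # block not yet provisioned
  data_block * DATA_BLOCK_SIZE + offset
end
```

    For example, if device 1's thin block 1 maps to data block 9, thin sector 130 resolves to data-volume sector 9 * 128 + 2.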

    One can see this algorithm in action via this LVM parsing script extracted from CloudForms. You will need to install the 'binary_struct' gem and run the script as a privileged user in order to read the raw disks:

    $ sudo gem install binary_struct
    $ sudo ruby lvm-parser.rb -d /dev/sda2 -d /dev/sdb

    From there you can extract any info from the LVM metadata structures or data segments for further analysis.


    "If it's on the Internet it must be true"
    -George Washington

    read more

    Modelling Models !

    Posted by Devyani Kota on March 16, 2016 09:59 PM

    Hey there reader !
    It's been quite some time since I posted an update on the brainstorming going on for the new widgets for the hubs pages.
    So, I was working on an issue for the Subscriber/Subscribe widget, where I had to list the number of subscribers that were subscribed to a user (his/her hub page), and also the number of users he/she was subscribed to: basically, the subscription stats.
    pretty simple, eh? That’s what I thought too 😛
    Well PR #111 fixes it !

    I learned a lot while trying to figure out ways to solve this issue, jotting them down :

    • how easily one can access the attributes if we know the relation between two objects/tables.
    • using dir() function to see beyond an alphanumeric object !
    • write test-cases.

    Structure of HUBS :
    User : A user table with attributes like openid, fullname, and created_on.
    Hub : The individual user's hub page (in simple terms, a profile page) with attributes like name, created_on, widgets, etc.
    So, we wanted to implement a many-to-many relationship between users and hubs, 'coz many users can subscribe themselves to many hubs.
    So, I spent quite some time with the flask-sqlalchemy tutorial to learn the pythonic implementation of the many-to-many relationship. 😛 my bad !!
    It's really amazing to read code written by awesome developers. Ralph's code already had the 'Association' table to connect the users and hubs.

    >>> for assoc in widget.hub.associations:
    ...     sub_list = [u.name for u in assoc.user.subscriptions]
    >>> subscribers = [u.username for u in widget.hub.subscribers]

    'widget.hub.associations' returned a list of associated user objects.
    This is where the dir() function came to my rescue 😛
    So I tried to print the attributes of the user objects that were being returned, by simply printing the output of the same.

    >>> print dir(widget.hub.associations)

    Thus, displaying a list of the attributes of the object.
    Hub names were extracted, which helped extract the subscriptions of the respective hub_user's page.
    The rest was a simple edit in the jinja template, which made the Subscribe/Unsubscribe button do its magic 😉

    But, that was not the end !
    I had tests failing ! 😦
    I didn’t have any experience with writing tests.

    Why were tests failing ?
    'coz the default value of subscribers was still assigned '0' when I had changed it into a list. Changing its default value to '[]' and voila !
    It works !

    I came across an awesome quote when bugs overpower you 😛

    Your Goals don’t care if its the weekend !


    Bitcoin Aware Barber's Pole

    Posted by Mo Morsi on September 26, 2015 12:17 AM

    A few months back the guild hosted an Arduino Day workshop. The event was a great success, there was a large turnout, many neat presentations & demos, and much more. My project for the day was an Arduino controlled Barber's Pole that would poll data from the network and activate/deactivate multiple EL wires attached to it. Unfortunately due to a few technical issues the project took a bit longer than originally planned and had to be sidelined. But having recently finished it up, I present the Bitcoin Aware Barber's Pole, now on display at the guild!

    The premise was straightforward: an Arduino Diecimila would be used in combination with the Ethernet Shield to retrieve data from the Internet and activate one of two EL wires mounted to a pole-like object. Several materials were considered for the pole, but we ended up using PVC as it was easiest to work with and matched what we were going for aesthetically. Since the EL wire is driven from an AC source, we used two SPDT relays to activate the circuit based on the state of the Arduino's digital pin output. The constructed circuit was simple, incorporating the necessary components to handle flyback current.

    The software component of this project is what took the most time, due to several setbacks. Perhaps the biggest was the shortage of address space we had to work with; micro-controller platforms are notorious for this, but the Diecimila only gave us 16KB of flash memory, which after what I'm assuming is space reserved for the bootloader, shared libraries, and other logic, amounts to ~14KB of memory for the user program and data. Contrast this to modern general purpose PCs, where you'd be hard pressed to find a system with less than 2GB of memory! This had many side effects, including not having enough address space to load and use the Arduino HttpClient or Json libraries. Thus a very rudimentary HTTP request and parsing implementation was devised to serve the application's needs. All in all it was very simple but specialized, only handling the edge cases we needed and nothing else.

    Of course the limited address space meant we were also limited in the number of constants and variables we could use. Using the heap (also small on this platform) always introduces additional complexities / logic of its own, so it was avoided. Since each data source would require the metadata needed to access it, we decided to only poll one location and use it to activate either of the two EL wires depending on its state.

    In all of this you may be asking why we didn't just order a newer Arduino chip with a bigger address space, to which I reply: what would I do with the one that I had?!?! Plus developing for platforms with memory & other restrictions introduces fun challenges of its own.

    At one point we tried splitting the sketch into multiple modules via the Arduino IDE interface. This was done to try and better organize the project in a more OOD fashion, but it introduced more complexities than it was worth. From what I gather, most sketches are single-module implementations, perhaps incorporating some external libraries via the standard mechanisms. When we attempted to deviate from this we noticed some weird behavior, perhaps as a result of the includes from centralized Arduino & supporting libraries being pulled into multiple modules. We didn't debug too far, as overall the application isn't that complex.

    One of the last challenges we had to face was selecting the data to poll. Again, due to the limited memory space, we could only store so much HTTP response data. Additionally, even rudimentary parsing of JSON or another format would take a bit of logic which we didn't have the space for. Luckily we found Bitcoin Average, which provides an awesome API for getting up-to-date Bitcoin market data. Not only do they provide a rich JSON-over-REST interface, but fields can be polled individually for their flat text values, from which we retrieve the BTC/USD market average every 5 minutes. When bitcoin goes up, the blue light is activated; when it goes down, the red light is turned on. Of course this value is a decimal, and enabling floating point arithmetic consumes more memory. To avoid this, we parsed the integer and decimal portions of the currency separately and ran the comparisons individually (in sequence).
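    That float-free comparison can be sketched like so (in Ruby for brevity rather than the sketch's C; a hypothetical helper assuming the API returns a fixed number of decimal digits, so the decimal parts are directly comparable as integers):

```ruby
# Compare two prices given as flat text (e.g. "431.25") without floating
# point: compare the integer parts first, and only fall back to the decimal
# parts on a tie. Assumes both values carry the same number of decimal digits.
def price_cmp(a, b)
  a_int, a_dec = a.split('.').map(&:to_i)
  b_int, b_dec = b.split('.').map(&:to_i)
  return a_int <=> b_int unless a_int == b_int
  (a_dec || 0) <=> (b_dec || 0)
end
```

    A positive result would light the blue wire, a negative one the red.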

    But unfortunately there was one last hiccup! While the Bitcoin Average documentation stated that HTTP was supported, querying their server via port 80 in fact just resulted in a 301 redirect to HTTPS on port 443. Since HTTPS/SSL handling proves impractical even on more modern Arduino platforms with larger address spaces, due to the complexity of the algorithms, we had to devise a way to communicate with the server via HTTP in order to retrieve the data. To do so we wrote & deployed a proxy that listens for HTTP requests, issues an HTTPS request to Bitcoin Average, and returns the result. This was simple enough to do with the Sinatra micro-framework, as you can see below:

    # HTTP -> HTTPS proxy
    # Written to query bitcoinaverage.com via http (only accessible by https).
    # Run as a standard Rack / Sinatra application
    # Author: Mo Morsi <mo@morsi.org>
    # License: MIT
    require 'sinatra'
    require 'open-uri'
    URL = "https://api.bitcoinaverage.com/ticker/USD/last"
    get '/' do
      open(URL) do |content|
        content.read
      end
    end

    The final result was hosted on this server and the Arduino sketch was updated to use it. All in all the logic behind the Barber's Pole can be seen below:

    //// Bitcoin Barber Shop Pole
    //// Author: Mo Morsi <mo@morsi.org>
    //// Arduino Controller Sketch
    //// License: MIT
    //// For use at the Syracuse Innovators Guild (sig315.org)
    #include <SPI.h>
    #include <Ethernet.h>
    //// sketch parameters
    byte mac[]                           = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
    int port                             = 80;
    char server[]                        = "projects.morsi.org";
    char host[]                          = "Host: projects.morsi.org";
    char request[]                       = "GET /barber/ HTTP/1.1";
    char user_agent[]                    = "User-Agent: arduino-ethernet";
    char close_connection[]              = "Connection: close";
    char content_length_header[]         = "Content-Length";
    char CR                              = '\r';
    char NL                              = '\n';
    unsigned long lastConnectionTime     = 0;
    const unsigned long postingInterval  = 300000; // - every 5 mins
    boolean lastConnected                = false;
    const int  max_data                  = 32;
    int  data_buffer_pos                 = 0;
    char data_buffer[max_data];
    int  content_length                  = -1;
    boolean in_body                      = false;
    int current_btc                      = 0;
    int current_btc_decimal              = 0; // since were not using floats
    const int blue_pin                   = 5;
    const int red_pin                    = 7;
    unsigned long lastLightingTime       = -1;
    const unsigned long lightingInterval = 5000;
    // arduino hook in points & config
    EthernetClient client;
    void setup() {
    void loop() {
    void pins_config(){
      pinMode(blue_pin, OUTPUT);
      pinMode(red_pin, OUTPUT);
    void serial_config(){
      while (!Serial) { ; } // this check is only needed on the Leonardo
    // network operations
    void net(){
      else if(should_issue_request())
      lastConnected = client.connected();
    void block(){
      for(;;) { ; }
    boolean should_reset(){
      return !client.connected() && lastConnected;
    void net_reset(){
    boolean should_issue_request(){
      return !client.connected() && (millis() - lastConnectionTime > postingInterval);
    void net_config(){
      if (Ethernet.begin(mac) == 0) {
        Serial.println("net failed");
    void net_read(){
      if(client.available()) {
        char c = client.read();
    void net_request(){
      if (client.connect(server, port)) {
        lastConnectionTime = millis();
      }else {
    // data buffer management
    void buffer_append(char c){
      data_buffer[data_buffer_pos] = c;
      data_buffer_pos += 1;
      if(data_buffer_pos >= max_data)
        data_buffer_pos = 0;
    void buffer_reset(){
      data_buffer_pos = 0;
    // moves last char in buffer to first, sets pos after
    void buffer_cycle(){
      data_buffer[0]  = data_buffer[data_buffer_pos-1];
      data_buffer_pos = 1;
    void buffer_print(){
      Serial.print("buf ");
      Serial.print(": ");
      for(int p = 0; p < data_buffer_pos; p++)
    // http parsing / handling
    // https://en.wikipedia.org/wiki/HTTP_message_body
    int char_pos(char ch){
      for(int p = 1; p < data_buffer_pos; p++)
        if(data_buffer[p] == ch)
          return p;
      return -1;
    int seperator_pos(){
      return char_pos(':');
    int decimal_pos(){
      return char_pos('.');
    boolean status_detected(){
      if(data_buffer_pos < 4) return false;
      int cr_pos    = data_buffer_pos - 3;
      int lf_pos    = data_buffer_pos - 2;
      int alpha_pos = data_buffer_pos - 1;
      // only upper case letters
      int alpha_begin = 65;
      int alpha_end   = 90;
      return data_buffer[cr_pos]    == CR          &&
             data_buffer[lf_pos]    == NL          &&
             data_buffer[alpha_pos] >= alpha_begin &&
             data_buffer[alpha_pos] <= alpha_end;
    boolean header_detected(){
      if(data_buffer_pos < 5) return false;
      int cr_pos     = data_buffer_pos - 2;
      int lf_pos     = data_buffer_pos - 1;
      return seperator_pos()     != -1   &&
             data_buffer[cr_pos] == CR   &&
             data_buffer[lf_pos] == NL;
    boolean is_header(char* name){
      int pos = 0;
      while(name[pos] != '\0'){
        if(name[pos] != data_buffer[pos])
          return false;
        pos++;
      }
      return true;
    boolean body_detected(){
      if(data_buffer_pos < 4) return false;
      int first_cr  = data_buffer_pos - 4;
      int first_lf  = data_buffer_pos - 3;
      int second_cr = data_buffer_pos - 2;
      int second_lf = data_buffer_pos - 1;
      return (data_buffer[first_cr]  == CR &&
              data_buffer[first_lf]  == NL &&
              data_buffer[second_cr] == CR &&
              data_buffer[second_lf] == NL);
    int extract_content_length(){
      int value_pos = seperator_pos() + 1;
      char content[data_buffer_pos - value_pos];
      for(int p = value_pos; p < data_buffer_pos; p++)
        content[p-value_pos] = data_buffer[p];
      return atoi(content);
    void process_headers(){
      else if(header_detected()){
          content_length = extract_content_length();
      else if(body_detected()){
        in_body = true;
    int extract_new_btc(){
      int decimal  = decimal_pos();
      int buf_size = decimal == -1 ? data_buffer_pos - 1 : decimal;
      int iter_end = decimal == -1 ? data_buffer_pos     : decimal;
      char value[buf_size];
      for(int p = 0; p < iter_end; p++)
        value[p] = data_buffer[p];
      return atoi(value);
    int extract_new_btc_decimal(){
      int decimal  = decimal_pos();
      if(decimal == -1 || decimal == data_buffer_pos - 1) return 0;
      int buf_size = data_buffer_pos - decimal - 1;
      int iter_start = decimal + 1;
      char value[buf_size];
      for(int p = iter_start; p < data_buffer_pos; p++)
        value[p - iter_start] = data_buffer[p];
      return atoi(value);
    void process_body(){
      if(!in_body || data_buffer_pos < content_length) return;
      process_new_btc(extract_new_btc(), extract_new_btc_decimal());
      content_length = -1;
      in_body = false;
    void process_response(){
    // target specific data processing
    void print_btc(int btc, int btc_decimal){
    boolean value_increased(int new_btc, int new_btc_decimal){
      return new_btc > current_btc || (new_btc == current_btc && new_btc_decimal > current_btc_decimal);
    boolean value_decreased(int new_btc, int new_btc_decimal){
      return new_btc < current_btc || (new_btc == current_btc && new_btc_decimal < current_btc_decimal);
    void process_new_btc(int new_btc, int new_btc_decimal){
      //print_btc(current_btc, current_btc_decimal);
      //print_btc(new_btc, new_btc_decimal);
      if(value_increased(new_btc, new_btc_decimal)){
      else if(value_decreased(new_btc, new_btc_decimal)){
      current_btc = new_btc;
      current_btc_decimal = new_btc_decimal;
    // pin output handling
    boolean should_turn_off(){
      return lastLightingTime != -1 && (millis() - lastLightingTime > lightingInterval);
    void lights(){
        lastLightingTime = -1;
    void turn_on_blue(){
      lastLightingTime = millis();
      digitalWrite(blue_pin, HIGH);
    void turn_off_blue(){
      digitalWrite(blue_pin, LOW);
    void turn_on_red(){
      lastLightingTime = millis();
      digitalWrite(red_pin, HIGH);
    void turn_off_red(){
      digitalWrite(red_pin, LOW);
    void turn_on_both(){
    void turn_off_both(){

    The actual construction of the pole consists of a short length of PVC pipe capped at both ends. The text was spray painted on, and a small hole was drilled in the back for the power & network cables. The circuitry was simply placed flat inside the PVC; no special mounting or attachments were used or needed.

    The final setup was placed near the entrance of the Guild, where anyone walking in or out could see it.

    All in all it was a fun project that took a bit longer than originally planned, but when is that not the case?! Microcontrollers always prove to be unique environments, and although in this case it just amounted to some C++ development, the restricted platform presented several interesting challenges I hadn't encountered since grad school. Going forward, I'm contemplating the Raspberry Pi platform for my next project, as it seems to be a bit more flexible and has more address space, while still being available at a great price point.


    read more

    CloudForms v2 (MiQ) DB - 08/2015

    Posted by Mo Morsi on September 13, 2015 03:20 PM

    Now that's a db! Created using Dia. Relevant table reference / listing can be found here

    Modeling is the first step towards Optimization.

    read more

    Polished to a Resilience

    Posted by Mo Morsi on August 01, 2015 05:46 PM

    Long time since the last post; it's been quite an interesting year! We all move forward, as do the efforts on various fronts. Just a quick update today on two projects previously discussed, as things ramp up after some downtime (look for more updates going into the fall / winter).

    Polisher has received a lot of work in the domains of refactoring and new tooling. The codebase is more modular and robust, test coverage has been greatly expanded, and as for the new utilities:

    • gem_mapper.rb: Lists all gem / gemfile dependencies & the versions available downstream
    • missing_deps.rb: Highlights dependencies missing downstream as well as any alternate versions available
    • gems2update.rb: Cross references dependencies downstream w/ updates available upstream and recommends specific versions to update to. This facilitates a consistent update across dependencies which may impose different requirements on the same gems. If a unified update strategy cannot be deduced gems2update will highlight the conflicts.

    These can be seen in action via the asciinema.org screencasts referenced above.

    Resilience, our experimental ReFS parser, has also been polished. Various utilities previously written have been renamed, refactored, and expanded, and new tooling has been written to continue the analysis. Of particular note are:

    • fcomp.rb - A file metadata comparison tool that runs a binary diff on file metadata in the fs
    • axe.rb - The attribute extractor; pulls file-specific metadata out of the ReFS filesystem and dumps it into a local file, on which additional analysis will (in part) be done
    • rarser.rb - The complete filesystem parser / file extractor; pulls files and directories off the image and dumps them into local files

    Also worth noting are other ongoing efforts, including updating Ruby to 2.3 in rawhide and updating Rails to 4.x in EPEL.

    Finally, the SIG has been going through (another) transitional period. While membership is still growing, there are many logistical matters currently up in the air that need to be resolved. Look for updates on that front, as well as many others, in the near future.

    read more

    Event report: FUDCon APAC 2015, Pune

    Posted by Sarup Banskota on July 14, 2015 06:30 PM

    I’m writing a blog post after a very long time. Somewhere between the last post and this one, I graduated and started working for Mesitis Capital as a Product Designer. On the open source community front, I haven’t programmed much recently, but I have been mentoring a couple of students over this year’s GSoC. Two weeks ago, I was at FUDCon in Pune. Here’s a quick summary.

    Day 0 - Arrival Day

    For the first evening, it was mostly just people arriving and us meeting up over dinner at Kushal’s place. I really enjoyed meeting Suchakra after a long time - we had a quick discussion about our AskFedora student Anuradha, since mid-term evaluations were around the corner. I met Harish and Danishka, who live in Singapore - they shared tips around housing, transport, expenses, hackerspaces - all the things I’ll need when I move there later this year.

    I had a workshop the next day, so I wanted to sleep “early”, but it got pretty late as usual ;-)

    Day 1 - First Workshop Day

    FUDCon at MITCOE - picture stolen from Suchakra's blog

    The morning was mostly spent meeting folks who arrived that day - Gnokii, Tuan and the rest. Come afternoon, it was time for my workshop on building responsive front ends. This was my first attempt at doing a few things - conducting a session without slides, programming on stage, the topic itself - and I think a lot of those choices were great, because I ended up heavily modifying what I had wanted to show. I do regret that I couldn’t get around to teaching the stuff I really wanted to, but given a beginner audience, I’m happy they picked up some key ideas. A couple of them also emailed me after the event asking for further resources, so it does look like it was handy.

    In the evening, we had a sort of mini FUDPub - most of us speakers & volunteers staying at the hotel went to a nearby pub. Gnokii, Somvandda, Yogi, Danishka and I got a table and discussed breweries and food - pretty interesting stuff. It turns out Charul and Sinny were neighbors, so Suchakra and I ended up chatting with them about work, projects, college life, etc. - again sleeping quite late.

    Day 2 - Meeting students

    I didn’t have any sessions scheduled for the second day, so I took the opportunity to hang out with students. I learned that many students from Amrita University, Kollam were in town, so we headed out for lunch together, discussing projects and the scope for them to contribute to some FOSS projects. Later during the day, some students from MITCOE spent quite some time with me; we talked about how the Fedora Project is organized, who does what, and how one gets into the areas that interest them. There were two students interested in contributing to the Design team, so I explained the various things the Design team does, the people involved, and the tools they use, and encouraged them to attend the workshops from the Design track on the final day.

    In the evening, we had the social event at bluO in Phoenix MarketCity, a large shopping complex. There was bowling organized, great food, and a very energetic environment.

    Day 3 - Final Day

    I had an early joint workshop session with Mayur on how Git works. Once again, catering to the audience, we decided to focus on what it is and how to fiddle with it. While Mayur took the stage and maintained an overall flow for the session, I went around looking at people’s screens and ensuring everyone was doing the right thing. There were lots of questions popping up around Git server-centric infrastructure - it was fun answering them. There were also a couple of people who weren’t new to Git but didn’t like merge conflicts, so we sat down and helped them through it.

    Harish soon followed with a key signing party. I’m happy I attended it - it was great refresher material, and I got some concepts around the whole GPG process cleared up in my head. As is always the case, I learn better by doing, so I’ll try to teach it to somebody and hopefully become clearer that way.

    For the night, we had dinner at the hotel - once again, it was fun recommending Indian dishes to my non-local friends, and it does look like they enjoyed them.

    Overall, amazing time at my first FUDCon. I look forward to it next year! :-)

    Picture credits: Suchakra’s blog at http://suchakra.wordpress.com

    Update on CentOS GSoC 2015

    Posted by Karsten Wade on June 02, 2015 03:34 PM

    Here’s an update on the CentOS Project Google Summer of Code for 2015 posted on the CentOS Seven blog:


    This might be of interest to the Fedora Project community, so I’m pushing my own reference here to appear on the Fedora Planet. Much of the work happening in the CentOS GSoC effort may be useful as-is or as elements within Fedora work. (In at least one case, the RootFS build factory for Arm, the work is also happening partially in Fedora, so it’s a triple-win.)

    Working with Objects

    Posted by Edgar Muniz Berlinck on April 08, 2015 01:27 PM

    Creating objects

    Before starting our example, I'd like to show a bit of what Parse offers us.

    Most systems save, update, and query information in a database, and in this regard Parse makes the developer's life much easier. Suppose I want to create an object called Pessoa (Person).

    var Pessoa = Parse.Object.extend("Pessoa");

    First I need to create a reference to a new Parse object; for that I use the Parse.Object.extend function. With the reference created, I can start creating instances:

    var pessoa = new Pessoa();
    var outraPessoa = new Pessoa();

    See? Quite simple.


    With our objects created and properly instantiated, we can create and update attributes. Attributes can be created and/or assigned values using the set method. Suppose I want to create an attribute called nome (name) on my pessoa object:

    pessoa.set("nome", "Fulano");

    To retrieve this attribute we use the get method:

    var nome = pessoa.get("nome");


    Besides attributes, we can create methods on our objects:

    var Pessoa = Parse.Object.extend("Pessoa", {
      // Instance methods
      falar : function (frase) {
        alert(this.get("nome") + ": " + frase);
      },
      dormir : function () {
        alert(this.get("nome") + " dormiu");
      }
    }, {
      // Class methods
      create : function (nome) {
        var pessoa = new Pessoa();
        pessoa.set("nome", nome);
        return pessoa;
      }
    });

    Note that we defined two kinds of methods: instance methods and class methods; each is available according to its context:

    var pessoa = new Pessoa();
    pessoa.set("nome", "Gohan");
    pessoa.falar("Oi, eu sou o Gohan");
    pessoa.dormir(); // Gohan dormiu

    var outraPessoa = Pessoa.create("Goku");
    outraPessoa.falar("Oi, eu sou o Goku");
    outraPessoa.dormir(); // Goku dormiu

    Persisting Objects

    Now that we know how to create an object, let's learn how to persist it in Parse. We just need to use the save function. For this example I'll reuse the Pessoa class created above.

    var pessoa = new Pessoa();
    pessoa.set("nome", "Goku");
    pessoa.set("ki", 9000);


    The save method accepts callbacks to handle events; the available events are success and error. Follow the code below:

    pessoa.save(null, {
      success : function (pessoa) {
      },
      error : function (message) {
      }
    });

    What happens if everything goes well? Parse will look for an entity called Pessoa and, if it doesn't find one, it will create it for you. You will notice that your entity has the following attributes:

    • objectId, which is the object's identifying key;
    • nome, the attribute we created;
    • ki, the attribute we created;
    • createdAt, created automatically;
    • updatedAt, created automatically.

    See how easy that was?

    And what about that null we passed? Simple: we can initialize the class attributes using the get and set methods, or we can optionally pass everything to the save method, like this:

    pessoa.save({nome: "Goku", ki: "9000"}, {success: ..., error: ...});

    Retrieving, Updating and Deleting Objects

    Retrieving an object is as simple as saving it; we just need to use the Parse.Query object. The easiest way is to retrieve it by objectId:

    var Pessoa = Parse.Object.extend("Pessoa");
    var query = new Parse.Query(Pessoa);

    Now suppose the person we want has an objectId equal to "xWMyz4YEGZ". To retrieve it we need to do the following:

    query.get("xWMyz4YEGZ", {
      success: function(pessoa) {
        console.log(pessoa.get("nome"));
      }
    });

    Note that, just like the save method, the get method also has the success and error callbacks.

    If everything went well, the person is returned and their data is printed to the console. See how easy that is?

    Now what if I want to change something? Simple: just update the desired attributes and call the save method again, like this:

    query.get("xWMyz4YEGZ", {
      success: function(pessoa) {
        pessoa.set("nome", "Gohan");
        pessoa.save();
      }
    });

    Done! The object is updated! If you want to undo any change that hasn't been saved yet, just use the fetch method.

    pessoa.fetch({
      success: function(pessoa){
      },
      error: function (message) {
      }
    });


    And to delete? Just use the destroy method.

    query.get("xWMyz4YEGZ", {
      success: function(pessoa) {
        pessoa.destroy();
      }
    });

    Calling the unset method removes an attribute:

    query.get("xWMyz4YEGZ", {
      success: function(pessoa) {
        pessoa.unset("nome");
        pessoa.save();
      }
    });

    Utility Methods

    Parse objects offer some utility methods, depending on the data type we're working with. They are:

    Increment and Decrement

    Very useful methods for updating numeric fields without having to worry about concurrency.
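    A minimal sketch, reusing the ki attribute from earlier (the amount here is illustrative; to decrement, pass a negative amount):

```javascript
// Atomically bump Goku's ki on the server - no read-modify-write race
pessoa.increment("ki", 1000);
pessoa.save();
```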




    Thinking there wouldn't be any utilities for working with arrays? You're mistaken! Parse offers us the following methods:

    • add, which appends an object to the end of the list;
    • addUnique, which adds an object only if it isn't already present. It's important to note that the position where it's stored is not necessarily the last one;
    • remove, which removes all instances of a given object.

    Using them is quite simple:

    pessoa.addUnique("mestres", "Mestre Kame");
    pessoa.addUnique("mestres", "Senhor Kayo");

    Everything is very easy with Parse, and it just works!

    In the next post we'll start our Parse Social project.

    Introduction to Parse

    Posted by Edgar Muniz Berlinck on April 08, 2015 01:18 PM

    Getting to Know Parse

    Parse offers us a small administration interface. Although it's quite intuitive, I'll introduce it briefly.
    In the image above is your Dashboard. Here you can create applications or see a summary of the ones that already exist.
    When you select an application, Parse shows more information about it. The most important part is the Core, shown in the figure above. Here you can see all of the saved objects, create new ones, update existing ones, and even delete them.

    The other sections won't be covered, as they're outside the scope of this material. But feel free to explore them a bit more on your own.

    Starting a Project

    First we need to create an app. So log into your account (if you haven't created one yet, do so - it's free) and click Create a New App on your dashboard:
    For these examples we'll create an app called Parse Social. This app will be a small social network built on Parse.
    App created! Now we need the access keys. Click on Keys and you'll be redirected to the page below:
    The Application ID is the identifying key of the app we just created. The other keys are used depending on which API we're going to use. Since we'll be using JavaScript, copy the Application ID and the JavaScript Key and paste them somewhere.

    It's time to start our project. In this example I'll use Sublime Text, the editor I use day to day, the Google Chrome browser, and the IIS web server. You can use any text editor and browser you prefer. The web server is optional, but I recommend using one. I use IIS because it comes installed and configured on Windows, but you can use Apache or another server of your choice.

    This will be our directory structure:
    For this project we'll use the single-page pattern, since I like working that way. Now on to index.html. It should start out more or less like the figure below:

    Note that in this project, besides Parse, we're going to use Bootstrap and Font Awesome, because I like working with them. We'll also use a Google font. I won't say much about them, since I'm no specialist and the subject is outside our scope.

    Observe that nothing is bundled into the project; everything is imported from the cloud. I like working this way because it's easier, but if you'd rather bundle everything into your application, feel free.

    So let's go over the imports:

    • Lines 6, 7, 8 and 9 are the imports for Bootstrap;
    • line 11 is the Font Awesome import;
    • line 12 is the import of the font we're going to use;
    • line 14, finally, is the Parse library.

    Notice that on line 25 we import a file called app.js; let's create it and start coding. To start off we only need to include the following:

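    At this point app.js boils down to a single call (the two arguments are placeholders for your own keys):

```javascript
// Connect the SDK to your app; substitute the keys you saved earlier
Parse.initialize("APPLICATION_ID", "JAVASCRIPT_KEY");
```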

    Remember the keys I asked you to save? Well, substitute APPLICATION_ID and JAVASCRIPT_KEY with your respective keys. With that, we're ready to start!


    I can't finish this chapter without explaining a bit about the command we just used. Whenever there's a need to write to or retrieve an object from Parse, it's important to connect to it first. That's what the initialize command does.

    This operation is expensive, so one of the main reasons I like working with single-page apps so much is that I only have to call it once. There's no problem with staying connected to Parse the whole time; it's actually rather convenient.

    Course - Application Development using Parse - The Basics

    Posted by Edgar Muniz Berlinck on April 07, 2015 06:52 PM

    Have you heard of Parse? Since I discovered it, my experience with application development has changed a lot. It changed so much that I decided to dedicate part of my time to writing a bit about it. Although the documentation is quite complete, I missed having material in Portuguese at a slightly more introductory level.

    So what is Parse?

    Parse is a backend developed primarily for building applications. It abstracts away many common features, such as authentication, object CRUD, file handling, and push notifications.

    With Parse you can develop for the following platforms: Android, iOS, Windows Phone, Web, and Arduino.

    The supported languages are: .Net, Java, JavaScript, PHP, Objective-C, Swift, and C.

    This course will focus on Parse for JavaScript, since it's the most accessible for everyone.

    In the first part I'll focus on the basics - the little things you'll use all the time.

    Until the next post.

    The Right Mind And The Confused Mind

    Posted by Mo Morsi on March 15, 2015 09:28 PM

    From The Unfettered Mind which offers some great advice on the subject of meditation (take or leave what you will):

    The Right Mind And The Confused Mind
    The Right Mind is the mind that does not remain in one place. It is the mind that stretches throughout the entire body and self. The Confused Mind is the mind that, thinking something over, congeals in one place. When the Right Mind congeals and settles in one place, it becomes what is called the Confused Mind. When the Right Mind is lost, it is lacking in function here and there. For this reason, it is important not to lose it. In not remaining in one place, the Right Mind is like water. The Confused Mind is like ice, and ice is unable to wash hands or head. When ice is melted, it becomes water and flows everywhere, and it can wash the hands, the feet or anything else. If the mind congeals in one place and remains with one thing, it is like frozen water and is unable to be used freely: ice that can wash neither hands nor feet. When the mind is melted and is used like water, extending throughout the body, it can be sent wherever one wants to send it. This is the Right Mind.

    The Mind Of The Existent Mind And The Mind Of No-Mind
    The Existent Mind is the same as the Confused Mind and is literally read as the "mind that exists." It is the mind that thinks in one direction, regardless of subject. When there is an object of thought in the mind, discrimination and thoughts will arise. Thus it is known as the Existent Mind.

    The No-Mind is the same as the Right Mind. It neither congeals nor fixes itself in one place. It is called No-Mind when the mind has neither discrimination nor thought but wanders about the entire body and extends throughout the entire self.

    The No-Mind is placed nowhere. Yet it is not like wood or stone. Where there is no stopping place, it is called No-Mind. When it stops, there is something in the mind. When there is nothing in the mind, it is called the mind of No-Mind. It is also called No-Mind-No-Thought.

    When this No-Mind has been well developed, the mind does not stop with one thing nor does it lack any one thing. It is like water overflowing and exists within itself. It appears appropriately when facing a time of need. The mind that becomes fixed and stops in one place does not function freely. Similarly, the wheels of a cart go around because they are not rigidly in place. If they were to stick tight, they would not go around. The mind is also something that does not function if it becomes attached to a single situation. If there is some thought within the mind, though you listen to the words spoken by another, you will not really be able to hear him. This is because your mind has stopped with your own thoughts.

    If your mind leans in the directions of these thoughts, though you listen, you will not hear; and though you look, you will not see. This is because there is something in your mind. What is there is thought. If you are able to remove this thing that is there, your mind will become No-Mind, it will function when needed, and it will be appropriate to its use.

    The mind that thinks about removing what is within it will by the very act be occupied. If one will not think about it, the mind will remove these thoughts by itself and of itself become No-Mind. If one always approaches his mind in this way, at a later date it will suddenly come to this condition by itself. If one tries to achieve this suddenly, it will never get there.

    An old poem says:

    To think, "I will not think"-
    This, too, is something in one's thoughts.
    Simply do not think
    About not thinking at all.

    read more

    Desk Headphones

    Posted by Rohit Paul Kuruvilla on February 07, 2015 05:56 PM

    Recently, I replaced the headphones I've had for a long time with some new ones. I had used the Beyerdynamic DT 770 for years (now discontinued). On a flight last year, someone suddenly leaned back the chair in front of me; the cable got caught and the jack bent really badly. The sound cut in and out a lot. I realize I could just replace the jack, but I thought it was a good excuse to go nuts.

    My whole setup with new headphones, case, DAC, preamp, and cables was a little under $400 (half for the headphones and half for all of the toys). You could definitely just get the headphones for $200.

    If you're looking for something on the cheap, I recommend a pair of Sony MDR7506. Standard issue studio headphones. Can't go wrong for only $70.


    Beyerdynamic Custom One Pro

    I ended up getting a pair of Beyerdynamic Custom One Pro since I liked my DT 770s so much. So far I'm a big fan. They only run $200. I didn't want to go overboard. My friend Bryn Jackson recommended PSB M4U, V-MODA M-100, and Master & Dynamic MH40 as well. They are all really solid choices, but I wanted to stick with Beyerdynamic and stay a bit on the cheaper side.

    You can also customize them:

    My headphones

    I ordered replacement cushions directly from Beyerdynamic. They have all kinds of things you can change. Definitely check it out.


    I also picked up a hard case to protect my new investment.

    Slappa HardBody PRO Headphone Case

    I ended up getting the Slappa HardBody PRO Headphone Case for only $30. Not bad. It's a bit bulky, but it should fit nicely in a backpack. It's a little too big to fit comfortably in a messenger bag, unfortunately.


    Next is the DAC (digital-to-analog converter). This takes the digital audio from your computer (via USB) and converts it to the analog audio that your headphones can actually produce. Your computer, phone, etc. has one of these built in, since it has a headphone jack, but most stock ones are pretty low quality by audio-nerd standards.

    FiiO E10K

    I ended up getting a FiiO E10K (also a recommendation from Bryn) which I'm super happy with. It sounds really great. Especially for being only $70. It's also powered by USB which is really nice.

    Headphone Amp

    Finally, I got a tube headphone amp. I'm a big fan of tubes. They make everything sound warm and full. My friend, Sam McDonald, recommended the one he had.

    Bravo Audio V2 Class A 12AU7 Tube Multi-Hybrid Headphone Amplifier

    I've been really happy with the Bravo Audio V2 Class A 12AU7 Tube Multi-Hybrid Headphone Amplifier. My only complaints are the knob is a little close to the headphone jack and the input is on the side instead of the back, but those are just minor nitpicks. There's also a power cable you need to plug into the wall. That's expected for a tube preamp though. It sounds really good. Especially for only $70.

    Setting It Up

    The only other thing I got was a fancy 1/8" to RCA cable for $9.

    1. Connect the FiiO with the included USB cable to your computer
    2. Plug in the 1/8" end of the 1/8" to RCA cable into the line out on the back of the FiiO
    3. Set the gain switch on the back of the FiiO to "L"
    4. Plug in the preamp's power
    5. Plug the RCA end of the 1/8" to RCA cable into the preamp on the right side
    6. Turn the volume on the preamp all the way down and connect your headphones
    7. Turn the preamp on with the switch on the back and turn the FiiO on with the dial on the front. Since we're using the line out of the FiiO, the volume knob won't do anything so we can control it with the preamp instead.

    That's it! You can experiment with the bass switch on the front of the FiiO. For my setup, I've enjoyed having it on most of the time.

    Easy enough! Let me know if you try this out on Twitter!

    Event report: Design FAD, Westford

    Posted by Sarup Banskota on January 29, 2015 06:30 PM

    We had a fantastic Design team FAD between 16-18 January at Red Hat’s Westford office. For me, it turned out to be an opportunity to (finally!) meet in person with my mentor Emily, and Mo, two people I’ve been in touch with over IRC/email like forever. Among others physically present were Marie, Sirko, Suchakra, Chris, Prima, Zach, Samuel, Langdon, Paul, Luya and Ryan. Kushal joined remotely despite the odd hours in India.

    Mo on the whiteboard

    Mo did a great job outlining topics we needed to discuss on the whiteboard the first day. At first it looked like a lot to me and honestly I felt like we’d never get to half of them. At the end of the day, to my (pleasant) surprise, we had covered most, if not all of the planned topics. We spent quality time evaluating what the team’s goals are and prioritizing them. We revised our ticket flow into a more structured and well-defined one. We discussed newbie management and how to deal with design assets.

    Random discussions

    Suchakra, Zach and I worked on redesigning askfedora. What was supposed to be a low-fidelity mockup wound up being pretty hi-fi, since I wanted to take Inkscape lessons from Suchakra and we dug into the details. Suchakra has blogged twice about it, so if you’d like to learn more, find the first one here and the second here.

    Askfedora mockup - photo courtesy Suchakra's blog

    If we manage to squeeze in time, we’d like to work on the redesign in the weekends. Another group focused on cleaning tickets, so as you’d imagine, lots of trac emails getting tossed around. When I had a look at the design trac after they were done, it seemed like another trac altogether!

    Ticket discussions

    GlitterGallery was also brought up. What I took back for the GG team from the FAD was that our main priorities are improving the file history view and SparkleShare integration. On my return, I’ve already started work on a new branch.

    Quick GG status demo

    Emily and I intended to do a GG hackfest once everyone left on the final day, but we had transportation issues and couldn’t continue. To make up for that, we held an IRC meeting yesterday to assign tasks to Paul, Emily, Shubham (new kid on the block), and me. I’m excited that the repo is active again!

    Productive FAD for everyone :) Thanks to the local organizers and Gnokii; it was super worthwhile.

    (Gnokii, sorry I sucked at gaming!)

    Gnokii playing Champions of Regnum

    (Photos courtesy Prima).

    Event report: IIT Madras Hackfest & Release Party

    Posted by Sarup Banskota on January 25, 2015 06:30 PM

    This year started for me with a three-night Hackfest workshop at the Indian Institute of Technology, Madras. While the workshop strayed completely from my goals, the post-event commentary seems to indicate that attendees had a good time.

    Students were screened for attendance based on a general FOSS questionnaire, followed by their submissions to a set of programming tests set by the mentors. I mentored on behalf of the Fedora Project. Other mentors included Anoop & Kunal (Drupal), Kunal (Wikimedia Foundation) and Nalin (iBus Sharda Project).

    Mentors group photo

    I began to worry because almost everyone initially showed up with Windows machines, and I had planned intensive exercises with no time allocated for setting up a Linux distribution. However, it wouldn’t have made a lot of sense to dive into programming when students were new to the idea of a distribution, the command line, and installing packages, which is why I decided to dedicate a whole lot of time to patiently explaining all of those things. From my experience, folks eventually quit once they get back home and can’t set up their development environment. At least I got to distribute some fresh Fedora 21 DVDs that way ;)

    Kids happy with their DVDs

    Half of the first night was spent explaining software philosophy, what it means for a project to be FOSS, what it means to be part of a community - that kind of thing - after which I had students install the packages required for the rest of the event. I followed it up with an extensive workshop on Git. Most of them picked it up rather well. I would have gone further into collaboration over GitHub and the general workflow, but they seemed too sleepy for another hour of devspeak. 5am!

    By this time, I realized that the goals I had set weren’t going to be met, so I changed the plan. Originally, I had thought I’d introduce them to Python and Flask while picking those up myself (since that’s the stack used in most of Fedora’s infra projects), but this was a complete newbie crowd, so I stuck with what I’m comfortable with. After spending time collaborating over GitHub on some projects we started, I had the students pick up Ruby the second night. I explained the concept of programming libraries, how they’re organized and shared, and how they’re hackable. A Ruby library I once wrote would solve one of their screening-process problems, and I showed them how. The second day got me wondering what it would have been like to have a mentor when I got started, because I remember that installing and understanding RVM/Ruby the first time took me two weeks (these kids had it set up in minutes). It wasn’t until GlitterGallery that I tried it again!

    Whiteboard Musings

    On the way from the airport to the Uni, I thought I’d showcase Shonku, but for the same reasons I stuck with Ruby, I chose Jekyll. I was a little furious when I learned I’d even have to explain what a blog is, but given that everyone had a Jekyll blog running in a couple of hours, complete with some theme hacks, I’d say it was worthwhile.

    Happy about the productive second night, I spent the following afternoon arranging a cake for the release party. I was disappointed that most of the major Chennai cake shops had no colors other than pink and green; I definitely didn’t want a Fedora cake with the wrong colors! As a result, I had to overshoot the requested budget by a few dollars, but I landed a nice one from Cakewalk, complete with a photo print. Samosas and juice were courtesy of IITM.


    The last night was the release party. All of us mentors got together in the larger lab to talk about things that are common across any community. I explained to the students what IRC is, had them lurk around our channels for a bit (and make a complete mess!), and showed them what it means to write proper emails to a mailing list (no top-posting, etc.). I did a brief introduction to Fedora.next and what it means to the community.

    Speaking about Fedora.next

    We had an exchange of thoughts, people shared their experiences getting to know Free Software projects, and the overall atmosphere was pleasant. Our Fedora group left for our meeting room, where I had everyone create a FAS account, showed them around some of our wiki pages, and provided them with tips on getting involved. Finally, in the hope of getting them started with Rails, I talked about designing databases, how APIs talk to each other, and how web apps are structured in general. Well, we did end up cloning GG and setting it up, but I can’t tell how much of that they really understood ;)

    All in all, good fun.

    Students: friendly group photo

    (Thanks to Abhishek Ahuja for the great photos).

    NSRegularExpression Notes

    Posted by Rohit Paul Kuruvilla on December 23, 2014 02:52 PM

    I spent a while today trying to convert a regular expression from Ruby to NSRegularExpression. It was being dumb and took me a while to figure out.

    The main thing is NSRegularExpression's options. By default, Ruby's ^ and $ anchors match at line boundaries (the equivalent of AnchorsMatchLines), and NSRegularExpression's don't. I simply turned that option on and had good luck.

    Here's my specific case (Jekyll front-matter):

    NSRegularExpression(pattern: "\\A(---\\s*\\n.*?\\n?)^(---\\s*$\\n?)", options: .DotMatchesLineSeparators | .AnchorsMatchLines, error: nil)!
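For comparison, here's a minimal Ruby sketch of the same front-matter pattern, showing why no extra anchor option is needed on the Ruby side: ^ and $ match at line boundaries by default there, and the /m flag plays the role of DotMatchesLineSeparators. The sample document string and the FRONT_MATTER name are just for illustration:

```ruby
# In Ruby, ^ and $ always anchor at line boundaries (what
# NSRegularExpression calls AnchorsMatchLines), while the /m flag
# makes . match newlines (DotMatchesLineSeparators).
FRONT_MATTER = /\A(---\s*\n.*?\n?)^(---\s*$\n?)/m

doc = "---\ntitle: Hello\n---\nBody text\n"
match = FRONT_MATTER.match(doc)
puts match[1]  # capture 1 is the front-matter block: "---\ntitle: Hello\n"
```

Without /m, the lazy `.*?` would refuse to cross the newline after the title line; without line-anchored ^, the closing `---` would only be found at the very start of the string. Those are exactly the two options the NSRegularExpression call above turns on.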