Fedora summer-coding Planet

MemberOf Support Is Complete

Posted by Ilias Stamatis on June 28, 2017 10:09 PM

lib389 support for MemberOf plug-in is finally complete!

Here’s what I have implemented so far regarding this issue:

  • Code for configuring the plug-in using our LDAP ORM system.
  • The wiring in the dsconf tool so we can manage the plug-in from the command line.
  • Some generic utility functions to use for testing this and all future plug-ins.
  • Functional tests.
  • Command-line tests.
  • A new Task class for managing task entries based on the new lib389 code.
  • The fix-up task for MemberOf.

I have proudly written a total of 40 test cases: 8 functional and 32 CLI tests.

All of my commits that have been merged into the project up to this point – not only related to MemberOf, but in general – can be found here: https://pagure.io/lib389/commits/master?author=stamatis.iliass%40gmail.com

As I mentioned in a previous post, I have additionally discovered and reported a few bugs related to the C code of MemberOf. I have written reproducers for some of them too (test cases that prove the erroneous behavior).

At the same time, I was working on USN plug-in support as well. This is about tracking modifications to the database using Update Sequence Numbers. When the plug-in is enabled, a sequence number is assigned to an entry whenever a write operation is performed against it. This value is stored in a special operational attribute of each entry, called “entryusn”. The process for me was pretty much the same: config code, dsconf wiring, tests, etc. This work is also almost complete and hopefully it will be merged soon as well.
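
Since “entryusn” is an operational attribute, it is not returned by a plain search; a quick way to see USN in action is to request it explicitly with ldapsearch (a sketch, with the connection details and entry as placeholders rather than anything from the actual tests):

ldapsearch -H ldap://example.com -D "cn=Directory Manager" -W \
    -b "dc=example,dc=com" "(uid=jdoe)" entryusn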

To conclude, during this first month of Google Summer of Code, I have worked on MemberOf and USN plug-ins integration, did code reviews on other team members’ patches, and worked on other small issues.


Why integrate fedmsg with kiskadee

Posted by David Carlos on June 28, 2017 06:20 PM

In this post we will talk about why we decided to integrate kiskadee [1] with fedmsg [2], and how this integration will enable us to easily monitor the source code of several different projects. The next post will explain how this integration was done.

There is an initiative in Fedora called Anitya [3]. Anitya is a project version monitoring system that monitors upstream releases and broadcasts them on fedmsg. Registering a project on Anitya is quite simple: you inform the homepage, the system used to host the project, and some other information required by the system. After the registration process, Anitya will check every day whether a new release of the project has come out. If so, it will publish the new release, in JSON format, on fedmsg. In the context of Anitya, the systems used to host projects are called backends, and you can check all the supported backends at https://release-monitoring.org/about.

The Fedora infrastructure has several different services that need to talk to each other. One simple example is the AutoQA service, which listens for some events triggered by the fedpkg library. If only two services interact, the problem is minimal, but when several applications send requests and responses to several other applications, the problem becomes huge. fedmsg (FEDerated MeSsaGe bus) is a Python package and API defining a brokerless messaging architecture to send and receive messages between applications. Anitya uses this messaging architecture to publish the new releases of registered projects on the bus. Any application that is subscribed to the bus can retrieve these events. Note that fedmsg is a whole architecture, so we need some mechanism to subscribe to the bus and some mechanism to publish on it. fedmsg-hub is a daemon used to interact with the fedmsg bus, and it is being used by kiskadee to consume the new releases published by Anitya.
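
For a quick look at what Anitya actually publishes, the fedmsg package also ships a small CLI that dumps every message passing on the bus; on a machine configured to see the Fedora bus this is roughly:

# pretty-print messages as they arrive on the bus
fedmsg-tail --really-pretty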

Once kiskadee can receive a notification that a new release of some project was made, and that this project will be packaged for Fedora, we can trigger an analysis without having to watch the Fedora repositories directly. Obviously this is a generic solution that will analyze several upstreams, including the upstreams that will be packaged, but it is a first step towards our goal, which is to help the QA team and the distribution monitor the quality of the upstreams that become Fedora packages.

[1] https://pagure.io/kiskadee
[2] http://www.fedmsg.com/en/latest/
[3] https://release-monitoring.org

Week four: Summer coding report

Posted by squimrel on June 26, 2017 11:04 PM

This was a sad week, since I was too ill to work until Thursday evening and away for the weekend starting Friday. That being said, I did not work on anything apart from a PR to allow the user to specify the partition type when talking to UDisks. (To be honest I’m still ill, but I can work :-).)

Anyway, let me explain to you (again) how we’re trying to partition the disk on Linux and why we run into so much trouble doing so.

(Figure: How we’re trying to partition)

The good thing about using UDisks is that it’s a centralized daemon everyone can use over the bus, so it can act as an event emitter and manage all the devices. And since the bottleneck when working with devices is disk I/O anyway, a centralized daemon is not a bad idea.

Let’s focus on using UDisks to partition a disk and the current problem (not the problems discussed in previous reports).

The issue is that libparted thinks the disk has a mac label, because of the isohybrid layout that bootable ISO images use so that every system can boot them. Instead it should treat the disk as a dos label. This is important because the maximum number of partitions on a mac label is only 3, while on a dos label it’s 4.
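
A quick way to see which label libparted detects is parted’s print command (a sketch; /dev/sdX stands in for the actual device):

# check the "Partition Table:" line; for an isohybrid image it reports
# mac here instead of the expected msdos
sudo parted /dev/sdX print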

The issue could be fixed by:

  • Fixing libparted.
  • Telling libblockdev to use libfdisk instead of libparted.
  • Not using UDisks at all and instead using libfdisk directly.

But can we actually use the fourth partition on a system that runs MacOS or Windows natively?
This is a legit question because we don’t actually know if adding a partition breaks the isohybrid layout.
I’d guess that it doesn’t, and I’d also guess that once the kernel takes control the partition table is read “correctly” by Linux, so it should detect the fourth partition and work. But I don’t know yet.

Using the proof of concept, I tested this on a VM and on a laptop that usually runs Linux, and in both cases persistent storage worked fine.
At the moment my mentor is testing this on a device that usually runs Windows and on a device that usually runs MacOS to see if it works there, and even though the results are not in yet, it doesn’t look that good.

If this doesn’t work we’re in big trouble since we’ll have to take a totally different approach to creating a bootable device that has persistent storage enabled.

GSoC2017 (Fedora) — Week 3&4

Posted by Mandy Wang on June 26, 2017 05:28 PM

I went to Guizhou and Hunan in China for my after-graduation trip last week. I walked on the glass skywalk in Zhangjiajie, visited the Huangguoshu waterfalls and Fenghuang Ancient City, ate a lot of delicious food at the night market in Guiyang, and so on. I had a wonderful time there; welcome to China to experience these! (GNOME Asia, held in Chongqing in October, is a good choice; Chongqing is a big city with a lot of hot food and hot girls.)

The main work I did these days for GSoC was sorting out and detailing the steps for establishing the environment for Plinth in Fedora. I had realized it in a somewhat crude way before, such as by using some packages from Debian directly, but now I will make these steps clearer, organize the useful information and write it into the INSTALL file.

But my mentor and I hit a problem when I tried to run firstboot: I don’t know which packages are needed to debug JS in Fedora. In other words, I want to find which packages in Fedora have the same function as libjs-bootstrap, libjs-jquery and libjs-modernizr in Debian. If you know how to deal with it, please tell me, I’d be grateful.


RetroFlix / PI Switch Followup

Posted by Mo Morsi on June 25, 2017 10:11 PM

I've been trying to dedicate some cycles to wrapping up the Raspberry PI entertainment center project mentioned a while back. I decided to abandon the PI Switch idea, as the original controller purchased for it just did not work properly (or should I say only worked sporadically/intermittently). It being a cheap device bought online, it wasn't worth the effort to debug (funny enough, I can't find the device on Amazon anymore; perhaps other people were having issues...).

Not being able to find another suitable gamepad to use as the basis for a snap-together portable device, I bought a Rii wireless controller (which works great out of the box!) and dropped the project (also partly due to lack of personal interest). But the previously designed wall mount works great, and after a bit of work the PI now functions as a seamless media center.

Unfortunately to get it there, a few workarounds were needed. These are listed below (in no particular order).

  • To start off, increase your GPU memory. This will be needed to run games with any reasonable performance. It can be accomplished through the Raspberry PI configuration interface.

    [Screenshots: Rpi setup1, Rpi setup2]

    Here you can also overclock your PI if your model supports it (v3.0 does not, as evident in the screenshot, though there are workarounds).

  • If you are having trouble w/ the PI output resolution being too large / small for your TV, try adjusting the aspect ratio on your set. Previously mine was set to "theater mode", cutting off the edges of the device output. Resetting it to normal resolved the issue.

    [Screenshots: Rpi setup3, Rpi setup5, Rpi setup4]
  • To get the Playstation SixAxis controller working via bluetooth required a few steps.
    • Unplug your playstation (since it will boot by default when the controller is activated)
    • On the PI, run
              sudo bluetoothctl
      
    • Start the controller and watch for a new device in the bluetoothctl output. Make note of the device id
    • Still in the bluetoothctl command prompt, run
              trust [deviceid]
      
    • In the Raspberry PI bluetooth menu, click 'make discoverable' (this can also be accomplished via the bluetoothctl command prompt with the discoverable on command) [Screenshot: Rpi setup6]
    • Finally restart the controller and it should autoconnect!
  • To install recent versions of Ruby you will need to install and set up rbenv. The current version in the RPI repos is too old to be of use (of interest for RetroFlix, see below).
  • Using mednafen requires some config changes, notably to disable OpenGL output and enable SDL. Specifically, change the following line from
          video.driver opengl
    
    To
          video.driver sdl
    
    Unfortunately, after a lot of effort, I was not able to get mupen64 working (while most games start, as confirmed by audio cues, all have black / blank screens)... so no N64 games on the PI for now ☹
  • But who needs N64 when you have Nethack! ♥‿♥ (the most recent version of which works flawlessly). In addition to the small tweaks needed to compile the latest version on Linux, in order to get the awesome Nevanda tileset working, update include/config.h to enable XPM graphics:
        -/* # define USE_XPM */ /* Disable if you do not have the XPM library */
        +#define USE_XPM  /* Disable if you do not have the XPM library */
    
    Once installed, edit your nh/install/games/lib/nethackdir/NetHack.ad config file (in ~ if you installed nethack there) to reference the new tileset:
        -NetHack.tile_file: x11tiles
        +NetHack.tile_file: /home/pi/Downloads/Nevanda.xpm
    

Finally, RetroFlix received some tweaking & love. Most changes were visual optimizations and eye candy (including some nice retro fonts and colors), though workers were also added so the actual downloads could be performed without blocking the UI. Overall it's simple and works great, a perfect portal to work on those high scores!

That's all for now, look for some more updates on the ReFS front in the near future!

Week three: Summer coding report

Posted by squimrel on June 19, 2017 08:22 PM

A tiny PR to libblockdev got merged! It added the feature to ignore libparted warnings. This is important for the project I’m working on, since it uses udisks, which uses libblockdev to partition disks, and that wasn’t working because of a parted warning, as mentioned in my previous report.
Sadly it still does not work, since udisks tells libblockdev to be smart about picking a partition type, and since there are already three partitions on the disk, libblockdev tries to create an extended partition, which fails due to the following error thrown by parted:

mac disk labels do not support extended partitions.

Let’s see what we’ll do about that. By the way, all this has the upside that I got to know the udisks, libblockdev and parted source code.

Releasing and packaging squimrel/iso9660io is easy now since I automated it in a script.

Luckily I have worked on isomd5sum before, because I need to get a quite ugly patch through so that it can be used together with a file descriptor that has the O_DIRECT flag.

A checkbox to enable persistent storage has been added to the UI.

So far there has been time to handle unexpected issues, but since next week is the last week before the first evaluations, things should definitely work by then.

389 DS development and Git Scenarios

Posted by Ilias Stamatis on June 14, 2017 02:24 PM

DS Development

Let’s see how development takes place in the 389 Directory Server project. The process is very simple, yet sufficient. There is a brief how-to-contribute guide, which also contains first steps to start experimenting with the server: http://www.port389.org/docs/389ds/contributing.html

The project uses git as its VCS and it is officially hosted on Pagure, Fedora’s platform for hosting git repositories. In contrast to other big projects, no complex git branching workflow is followed. There’s no develop or next branch; just master and a few version branches. New work is integrated into master directly. In the case of lib389, only master exists at the moment, but it’s still a new project.

To work on something new, you first open a new issue on Pagure, where you can additionally have some initial discussion about it. When your work is done, you submit a patch to the Pagure issue related to your work. You then have to send an e-mail to the developer mailing list kindly asking for review.

After the review request you’re going to receive comments related to the code, offering suggestions or asking questions. You might have to improve your patch and repeat this process a few times. Once it’s good, somebody will set the review status to “ack” and merge your code into the master branch. The rule is that for something to be merged, one core developer (other than the patch author, of course) has to give their permission; an ACK. The name of the reviewer(s) is always included in the commit message as well.

Working with git

Until now, I have been working only on the lib389 code base. I maintain a fork on GitHub. My local repository has two remotes: origin is the official lib389 repository, which I use to pull changes, and the other one, called github, is the fork hosted on GitHub. I actually use the github remote only to push code that is not yet ready for submission / review.

So, every time I want to work on something, I check out master, create a new topic branch and start working. E.g.

git checkout master
git checkout -b issue179 # create a new branch and switch to it

If you already have experience with git (branching, rebasing, resetting, etc.) you will probably not find anything interesting below this point. However, I would like to hear opinions/suggestions about the different approaches that can be used in some of the cases I describe below. They might well help me improve the way I work.

Squashing commits

So you’re submitting your patch, but then you need to make some changes and re-submit. Or while working on your fix, you prefer to commit many times, with small changes each time. Or you have to make additions for things that you hadn’t thought of before. But the final patch needs to be generated from a single commit. Hence, you have to squash multiple commits into a single one. What do you do?

Personally, since I know that I’ll have to submit a single patch, I don’t bother creating new commits at all. Instead, I prefer to “update” my last commit each time:

git add file.py # stage new changes
git commit --amend --date="`date`"

Notice that I like to update the commit date with the current date as well.

Then I can push my topic branch to my github remote:

git push github issue179 --force

I have to force this action since what I actually did previously was rewrite the repository’s history. In this case it’s safe to do, because I can assume that nobody has based their work on this personal topic branch of mine.

But actually, what I described above wasn’t really about squashing commits, since I never create more than one. There are, though, two ways that I’m aware of which can be used when you have already committed multiple times. One is interactive rebasing. The other is git reset. Both approaches, and some more, are summed up here: https://stackoverflow.com/questions/5189560/squash-my-last-x-commits-together-using-git
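
As a rough sketch of those two approaches (the commit count, base branch and commit message here are made up for the example):

# 1) interactive rebase: mark all but the first commit as "squash" or "fixup"
git rebase -i HEAD~3

# 2) soft reset: move the branch pointer back to the base, keep all changes
#    staged, and record them again as a single commit
git reset --soft master
git commit -m "Ticket 179 - Add MemberOf plugin support"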

Rebasing work

You started working on an issue, but in the meanwhile other changes have been merged into master. These changes probably affect you, so you have to rebase your work against them. This actually happened to me when we decided to base new lib389 work on python3. That was after I had already started working on an issue, so I had to rebase my work and make sure that all the new code was python3 compatible.

The process is normally easy. Instead of merging, I usually prefer to use git rebase. So, if we suppose that I’m working on my issue179 branch and I want to rebase it against master, all I really have to do is:

git checkout issue179
git rebase master

Rebasing makes for a cleaner history. If you examine the log of a rebased branch, it looks like a linear history, and that’s why I normally prefer it in general. It can be dangerous sometimes, though. Again, it is safe here, assuming that nobody else is basing work on my personal topic branch.

A more complex scenario

Let’s suppose that I have done some work on a topic branch called issue1, but it’s not merged yet; I’m waiting for somebody to review it. I want to start working on something else based on that first work done in issue1, yet I don’t want to have it in the same patch, because it’s a different issue. So, I start a topic branch called issue2 based on my issue1 branch and make some commits there.

Then a developer reviews my first patch and proposes some changes, which I happily implement. After this, I have changed the history of issue1 (because of what I described above). Now issue2 is based on something that no longer exists, and I want to rebase it against the new version of the issue1 branch.

Let’s assume that ede3dc03 is the checksum of the last commit of issue2; the one that reflects the diff between issue2 and the old issue1. What I do in this case is the following:

git checkout issue1 # new issue1
git checkout -b issue2-v2
git cherry-pick ede3dc03
git branch -D issue2 # delete old issue2 branch
git branch --move issue2-v2 issue2 # rename new branch to issue2

A cherry-pick in git is like a rebase for a single commit. It takes the patch that was introduced in a commit and tries to reapply it on the branch you’re currently on.

I actually don’t like this approach a lot, but it works for now. I’m sure there are more approaches, and probably better / easier ones, so I would be very glad to hear about them from you.
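
For what it’s worth, one alternative I know of collapses all of this into a single command with git rebase --onto, using the same branch names and checksum as above:

# replay everything after the old issue1 tip (the parent of ede3dc03)
# onto the new issue1, and move the issue2 branch there
git rebase --onto issue1 ede3dc03^ issue2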

Creating the patch

So after doing all the work, and having it in a single commit pointed to by HEAD, the only thing we have to do is create the patch:

git format-patch HEAD~1
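
For a single commit this emits one numbered patch file named after the commit subject, e.g. (file name illustrative):

0001-Ticket-31-Add-MemberOf-plugin-support.patch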

Please always use format-patch to create patches, instead of git diff, the Unix diff command or anything else. Also, always make sure to run all tests before submitting a patch!


Week two: Summer coding report

Posted by squimrel on June 12, 2017 05:31 PM

My PRs to rhinstaller/isomd5sum got merged! Which caused the 1.2.0 release to fail to build. Bam! I’m good at breaking things.
There’s a commit that makes this a proper dependency of the MediaWriter.

I had a look at packaging, because package bundling is not cool according to the guidelines. This means that I’ll have to package squimrel/iso9660 if I want to use it.

You can now make install squimrel/iso9660 and it’ll correctly place the shared library and header file.

The helper part of the FMW was reorganized, but sadly I’m stuck at adding the partition due to this error:

Failed to read partition table on device ‘/dev/sdx’

reported by libblockdev due to this warning:

Warning: The driver descriptor says the physical block size is 2048 bytes, but Linux says it is 512 bytes.

reported by libparted, most likely due to the 2048-byte sector size of ISO images.

The Windows and Mac builds fail on the Linux-only development branch since I broke them on purpose.

Up next:

  • Somehow add the partition.
  • Merge the dev branch of squimrel/iso9660.
  • Create a .spec file for squimrel/iso9660.
  • Make implantISOFD work using a udisks2 file descriptor.
  • Look at what’s next.

GSoC: First week of official development

Posted by David Carlos on June 08, 2017 04:11 PM

This post will be a simple report on the first official development week of the GSoC program. Kiskadee is almost ready for its first release, missing only a documentation review and a CI configuration with Jenkins. The next image shows the current architecture of kiskadee [1]:

In this first release, kiskadee is already able to monitor a Debian mirror and the Juliet [2] test suite. For these two targets, two plugins were implemented. We will talk briefly about each kiskadee component.

Plugins

In order to monitor different targets, kiskadee permits integrating plugins into its architecture. Each plugin tells kiskadee how some target must be monitored and how the source code of this target will be downloaded.

We have defined a common interface that a plugin must follow; you can check this in the kiskadee documentation [3].

Monitor

The monitor component is the entity that controls which monitored packages need to be analyzed. The responsibilities of the monitor are:

  • Dequeue packages queued by plugins.
  • Check if some dequeued package needs to be analyzed.
    • A package will be analyzed if it does not exist in the database, or if it exists but has a new version.
    • Save new packages and new package versions in the database.
  • Queue packages that will be analyzed by the Runner component.

We are using the default Python queue implementation, since the main purpose of this first release is to guarantee that kiskadee can monitor different targets and run the analyses.

Runner

The runner component is the entity that triggers the analysis of the packages queued by the monitor. This is done using Docker. In this release we are calling the container directly and running the static analyzer inside it, passing the source code as a parameter. For now we only have support for the Cppcheck tool. After we run the analysis, we parse the tool output using the Firehose tool [4] and save the parsed analysis in the database. We also update the package status, recording that an analysis was made.
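
Conceptually, what the runner does for each package is something like the following (the image name and paths are illustrative assumptions, not kiskadee's actual configuration):

# run cppcheck over the unpacked source tree inside a container;
# with --xml the report is written to stderr
docker run --rm -v "$PWD/pkg-source:/src" cppcheck-image \
    cppcheck --enable=all --xml /src 2> analysis.xml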

The next post will be a roadmap for the next kiskadee release.

[1] https://pagure.io/kiskadee
[2] https://samate.nist.gov/SRD/testsuite.php
[3] https://pagure.io/docs/kiskadee/
[4] https://github.com/fedora-static-analysis/firehose

dsconf: Adding support for MemberOf plug-in

Posted by Ilias Stamatis on June 07, 2017 10:43 AM

Directory Server supports a plug-in architecture. It has a number of default plug-ins which configure core Directory Server functions, such as replication, classes of service, and even attribute syntaxes, but users can also write and deploy their own server plug-ins to perform whatever directory operations they need.

How do we configure these plug-ins at the moment? By applying LDIF files. Is this easy and straightforward for the admin? Nope.

I’m currently working on this ticket for adding support for the memberOf plugin to lib389: https://pagure.io/lib389/issue/31. What we want to achieve here is to be able to fully configure the plug-in using dsconf, a lib389 command-line tool.

So, for example, the simple act of enabling the memberOf plugin becomes:

dsconf ldap://example.com memberof enable

Currently, if we want to achieve the same, we have to apply the following LDIF file to the server using a tool such as ldapmodify:

dn: cn=MemberOf Plugin,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
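
That LDIF would then be applied with something like (connection options assumed for the example):

ldapmodify -H ldap://example.com -D "cn=Directory Manager" -W -f memberof.ldif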

The former is much simpler and more intuitive, right?

More examples of using dsconf to interact with memberOf:

dsconf ldap://example.com memberof show   # display configuration
dsconf ldap://example.com memberof status # enabled or disabled
dsconf ldap://example.com memberof fixup  # run the fix-up task
dsconf ldap://example.com memberof allbackends on # enable all backends
dsconf ldap://example.com memberof attr memberOf  # setting memberOfAttr

But that’s not all. Additionally, I will write unit tests for validating the plug-in’s functionality. That means checking that the C code – the actual plug-in – is doing what it is supposed to do when its configuration changes. Again, we are going to test the C code of the server using our Python framework. That makes it clear that lib389 is not only an administration framework, but is used for testing the server as well.

In the meantime, while working on memberOf support, I have discovered a lot of issues and bugs. One of them is that the plug-in doesn’t perform syntax checking while it is disabled. So somebody could disable the plug-in, set some illegal attributes and then make the server fail. We’re going to fix this soon, along with more issues.

Until now I have raised the following bugs related to memberOf:
https://pagure.io/389-ds-base/issue/49283
https://pagure.io/389-ds-base/issue/49284
https://pagure.io/389-ds-base/issue/49274

This is pretty much how my journey begins. As I promised, in my next post I’ll talk about how 389 DS development takes place.


Week one: Summer coding report

Posted by squimrel on June 05, 2017 10:55 PM

Since I got to know everything I needed to know during the community bonding period, I could jump right into writing source code from day one.

The first three days were like a marathon. I stayed up for up to 24 hours, and my longest continuous coding session lasted 13 hours.

After those three days, the part which I considered the most complex of the project at the time was done. I was happy about that because, even though I had already declared this project feasible in my feasibility study, now I had the proof, so I could calm down and relax.

Then I spent some time looking at how I’d add a vfat partition and add the overlay file to it. Since this is the platform-specific part, I looked at how I’d do that on Linux first.

Using libfdisk this worked just fine, and I even figured out how to skip user interaction, but sadly I couldn’t find a library that would create a vfat filesystem on the partition. Luckily my mentor pointed me to udisks, so I discarded the idea of using libfdisk and will use udisks instead.

In the meantime I’ve been working now and then on squimrel/iso9660, and I’ve also addressed the requested changes to the isomd5sum PR which I worked on during the community bonding period. It’s not directly project related, but it’d be great if we were able to use it as a proper dependency in the FMW instead of bundling.

During next week the FMW helper code will be restructured a tiny little bit so that it’s easier to integrate squimrel/iso9660, since that’s cross-platform code.
Also, Linux-specific code that adds a vfat partition to the portable media device using udisks will be added.

GSoC2017 (Fedora) — Week 1

Posted by Mandy Wang on June 05, 2017 03:58 PM

I was very excited when I got the email saying that I was accepted by Fedora for GSoC 2017. I will work on the idea – Migrate Plinth to Fedora Server – this summer.

I attended my graduation thesis defense today, and I had to spend most of my time on my graduation project last week, so I only did a little bit of work for GSoC in the first week. I will officially start my work this week – migrating the first set of modules from Deb-based to RPM-based.

This is the rough plan I made with Mentor Tong:

First Phase

  • Before June 5, Fedora wiki {Plinth (Migrate Plinth from Debian)}
  • June 6 ~ June 12, Coding: finish the LDAP configuration and First Boot module
  • June 13 ~ June 20, Finish user registration and admin management
  • June 21 ~ June 26, Adjust unit tests to adopt RPM and Fedora packages
  • Evaluation Phase 1

Second Phase

  • June 27 ~ July 8, Finish the system-config-related modules
  • July 9 ~ July 15, Finish all system modules
  • July 16 ~ July 31, Finish one half of the app modules
  • Evaluation Phase 2

Third Phase

  • August 1 ~ August 13, Finish the other app modules
  • Final test and finish wiki
  • Final Evaluation

My project

Posted by Ilias Stamatis on June 05, 2017 12:09 AM

In my previous blog post I mentioned that I’m working on the 389 Directory Server project. Here I’ll get into some more details.

389 Directory Server is an open-source, enterprise-class LDAP directory server used in businesses globally and developed by Red Hat. For those who are not familiar with directory services, an LDAP server basically is a non-relational, key-value database which is optimized for reading, but not writing, data. It is mainly used as an address book or authentication backend for various services, but you can see it used in a number of other applications as well. Red Hat additionally offers a commercial version of 389 called Red Hat Directory Server.

The 389 Project is old, with a large code base that has gone through nearly 20 years of evolution. Part of this evolution has been the recent addition of a Python administration framework called lib389. This is used for the setup, administration and testing of the server, but it’s still a work in progress.

Until now, the administration of the server has always been a complex issue. Often people have to use the Java Console or apply LDIF files, both of which have drawbacks. There is also a variety of helper Perl scripts installed along with the DS, but unfortunately the server cannot be managed with them alone. The primary goal of lib389 is to replace all these legacy scripts and add capabilities that are not currently provided. It has to be a complete one-stop solution, while staying command-line focused.

So, much of my work will be lib389-related. Fortunately there is no strict list of tasks to follow. The project offers much freedom (thanks William!), so I actually get to choose what I want to work on! I have begun by adding configuration support for a plug-in; I’ll explain what this means in my next post. At a later stage I might do some work on the C code base and “move” from lib389 to the actual DS. I’m already really looking forward to it!

This was an overview of my project in general, and I hope that I managed to effectively explain what it is about. Once more, I haven’t given many details, but I’ll dive into more specific issues over the upcoming weeks. Additionally, I’ll publish a blog post explaining how 389 DS development is done and discussing my personal workflow as well.

Happy GSoC!


The config files on the ISO image can be modified

Posted by squimrel on June 01, 2017 07:23 AM

Mission accomplished!

At least kind of. There has been some work done in the past three days.
The result is a very tiny partial implementation of ECMA-119 that can grow files on an ISO 9660 image by a couple of bytes, if it’s lucky.

CD-ROM standards from 1987-1997

Posted by squimrel on May 30, 2017 02:46 PM

Specifically ECMA-119 (1987), ISO 9660:1988, ISO 9660:1995, Joliet (1988), IEEE P1282 (the Rock Ridge draft, 1994) and El Torito (1995). I also looked a little bit into ECMA-167 (1994) and ECMA-167 (1997), and I ignored all revisions of UDF.

Do I need to implement all those standards?

No, but I could if I’m bored. It took me quite a while to figure out that I actually only need ECMA-119, Joliet, IEEE P1282 and El Torito.

That’s also just theoretical. Since in practice I only need to modify two files, I most likely won’t have to implement anything according to Joliet, IEEE P1282 or El Torito.

Modifying files on a fixed-size ISO layout sounds expensive? Well, most likely we’re lucky, since files are sector aligned (2048 bytes) and the rest is padded with zeros. If we’re lucky we don’t even need to move anything around on the image.
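
One convenient way to poke at this on a concrete image is isoinfo from genisoimage (the file name is a placeholder):

# dump the primary volume descriptor (logical block size, volume size, ...)
isoinfo -d -i Fedora.iso
# list the files and sizes as recorded in the directory records
isoinfo -l -i Fedora.iso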

Why implement ECMA-119 even though it’s the oldest standard?

Because that’s what the ISO 9660 image I’ll have to handle uses.

What about ISO 9660?

This 2nd Edition of Standard ECMA-119 is technically identical with ISO 9660. — ECMA-119

That’s correct, but only when referring to ISO 9660:1988. I feel like ECMA-119 provides much more detail, and it’s also often referred to from ECMA-167. Note that .iso files are generally referred to as ISO 9660 images, and that doesn’t seem to be changing.

By the way, yes, there is a specified way to detect which standard an ISO 9660 image follows.

I’d guess that ECMA-119 is used because the oldest standard must be the most portable one.

Coding officially started

You can follow my active development in the first two weeks on squimrel/iso9660.

Welcome to Google Summer of Code

Posted by David Carlos on May 30, 2017 12:24 AM

This is the first post on this blog, and as the first post I would like to announce that I was accepted into Google Summer of Code (GSoC) this year. GSoC is a Google program that takes place during the summer, with the objective of encouraging students all over the world to contribute to free and open-source software. The students choose an organization to contribute to, and submit a proposal for a new project or for an existing project proposed by the organization. The organization I submitted a proposal to was Fedora, a Linux distribution sponsored by Red Hat and developed by a great community of contributors from around the world. I have been using Linux for a long time, and now it's time to really start contributing to the community, helping to track the quality of the source code that goes into each package made available by Fedora.

As a software developer I have always had an interest in static analysis and the benefits that such a practice can bring to the software development cycle. Many tools have been developed for this purpose, but a system that permits developers to easily integrate such analyses into their development cycle does not exist. A tool that tried to accomplish this goal was [Debile](https://wiki.debian.org/Services/Debile), developed in the context of the Debian distribution. The main problem with Debile is that it is tightly coupled to the Debian infrastructure, so the analysis system cannot run on other sources of code. Our proposal for Google Summer of Code is to build an extensible system that, with a few steps from the developer, can continuously monitor and collect static analysis data from different sources of code. This idea was initially discussed on the devel mailing lists by the folks in the [Static Analysis SIG](https://fedoraproject.org/wiki/StaticAnalysis) on Fedora. All the data collected will be stored in a database and made available to developers who want to monitor the quality of some source code. This system is called kiskadee [1], and it can already monitor some Debian mirrors. Now our objective is to monitor the Fedora repos and integrate more static analyzers into kiskadee.

You can read my proposal [here](https://goo.gl/uEB2Qk) to better understand our objective, and follow the development process [here](http://pagure.io/kiskadee).

I will make weekly posts reporting the status of kiskadee development. Let's code :)

[1] The great kiskadee is a bird that watches its prey (usually bugs) and catches it.

What This is About

Posted by Ilias Stamatis on May 27, 2017 07:57 PM

This blog is about my Google Summer of Code experience in the summer of 2017.

For those who may not know, GSoC is an international annual program organized by Google, open to university students. Every year a number of open-source organizations participate in the program, and students can apply to them by creating project/idea proposals. If accepted, students work over the summer on their projects, writing code, documentation and tests. They also get to collaborate with the open-source community of their organization and with their mentors, who are there to guide them through the process. Google awards stipends to all students who successfully complete their project during the summer.

Example open-source organizations for this year include The Linux Foundation, Mozilla, The GNU Project, GNOME, KDE, FreeBSD, Apache Software Foundation and many more. The full list of all 201 accepted organizations can be found here: https://summerofcode.withgoogle.com/organizations/

I’m currently participating in the GSoC program with the Fedora Project. More specifically, I’m working with the Red Hat team that develops the 389 Directory Server, helping with the development of a Python framework for easier administration of the DS. In future posts I’m going to give more details about my project, what I have done already, and what the plans are for this summer.

I’m going to post weekly updates in this blog about what I’m doing, what I’m learning and about the whole experience of being a GSoC participant in general.

Stay tuned!


Proof-of-concept of approach #1

Posted by squimrel on May 26, 2017 02:16 AM

Or how the Fedora Media Writer will create persistent storage enabled devices

As mentioned before, the Fedora Media Writer should not depend on command-line tools.

I talked about the different approaches to accomplishing this task already, but it hasn’t been proven that they work yet.
I’ll go ahead and prove approach #1, since my mentor and I decided that I should work on this approach first.

The naive idea of approach #1 is to:

  1. Extract the ISO.
  2. Modify two configuration files to enable persistent storage.
  3. Build a new ISO.
  4. Write the new ISO on the portable media device.
  5. Add a partition to the portable media device that will be used for persistent storage.

To prove this I went ahead and wrote a bash script that does exactly that:

[Embedded script: https://medium.com/media/7d3eff5be34a73e8fcf53fb7f627dfbe/href]
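
The embed above doesn’t render outside Medium, so here is a rough sketch of the same five steps; the device name, label, kernel argument and partition offset are my assumptions, not necessarily what the actual script does:

# 1. extract the ISO
mkdir mnt iso
sudo mount -o loop,ro Fedora.iso mnt
cp -a mnt/. iso/

# 2. enable persistence on the kernel command line
sed -i 's/rd.live.image/rd.live.image rd.live.overlay=LABEL=FEDORA/' \
    iso/isolinux/isolinux.cfg

# 3. rebuild a bootable ISO and restore the isohybrid layout
genisoimage -J -R -V FEDORA -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table -o new.iso iso/
isohybrid new.iso

# 4. write it to the portable media device
sudo dd if=new.iso of=/dev/sdX bs=4M

# 5. add a partition for persistent storage in the remaining space
sudo parted /dev/sdX -- mkpart primary ext4 5GiB 100%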

I tested whether the portable media device did in fact have persistent storage enabled. And it works!

Glad to be a Mentor of Google Summer Code again!

Posted by Tong Hui on May 25, 2017 04:44 PM
This year I will be mentoring in the Fedora Project, helping Mandy Wang finish her GSoC program about “Migrate Plinth to Fedora Server”, an idea which I raised. So why did I propose this idea? Plinth is developed by FreedomBox, which is a Debian-based project. The FreedomBox is aiming to build a 100% free software self-hosting web server to … Continue reading "Glad to be a Mentor of Google Summer Code again!"

How the overhead could be reduced

Posted by squimrel on May 23, 2017 03:56 AM

An approach to ISO manipulation

I talked about different approaches on how to manipulate an ISO before. In approach one I particularly complained about how unhappy I am with the expected overhead.

When I started to look at source code that actually deals with ISO 9660, I realized that I’ve got the power! I’m not limited to command-line tools, so who says that I actually have to extract everything from the ISO to modify it?

To read files from an ISO I can use libcdio. To build one with “El Torito” and all that fancy stuff I’d have to use cdrtools.

Build a bootable ISO using cdrtools

Sadly cdrtools is not built in a way that makes it easy for me to use. It still lives in an SCCS repository, and it’s designed so that mkisofs works, not so that its source code can easily be reused. It’s simply not a library. Also, the code contains stuff like:

[Embedded code snippet: https://medium.com/media/b6b81b68aa8fbc056f34b175d4e58883/href]

I mean, I get it. This project is older than me, and no one likes to work on it, so there’s no one to blame.

By the way, Fedora ships genisoimage, which is part of cdrkit. cdrkit is a fork of cdrtools that has lived in a git repository since 2006 but has not been maintained since 2010; cdrtools, on the other hand, is under “active” development. Anyway, in both repositories the situation is equally bad.

Build from scratch

Since I don’t want to reuse the source code, I’ll most likely read the specifications of ISO 9660 and “El Torito” and build what I need from scratch.

Ideally the program would calculate the md5sum and apply isohybrid as part of the build process so that no post-processing is required.

Drop isohybrid

Probably isohybrid is not even required, since if what I proposed is implemented well enough, the program should be able to modify what is needed without destroying the isohybrid layout already provided by the released ISO.

Overwhelmed

Obviously I won’t try to do all of this at once, but I will design with this idea in mind. Most likely I’ll look at how to build an ISO from the extracted files first, before I do any of the fancy stuff I proposed.

Coding will start on May 30. Until then I should have read the specifications.

Reading source code that creates bootable devices

Posted by squimrel on May 19, 2017 10:03 PM

Someone must have figured out how to properly create a bootable media device, right? Cross-platform? Without depending on a lot of external tools?

From the source code I’ve read so far, there are basically the following two categories:

  • Linux only and depends on a bunch of Linux command line tools.
  • Horrifying source code that only works on Windows and/or Mac, by porting some Linux command-line tools or using other equivalent vendor tools and binaries that pop out of nowhere.

Well, guess what: I’d like to avoid calling any external program if somehow possible. Only last year the Fedora Media Writer moved away from having dd as a dependency. Let’s keep it that way.

Maybe it’s a good thing that I’m on my own. Otherwise I’d do something someone has already done before and that would be evil right?

Hello Summer Coding World

Posted by squimrel on May 18, 2017 12:08 PM

I’m a nobody who accomplished nothing in life. You may call me squimrel.

As the title suggests, I’ll write source code this summer. Specifically, I’ll work on the Fedora Media Writer project, which is able to write live Fedora images to portable media devices.

One (me) should make it possible for the images on the portable drives to have persistent storage enabled. This will be achieved by modifying two configuration files and adding another partition for persistent storage.

It’s not yet clear how exactly the program should modify the configuration files on the live image in which the flags for persistent storage can be set. There are multiple approaches, and every single one of them comes with different challenges, which I’ll talk about when I face them.

It doesn’t have to be said that making this work on Windows and Mac comes as an extra challenge in itself.

I’m a bit uncertain how this project will work out but I’m confident that source code will be written and that I’ll learn a lot about accomplishing this task using C++.

If you wish to know more about me, my “experience”, etc. you can read my summer coding application at the Fedora Project Wiki.

Refactor a project written in C

Posted by squimrel on May 17, 2017 08:14 AM

Namely isomd5sum. The first commit to the repository it previously lived in was made in 2002. From the commit message “Add isomd5sum stuff to anaconda” one could guess that the origin was a different one, so it might be older than that.

isomd5sum is a very small project, and like most old projects written in C it works just fine. Over the years, 13 different users created 111 commits. The last one was merged two years ago, in 2015.

So it’s not a very active project. That’s not bad either, because it already solves the problem it’s supposed to solve:

isomd5sum provides a way of making use of the ISO9660 application data area to store md5sum data about the iso.
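
In practice that means the project ships a pair of tiny command-line tools; typical usage looks like this (file name is a placeholder):

# compute the checksum and implant it in the application data area
implantisomd5 Fedora.iso
# later, verify the image against the implanted checksum
checkisomd5 Fedora.iso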

Why refactor?

Well, right now there’s no particular reason to do that, especially since there are only minor bugs, like forgetting to free a kilobyte of memory before exit, or using more memory than needed to make it easier to avoid a potential buffer overflow when passed a manipulated ISO. These kinds of errors can be fixed without refactoring everything.

But this project is a dependency of the Fedora Media Writer project, and I’d like the Fedora Media Writer to use it as a proper dependency via git submodules. Currently it’s just a copy of the project’s source code with some commits added on top of it.

To make this happen I’ll need to apply a couple of patches to isomd5sum anyway, and I prefer working on source code I don’t hate too much. So better to refactor everything, so that I’ll have to hate myself if I’m still unhappy with it.

Also the project does need refactoring since it’s full of magic values and other things that we call bad practices nowadays.

So even though the refactor does not improve the functionality (it might even have introduced bugs; you can never be sure), it simplifies the code and makes a lot of the changes that I will have to make anyway.

Currently my PR is still under review. I’d be happy if you’d help reviewing it.

3D Printing Fun

Posted by Mo Morsi on May 14, 2017 06:52 PM

The PI Sw1tch project took a bit of a setback recently when we discovered the 5200mAh USB battery we had wasn't sufficient to drive the PI + display for any extensive period of time. I've ordered a higher-capacity battery from Amazon, but until it arrives the working prototype was redesigned into a snappable wall mount:

[Images: Wall pi2, Wall pi1]

The current implementation can be found on thingiverse for all your 3D printing needs!

Additionally, we threw together a design for a wall mount for my last smartphone, the Samsung Intercept, which has been sitting on my shelf since I upgraded to the Huawei Union. The Intercept was a great phone, and it still works well, albeit a bit slow compared to modern devices (not to mention it only runs Android 2.2). But it more than suffices for a "smart" entertainment hub, and having mounted & wired it up to my stereo system, I now have easy access to all the albums in the world (...that are available via youtube...). The device supposedly can be rooted, though I was not able to accomplish that myself and don't really care to spend more time figuring it out (really wouldn't gain much). But it just goes to show how a little ingenuity + some design work can go a long way at reducing e-waste.

[Images: Wall intercept2, Wall intercept1]

Now time to figure out what to do w/ my ancient Droid (the original A855!)

Keep on hacking!

RetroFlix - A Weekend Project

Posted by Mo Morsi on May 14, 2017 06:52 PM

Now that we have the 'mount' component of the PI Sw1tch, and an awesome way to play games through the PI on our TV, we need a collection of games to play! It goes without saying that Nethack was installed (combined w/ ssh X11 forwarding = persistent graphical nethack anywhere = epicness!!!). But I also happen to have a huge box of retro video games (dating back to my childhood), which would be good to load onto the device. But unfortunately cloning so many games would take some time, and there are already online databases of these backups, so I opted to write a small web app to download and manage the collection.

It can be found here and you can see some screens below:

[Screenshots: My library, Game info, Game previews, Game list]

It was built as a Sinatra web service, simply acting as a frontend to a popular emulator database, allowing the user to navigate & preview apps for various systems, and download / run them locally. The RetroFlix application itself is offered as a lightweight microservice, simply acting as a proxy to the various required underlying components. It's fairly simple to set up & install (see the README), and it builds upon existing emulators & components the user has locally.

As with everything else, it's still a work in progress, but it already suffices to relive classic memories!

[Video: https://www.youtube.com/embed/QhH_iibOJv0]

Approaches to enable persistent storage on ISOs

Posted by squimrel on May 11, 2017 12:58 PM

Basically, just two configuration files need to be modified. The question is how to modify a configuration file on an ISO using C++?

Ideally one would like to have an easy-to-port solution without any dependencies, no extra time spent, and obviously it should be beautiful and extendable too.

I haven’t found such an approach yet. If you have a good approach in mind, please query me on IRC.

Currently writing the ISO to disk works like this:

[Embedded code snippet: https://medium.com/media/440d363ac75d755193c7febc29e1abf6/href]

It’s great since it doesn’t need any dependencies. There’s a different implementation for every supported platform but that’s okay.

Short summary:

  1. Write ISO to disk.
  2. Calculate checksum of ISO written to disk to verify that everything worked correctly.

1. Create an ISO that is already configured in a way so that persistent storage is enabled

Unluckily, ISO 9660 was designed to be read-only (correct me if I’m wrong), so one can’t just mount it and modify a config file.

Instead the program has to:

  1. Extract the files from the ISO.
  2. Modify the config file.
  3. Build a new ISO.

This would require at least the following two dependencies: libcdio (for iso9660) and syslinux (for isohybrid).

This would yield the following overhead (n = size of the ISO):

  • 2n disk space (ISO + extracted ISO)
  • 4n disk read (read ISO, read extracted ISO, read ISO, calculate checksum)
  • 3n disk write (extract ISO, pack ISO, write ISO to portable media)

and therefore it would take around 1.4 times longer than it already takes.

2. Modify the config file in the ISO through byte manipulation

Yes it’s a bad idea and it’s evil but maybe it works.

Why it’s a bad idea:

  • It’ll break the inner checksum of the ISO.
  • Stuff such as comments has to be removed from the config file so that there’s enough space to add the corresponding flags.
  • It’s very likely that it’s unreliable and buggy in special cases.
  • It’s hard and messy to extend.

This would take just a little longer than it currently does because the manipulation could be done while writing to the portable media device.

3. Make the portable media device bootable from scratch and copy what’s needed from the ISO

This is very similar to the livecd-iso-to-disk approach, but it will be hard to implement in a way that can be ported to platforms other than Linux.

Also, this approach requires even more dependencies than #1.

Note

Obviously the time it takes to create the overlay partition still has to be taken into account in every approach.

A New Site, A Fresh Start

Posted by Mo Morsi on April 29, 2017 03:59 AM

I started this blog 10 years ago. How the world has changed... (and yet is still the same...)

Recently I noticed my site was inaccessible. No 404, no error response, just a blank page. After a brief moment of panic, I ssh'd into my host provider and breathed a sigh of relief upon discovering all db & fs entities intact, including the instance of Drupal which my site was based on (a horribly outdated instance mind you, and note I said was based). In any case, I suspect my (cheap) host provider updated their version of PHP or some critical library w/ the effect of my Drupal instance not working.

[Image: Dogefox]

Having struggled w/ PHP & Drupal many times over the years, I was finally ready to go cold turkey, and migrated the blog to Middleman, which brings the awesomeness of Rails to static site generation. I am very much in love with Middleman right now; it's the perfect tool for this problem domain. It's incredibly easy to set up a new site, use any high-level templating / markup / styling language to customize your frontend, throw in any js or other framework to handle dynamic interactions (including emscripten to run C in the browser), and you're good to go. Tailoring things on the fly is a cinch due to the convenient embedded webserver sporting live-reloading, and when you're ready to push to production it's a single command to build the static html. A quick rsync -azP synchronizes it w/ your webserver, and now your site is available to the world at blazing speeds!

Anyways, enough Middleman gushing (but seriously, check it out!). In addition to the port, I rethemed the site; be sure to also check out the new design if you're reading this via RSS. Note mobile browser UIs aren't currently supported, so no angry emails if you can't read things on your phone! (I know they're coming...)

Be sure to stay subscribed to github for updates. I'm hoping virtfs-refs will see some love soon if I can figure out how to extend the current fs parsing mechanisms w/ file content retrieval. We've also been prototyping designs for the PI Switch project I mentioned a while back, more updates on that soon as things progress.

Keep surfing!!!

Compiling / Playing NetHack 3.6.0 on Fedora

Posted by Mo Morsi on April 26, 2017 08:43 PM

The following are the simplest instructions required to compile NetHack 3.6.0 for Fedora 25.

Why might you want to compile NetHack from source, instead of simply installing the package (sudo dnf install nethack)? For many reasons. Applying patches for custom game mechanics. Running an alternate frontend. And more!

While the official Linux instructions are complete, they are pretty involved and must be followed exactly for things to work. To give the dev team credit, they’ve been supporting a plethora of platforms and environments for 20+ years (and the number is still increasing). A consolidated guide was written for compiling NetHack from scratch on Ubuntu/Debian, but nothing exists for Fedora… until now!


# On a fresh Fedora installation (with updates) install the dependencies:

$ sudo dnf install ncurses-devel libXt-devel libXaw-devel byacc flex

# Download the NetHack (3.6.0) source tarball from the official site and unpack it:

$ tar xzvf [download]
$ cd nethack-3.6.0/

# Run the base setup utility for Linux:

$ cd sys/unix
$ ./setup.sh hints/linux
$ cd ../..

# Edit [include/unixconf.h] to uncomment the following line…

#define LINUX

# Edit [include/config.h] to uncomment the following line…

#define X11_GRAPHICS

# Edit [src/Makefile] and update the following lines…

WINSRC = $(WINTTYSRC)
WINOBJ = $(WINTTYOBJ)
WINLIB = $(WINTTYLIB)

# …to look like so

WINSRC = $(WINTTYSRC) $(WINX11SRC)
WINOBJ = $(WINTTYOBJ) $(WINX11OBJ)
WINLIB = $(WINTTYLIB) $(WINX11LIB)

# Edit [Makefile] to uncomment the following line

VARDATND = x11tiles NetHack.ad pet_mark.xbm pilemark.xpm rip.xpm

# In the previous line, apply this bugfix by changing…

pilemark.xpm

# …to

pilemark.xbm

# Build and install the game

$ make all
$ make install

# Finally create [~/.nethackrc] config file and populate it with the following: OPTIONS=windowtype:x11


# To play:

$ ~/nh/install/games/nethack

Go get that Amulet!

Project Idea - PI Sw1tch

Posted by Mo Morsi on April 25, 2017 12:07 PM

While gaming is not high on my agenda anymore (... or rather at all), I have recently been mulling buying a new console, to act as much a home entertainment center as a gaming system.

Having owned several generations of PlayStation and Sega products, I had a few new consoles catch my eye. While the most "open" solution, the Steambox, sort of fizzled out, Nintendo's latest console, the Switch, does seem to stand out from the crowd. The balance between power and portability looks like a good fit, and given Nintendo's previous successes, it wouldn't be surprising if it became a hit.

In addition to serving the separate home and mobile gaming markets, new entertainment mechanisms need to provide seamless integration between the two environments, as well as offer comprehensive data and information access capabilities. After all, what'd be the point of a gaming tablet if you couldn't watch YouTube on it! Neal Stephenson recently touched on this at his latest TechCrunch talk, expressing a vision of technology that is more integrated / synergized with our immediate environment. While mobile solutions these days offer a lot in terms of processing power, nothing quite offers the comfort or immersion of a console / home entertainment solution (not to mention mobile phones being horrendous interfaces for gaming purposes!)

Being the geek that I am, this naturally led me to think about developing a hybrid mechanism of my own, based on open / existing solutions, so that it could be prototyped and demonstrated quickly. Having recently bought a Raspberry Pi (after putting my Arduino to use in my last microcontroller project), and a few other odds and ends, I whipped up the following:

Pi sw1tch

The idea is simple: the Raspberry Pi would act as the 'console', with a plethora of games and 'apps' available (via open repositories, Steam, emulators, and many more... not to mention NetHack!). It would be anchorable to the wall, desk, or any other surface using a 3D-printed mount, and made portable via a cheap wireless controller / LCD display / battery pack setup (tied together through another custom 3D-printed bracket). The entire rig would be quick to assemble and easy to use: simply snap the Pi into the wall mount to play on your TV; remove it and snap it into the controller bracket to take it on the go.

I suspect the power component is going to be the most difficult to nail down; finding an affordable USB power source that is lightweight but offers sufficient juice to drive the Raspberry Pi w/ LCD might be tricky. But if this is done correctly, all components will be interchangeable, and one can easily plug in a lower-power microcontroller and/or custom hardware component for a tailored experience.

If there is any interest, let me know via email. If 3 or so people commit, this could be done in a weekend! (stay tuned for updates!)

Nethack Encyclopedia Reduxd

Posted by Mo Morsi on April 24, 2017 05:23 PM

I've been working on way too many projects recently... Still, I was able to slip in some time to update the NetHack Encyclopedia app on the Android Marketplace (first released nearly 5 years ago!).

Version 5.3 brings several features, including new useful tools. The first is the Message Searcher, which allows the user to quickly query the many cryptic game messages by substring & context. Additionally, the Game Tracker has been implemented, facilitating player, item, and level identification in a persistent manner. Simply enter entity attributes as they are discovered and the tracker will deduce the remaining missing information based on its internal algorithm. This is on top of many enhancements to the backend, including the incorporation of a searchable item database.

The logic of the application has been heavily refactored & cleaned up; the code has come a long way since first being written. By and large, I feel pretty comfortable with the Android platform at this point. It has its nuances, but all platforms do, and it's pretty easy to go from concept to implementation.

As far as the game itself, I have a ways to go before retrieving the Amulet! It's quite a challenge, but you learn with every replay, and thus you get closer. Ascension will be mine! (someday)

Nethack 5.3 screen1 Nethack 5.3 screen2 Nethack 5.3 screen3 Nethack 5.3 screen4

Lessons on Aikido and Life via Splix

Posted by Mo Morsi on April 24, 2017 05:23 PM

Recently, I stumbled upon splix, my new gaming obsession, with simple mechanics that unfold into a complex competitive challenge requiring fast reflexes and dynamic tactics.

Splix intro

At the core, the rule set is very simple:

  • surround territory to claim it
  • do not allow other players to hit your tail (you lose... game over)

Splix overextended

While in your territory you have no tail, rendering you invulnerable. But during battles territory is always changing, and you don't want to get caught deep in an attack only to be surrounded by an enemy who swaps the territory alignment to his!

Splix deception

The simple dynamic yields an unbelievable amount of strategy & tactics to excel at, while at the same time requiring quick calculation and planning. A foolhardy player will just rush into enemy territory to attempt to capture squares and attack his opponent, but a smart player will bait his opponent into his sphere of influence through tactful strikes and misdirections.

Splix bait

Furthermore, we see age-old adages such as "better to run and fight another day" and the wisdom of pitting opponents against each other. Alliances are always shifting in splix; it takes just a single tap from any other player to end your game. So while you may be momentarily coordinating with another player to surround and obliterate a third, watch your back, as the alliance may dissolve at the first opportunity (not to mention the possibility of outside players appearing at any time!)

Splix alliance

All in all, I've found careful observation and quick action to yield the most successful results on the battlefield. The ideal kill is from behind an opponent who has perilously invaded your territory too deeply. Beyond this, lurking at the border so as to goad the enemy into a foolhardy / reckless attack is a robust tactic, provided you have built up the reflexes and coordination to quickly move in and out of territory which is constantly changing. Make sure you don't fall victim to your own trick and overpenetrate the enemy border!

Splix bait2

Another tactic to deal w/ an overly aggressive opponent is to fall back slightly into your safe zone, then quickly return to the front afterwards, perhaps at a different angle or via a different route. Often a novice opponent will see the retreat as a sign of fear or weakness and become overconfident, penetrating deep into your territory in the hopes of securing a large portion quickly. By returning to the front at an unexpected moment, you will catch the opponent off guard and be able to destroy him before he has a chance to retreat to his safe zone.

Splix draw out

Of course, if the opponent employs the same strategy, a player can take a calculated risk and drive a distance into the enemy territory before returning to the safe zone. By paying attention to the percentage of visible territory which the player's vulnerability zone occupies, and to the relative position of the opponent, they should be able to gauge the distance to which they can safely extend so as to ensure a safe return. Taking large amounts of territory quickly is psychologically damaging to an opponent, especially one undergoing attacks on multiple fronts.

Splix draw out2

If all else fails to overcome a strong opponent, a reasonable retreat followed by an alternate attack vector may result in success. Since in splix we know that a safe zone corresponds to only one enemy, if we can gauge / guess where they are, we can attempt to alter the dynamics of the battle accordingly. If we see that an opponent has stretched far beyond the mass of his safe zone via a single / thin channel, we can attempt to cut him off, preventing a retreat without crossing our sphere of influence.

Splix changing

This dynamic becomes even more pronounced if we can encircle an opponent and start slowly reducing his control of the board. By mechanically & gradually taking enemy territory, we can drive an opponent in a desired direction, perhaps towards a wall or another player.

Splix tactics2

Regardless of the situation, the true strategist will always be shuffling his tactics and actions to adapt to the board and set up the conditions for guaranteed victory. At no point should another player be underestimated or trusted. Even a new player with little territory can pose a threat to the top of the leaderboard given the right conditions and timing. The victorious will stay calm in the heat of the battle, and use careful observation, timing, and quick reflexes to win the game.

(Endnote: the game *requires* a keyboard; it can be played via smartphone (swiping) but the arrow keys yield the fastest feedback.)

Search and Replace The VIM Way

Posted by Mo Morsi on April 24, 2017 04:18 PM

Did you know that it is 2017 and the VIM editor still does not have a decent multi-file search and replacement mechanism?! While you can always roll your own, it’s rather cumbersome, and even though some would say this isn’t in the spirit of an editor such as VIM, a large community has emerged around extending it in ways to behave more like a traditional IDE.

Having written about doing something similar via the command line a while back, and having recently refactored a large amount of code involving lots of renaming, I figured it was time to write a plugin to do just that: rename strings across source files using grep and sed.


Before we begin, it should be noted that this is of most use with a ‘rooting’ plugin like vim-rooter. By using this, you will ensure vim is always running in the root directory of the project you are working on, regardless of the file being modified. Thus all search & replace commands will be run relative to the top project dir.

To install vsearch, we use Vundle. Setup & installation of that is out of scope for this article, but I highly recommend familiarizing yourself with Vundle as it’s the best Vim plugin management system (in my opinion).

Once Vundle is installed, using vsearch is as simple as adding the following to your ~/.vim/vimrc:

Plugin 'movitto/vim-vsearch'

Restart Vim and run :PluginInstall to install vsearch from github. Now you’re good to go!


vsearch provides two commands :VSearch and :VReplace.

VSearch simply runs grep and displays the results, without interrupting the buffer you are currently editing.

VReplace runs a search in a similar manner to VSearch, but also performs an in-memory string replacement using the specified args. This is displayed to the user, who is prompted for confirmation. Upon receiving it, the plugin then executes sed and reports the results.
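
For illustration, a typical rename session might look like the following (the argument syntax is my assumption based on the description above, and OldHelperName is a made-up identifier; check the plugin's README for exact usage):

:VSearch OldHelperName
:VReplace OldHelperName NewHelperName

The first command greps the project so you can review the occurrences; the second previews the replacement and, upon confirmation, hands the work off to sed.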

VirtFS New Plugin Guide

Posted by Mo Morsi on April 24, 2017 03:27 PM

Having recently extracted much of the FS interface from MiQ into virtfs plugins, it was a good time to write a guide on how to write a new plugin from scratch. It is attached below.


This document details the process of writing a new VirtFS plugin from scratch.

Plugins may be written for many targets: from traditional filesystems (EXT, FAT, XFS), to filesystem-like entities such as databases and object repositories, to things completely unrelated altogether. Once written, VirtFS will use the plugin to expose the underlying component via the Ruby Filesystem API. Simply issue File & Dir calls to files under the specified mountpoint, and VirtFS will take care of the remaining details.
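
For instance, once a plugin filesystem is mounted, access flows through VirtFS's wrapper classes just like regular Ruby file access. A quick sketch (VirtFS::VDir appears in the verification script later in this guide; the VirtFS::VFile call is my assumption of the analogous File-side wrapper):

  VirtFS.mount fs, '/mnt/hello'

  VirtFS::VDir.entries('/mnt/hello')    # list directory contents via the plugin
  VirtFS::VFile.read('/mnt/hello/f1')   # read file contents via the plugin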

This guide assumes basic familiarity with the Ruby language and the gem project format. In this tutorial we will be creating a new gem called virtfs-hellofs for our ‘hello’ filesystem, based on a simple JSON map.

Note, the end result can be seen at virtfs-hellofs


Initial Project Layout

Create a new working directory with the following contents:

  virtfs-hellofs/
                 lib/
                     virtfs-hellofs.rb
                     virtfs/
                            hellofs.rb
                            hellofs/
                                    fs/
                                    version.rb
                 virtfs-hellofs.gemspec
                 Gemfile

TODO: a generator [patches are welcome!]


Required Components

The following components are required to define a full-fledged filesystem plugin:

  • A ‘mounting’ mechanism - Allows VirtFS to load your FS at the specified filesystem path / mountpoint.

  • Core File and Dir classes and class methods - VirtFS maps standard Ruby FS operations to their equivalent plugin calls

  • FS specific representations - the internal representation of filesystem constructs being implemented so as to satisfy the core class calls

Upon instantiation, an fs-specific ‘blocklike device’ is often required to provide block-level seek/read/write operations (backed by a physical disk, disk image, or similar).

Eventually this will be implemented via a separate abstraction hierarchy, but for the time being virt-disk provides basic functionality to read simple file-based “devices”. Since we are only using a simple in-memory JSON-based fs, we do not need to pull in virt_disk here.

Eventually this will be implemented via a separate abstraction hierarchy, but for the time being virt-disk provides basic functionality to read simple file-based “devices”. Since we are only using a simply in-memory JSON based fs, we do not need to pull in virt_disk here.
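
To make the idea concrete, below is a minimal sketch of the kind of blocklike device a disk-backed plugin might consume. This class is purely hypothetical (hellofs itself needs nothing of the sort, since its 'device' is just a JSON hash):

  # Hypothetical in-memory blocklike device (sketch)
  class MemoryBlockDevice
    def initialize(data)
      @data = data   # raw bytes backing the 'disk'
      @pos  = 0      # current seek position
    end

    # Move to an absolute byte offset
    def seek(offset)
      @pos = offset
    end

    # Read len bytes from the current position
    def read(len)
      bytes = @data[@pos, len]
      @pos += bytes.bytesize if bytes
      bytes
    end
  end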


Core functionality

First we will define the FS class providing our filesystem interface:

lib/virtfs/hellofs/fs.rb

  module VirtFS::HelloFS
    class FS
      include DirClassMethods
      include FileClassMethods

      attr_accessor :mount_point, :superblock

      # Return bool indicating if device contains
      # a HelloFS instance
      def self.match?(device)
        Superblock.new(self, device)
        true
      rescue
        false
      end

      # Initialize new HelloFS instance w/ the
      # specified device
      def initialize(device)
        @superblock  = Superblock.new(self, device)
      end

      # Return root directory of the filesystem
      def root_dir
        superblock.root_dir
      end

      def thin_interface?
        true
      end

      def umount
        @mount_point = nil
      end
    end # class FS
  end # module VirtFS::HelloFS

Here we see a few things, particularly the inclusion of the Dir and File class-method mixins satisfying the VirtFS API (more on those later) and the instantiation of a HelloFS-specific Superblock construct.

In the #match? method, we verify the superblock of the underlying device matches that required by HelloFS, and we specify various core callbacks needed by VirtFS (particularly the #umount and #thin_interface? methods; see this for more details on thin vs. thick interfaces).

The superblock class for HelloFS is simple; we implement our ‘filesystem’ through a simple JSON map, passed into VirtFS on instantiation.

lib/virtfs/hellofs/superblock.rb

module VirtFS::HelloFS
  # Top level filesystem construct.
  #
  # In our case, we simply create a new
  # root directory from the HelloFS
  # json hash, but in most cases this
  # would parse / read top level metadata
  class Superblock
    attr_accessor :fs, :device   # expose fs so File#fs can delegate to it

    def initialize(fs, device)
      @fs     = fs
      @device = device
    end

    def root_dir
      Dir.new(self, device)
    end
  end # class Superblock
end # module VirtFS::HelloFS

VirtFS API

In the previous section, the core FS class included two mixins, DirClassMethods and FileClassMethods, implementing the VirtFS filesystem interface.

lib/virtfs/hellofs/fs/dir_class_methods.rb

module VirtFS::HelloFS
  class FS
    # VirtFS Dir API implementation, dispatches
    # calls to underlying HelloFS constructs
    module DirClassMethods
      def dir_delete(p)
      end

      def dir_entries(p)
        dir = get_dir(p)
        return nil if dir.nil?
        dir.glob_names
      end

      def dir_exist?(p)
        begin
          !get_dir(p).nil?
        rescue
          false
        end
      end

      def dir_foreach(p, &block)
        r = get_dir(p).try(:glob_names)
                      .try(:each, &block)
        block.nil? ? r : nil
      end

      def dir_mkdir(p, permissions)
      end

      def dir_new(fs_rel_path, hash_args, _open_path, _cwd)
        get_dir(fs_rel_path)
      end

      private

      def get_dir(p)
        names = p.split(/[\\\/]/)
        names.shift

        dir = get_dir_r(names)
        raise "Directory '#{p}' not found" if dir.nil?
        dir
      end

      def get_dir_r(names)
        return root_dir if names.empty?

        # Resolve the parent directory recursively,
        # then look up the next path component in it
        name = names.pop
        pdir = get_dir_r(names)
        return nil if pdir.nil?

        pdir.find_entry(name, :dir)
      end
    end # module DirClassMethods
  end # class FS
end # module VirtFS::HelloFS

This module implements the standard Ruby Dir Class operations including retrieving & modifying directory contents, and checking for file existence.

Particularly noteworthy is the get_dir method which returns the FS specific dir instance.
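
To trace the recursion, consider resolving a nested path against the sample map from the Verification section at the end of this guide:

  # get_dir("/d1/sd1"):
  #   names = ["d1", "sd1"]       # after splitting and dropping the leading ""
  #   get_dir_r(["d1", "sd1"])    # pops "sd1", recurses on ["d1"]
  #     get_dir_r(["d1"])         # pops "d1", recurses on []
  #       get_dir_r([])           # => root_dir
  #     # => Dir wrapping { "sf1" => ..., "sd1" => { "t" => "s" } }
  #   # => Dir wrapping { "t" => "s" }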

lib/virtfs/hellofs/fs/file_class_methods.rb

module VirtFS::HelloFS
  class FS
    # VirtFS file class implemention, dispatches requests
    # to underlying HelloFS constructs
    module FileClassMethods
      def file_atime(p)
      end

      def file_blockdev?(p)
      end

      def file_chardev?(p)
      end

      def file_chmod(permission, p)
        raise "writes not supported"
      end

      def file_chown(owner, group, p)
        raise "writes not supported"
      end

      def file_ctime(p)
      end

      def file_delete(p)
      end

      def file_directory?(p)
        f = get_file(p)
        !f.nil? && f.dir?
      end

      def file_executable?(p)
      end

      def file_executable_real?(p)
      end

      def file_exist?(p)
        !get_file(p).nil?
      end

      def file_file?(p)
        f = get_file(p)
        !f.nil? && f.file?
      end

      def file_ftype(p)
      end

      def file_grpowned?(p)
      end

      def file_identical?(p1, p2)
      end

      def file_lchmod(permission, p)
      end

      def file_lchown(owner, group, p)
      end

      def file_link(p1, p2)
      end

      def file_lstat(p)
      end

      def file_mtime(p)
      end

      def file_owned?(p)
      end

      def file_pipe?(p)
      end

      def file_readable?(p)
      end

      def file_readable_real?(p)
      end

      def file_readlink(p)
      end

      def file_rename(p1, p2)
      end

      def file_setgid?(p)
      end

      def file_setuid?(p)
      end

      def file_size(p)
      end

      def file_socket?(p)
      end

      def file_stat(p)
      end

      def file_sticky?(p)
      end

      def file_symlink(oname, p)
      end

      def file_symlink?(p)
        get_file(p).try(:symlink?)
      end

      def file_truncate(p, len)
      end

      def file_utime(atime, mtime, p)
      end

      def file_world_readable?(p)
      end

      def file_world_writable?(p)
      end

      def file_writable?(p)
      end

      def file_writable_real?(p)
      end

      def file_new(f, parsed_args, _open_path, _cwd)
        file = get_file(f)
        raise Errno::ENOENT, "No such file or directory" if file.nil?
        file  # get_file already returns a File instance
      end

      private

        def get_file(p)
          dir, fname = VfsRealFile.split(p)

          begin
            dir_obj = get_dir(dir)
            dir_obj && dir_obj.find_entry(fname)
          rescue RuntimeError
            nil
          end
        end
    end # module FileClassMethods
  end # class FS
end # module VirtFS::HelloFS

The FileClassMethods module provides all the FS-specific functionality needed by Ruby to dispatch File class calls (the File API has a larger footprint than Dir, hence the need for more methods here).

Here we see many methods are not yet implemented. This is OK for the purposes of use in VirtFS, but note that any calls to the corresponding methods on a mounted filesystem will fail.
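
For instance, filling in the file_size stub only takes a couple of lines given the get_file helper above and the File class defined in the next section. A sketch:

  # One possible implementation of the file_size stub (sketch)
  def file_size(p)
    f = get_file(p)
    raise Errno::ENOENT, "No such file or directory" if f.nil?
    f.size   # delegates to our File class's size method
  end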


File and Dir classes

The final missing piece of the puzzle is the File and Dir classes. These provide standard interfaces through which VirtFS can extract file and dir information.

lib/virtfs/hellofs/file.rb

module VirtFS::HelloFS
  # File class representation, responsible for
  # managing corresponding dir_entry attributes
  # and file content.
  #
  # For HelloFS, files are simple in memory strings
  class File
    attr_accessor :sb, :dir_entry

    def initialize(sb, dir_entry)
      @sb        = sb
      @dir_entry = dir_entry
    end

    def to_h
      { :directory? => dir?,
        :file?      => file?,
        :symlink?   => false }
    end

    def dir?
      dir_entry.is_a?(Hash)
    end

    def file?
      dir_entry.is_a?(String)
    end

    def fs
      @sb.fs
    end

    def size
      dir? ? 0 : dir_entry.size
    end

    def close
    end
  end # class File
end # module VirtFS::HelloFS

lib/virtfs/hellofs/dir.rb

module VirtFS::HelloFS
  # Dir class representation, responsible
  # for managing corresponding dir_entry
  # attributes
  #
  # For HelloFS, dirs are simply nested
  # json maps
  class Dir
    attr_accessor :sb, :dir_entry

    def initialize(sb, dir_entry)
      @sb        = sb
      @dir_entry = dir_entry
    end

    def close
    end

    def glob_names
      dir_entry.keys
    end

    def find_entry(name, type = nil)
      dir = type == :dir
      fle = type == :file

      return nil unless glob_names.include?(name)
      return nil if (dir && !dir_entry[name].is_a?(Hash)) ||
                    (fle && !dir_entry[name].is_a?(String))
      dir ? Dir.new(sb, dir_entry[name]) :
            File.new(sb, dir_entry[name])
    end
  end # class Dir
end # module VirtFS::HelloFS

Again these are fairly straightforward, providing access to the underlying JSON map in a filesystem-like manner.
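
A quick sanity check in irb illustrates the mapping (a sketch; we pass nil for the superblock since these particular lookups never touch it):

  require 'virtfs/hellofs'

  map = { "f1" => "foobar", "d1" => { "x" => "y" } }
  dir = VirtFS::HelloFS::Dir.new(nil, map)

  dir.glob_names               # => ["f1", "d1"]
  dir.find_entry("f1", :file)  # => File wrapping "foobar"
  dir.find_entry("d1", :dir)   # => Dir wrapping { "x" => "y" }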


Polish

To finish, we’ll populate the project components required by every rubygem:

lib/virtfs-hellofs.rb

require "virtfs/hellofs.rb"

lib/virtfs/hellofs.rb

require "virtfs/hellofs/version"
require_relative 'hellofs/fs.rb'
require_relative 'hellofs/dir'
require_relative 'hellofs/file'
require_relative 'hellofs/superblock'

lib/virtfs/hellofs/version.rb

module VirtFS
  module HelloFS
    VERSION = "0.1.0"
  end
end

virtfs-hellofs.gemspec:

lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'virtfs/hellofs/version'

Gem::Specification.new do |spec|
  spec.name          = "virtfs-hellofs"
  spec.version       = VirtFS::HelloFS::VERSION
  spec.authors       = ["Cool Developers"]

  spec.summary       = %q{A HELLO-based filesystem module for VirtFS}
  spec.description   = %q{A HELLO-based filesystem module for VirtFS}
  spec.homepage      = "https://github.com/ManageIQ/virtfs-hellofs"
  spec.license       = "Apache 2.0"

  spec.files         = `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features)/}) }
  spec.bindir        = "exe"
  spec.executables   = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
  spec.require_paths = ["lib"]

  spec.add_dependency "activesupport"
  spec.add_development_dependency "bundler"
  spec.add_development_dependency "rake", "~> 10.0"
  spec.add_development_dependency "rspec", "~> 3.0"
  spec.add_development_dependency "factory_girl"
end

Gemfile:

source 'https://rubygems.org'

gem 'virtfs', "~> 0.0.1",
    :git => "https://github.com/ManageIQ/virtfs.git",
    :branch => "master"

# Specify your gem's dependencies in virtfs-hellofs.gemspec
gemspec

group :test do
  gem 'virt_disk', "~> 0.0.1",
      :git => "https://github.com/ManageIQ/virt_disk.git",
      :branch => "initial"
end

Rakefile:

require "bundler/gem_tasks"
require "rspec/core/rake_task"

RSpec::Core::RakeTask.new(:spec)

task :default => :spec

Packaging It Up

Building virtfs-hellofs.gem is as simple as running:

rake build

in the project directory.

The gem will be written to the ‘pkg’ subdir and is ready for subsequent use / upload to rubygems.
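
From there, the gem can be installed locally for a quick smoke test (the version number assumes the 0.1.0 defined in version.rb above):

gem install pkg/virtfs-hellofs-0.1.0.gem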


Verification

To verify the plugin, create a test module which simply mounts a FS instance and dumps the directory contents:

test.rb

require 'json'
require 'virtfs'
require 'virtfs/hellofs'

device = JSON.parse(File.read('hello.fs'))  # the parsed JSON map acts as our "device"

exit 1 unless VirtFS::HelloFS::FS.match?(device)
fs = VirtFS::HelloFS::FS.new(device)

VirtFS.mount fs, '/'
puts VirtFS::VDir.entries('/')

We can create a simple JSON filesystem for testing purposes:

hello.fs

{
  "f1" : "foobar",
  "f2" : "barfoo",
  "d1" : { "sf1" : "fignewton",
           "sd1" : { "t" : "s" } }
}

Run the script, and if the directory contents (f1, f2, d1) are printed, you have verified your FS!


Testing

rspec and factory_girl were added as development dependencies to the project, and testing the new filesystem is as simple as adding new unit tests.
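
As a starting point, a first spec might exercise match? and root_dir directly. A sketch (the file path and contents here are my own, not from the project):

  # spec/hellofs_spec.rb (hypothetical)
  require 'json'
  require 'virtfs/hellofs'

  describe VirtFS::HelloFS::FS do
    let(:device) { JSON.parse('{ "f1" : "foobar" }') }

    it "matches a JSON-backed device" do
      expect(described_class.match?(device)).to be true
    end

    it "lists root directory entries" do
      expect(described_class.new(device).root_dir.glob_names).to eq ["f1"]
    end
  end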

For ‘real’ filesystems, the plugin author will need to generate a ‘blocklike device’ image and populate it w/ the necessary test data.

Because large block image files are not conducive to source repository systems and automated build systems, virtfs-camcorderfs can be used to record and play back disk interactions in a local dev environment, recording text-based ‘cassettes’ which may be used to replicate disk interactions. See virtfs-camcorderfs for usage details.


Next Steps

We added barebones VirtFS functionality for our hellofs filesystem backend. From here, we can continue expanding upon this, providing read, write, and query support. Once implemented, VirtFS will use this filesystem like every other, providing seamless interchangeability!

Summer of Code — The waiting period

Posted by squimrel on April 02, 2017 12:10 AM

After spending so much time looking at what I want to work on, I'd love to jump right in and start working on the project.

Until now I’ve only done tiny improvements and bug fixes.

My Pull Request to MediaWriter on GitHub

I can’t wait to really get going!

Unfortunately, I’ll have to wait one more month before I know whether I have been accepted or not.

What I’ll do in the waiting period

I could obviously use all my spare time to work on other parts of the Fedora Media Writer that are not related to the project I hope I’ll be sponsored to work on.

But since other things in life are also important, I’ll focus my time on my studies and on my own projects for a couple of weeks before returning to the final preparations I’d like to make before starting the project.

If you wish to know more about the project I’ll work on you may read my proposal on the Fedora Project Wiki.

I applied for summer of code

Posted by squimrel on March 24, 2017 01:33 AM

Since February 27, 17:00 UTC, I’ve been looking at the project ideas published by the roughly 200 organisations that got accepted to Google Summer of Code.

In particular, I wanted to work on a project that used Python, C, or C++, wasn’t related to web development, and had source code that looks good and makes sense.

As a C++ and Rust advocate who never really worked with C (apart from learning assembly), I’ve always thought about how crazy those C developers must be to take care of all the deallocations manually. I really wanted to see what it’s like to work with C. I might write a separate article about this in the future, but let’s just say it’s actually pretty fun and simple. That’s why I looked at PostgreSQL and Git.

PostgreSQL — Foreign key for Array elements

I thought this project was interesting because I wanted to use this yet non-existent feature myself at some point.

I started off reading a little about this on the mailing list, but got bored pretty quickly and moved on to reading the documentation and source code.

It was quite interesting, but in such a large code base I had a hard time figuring out what belongs where, and it wasn’t immediately clear to me how things should be done, even though I had tools such as cscope and 2000 pages of documentation to help me out, and I had even attended a course on database design before.

The reason I didn’t spend more time on it was that I eventually went back and read everything about the project on the mailing list, and it wasn’t even clear what syntax they wanted to use, what exact feature they wanted, or how it should behave, since there were serious doubts about how this would perform in various situations.

Git — Script to builtin

Git looked great. There seemed to be more than enough work for everyone and I liked the source code much more than the source code of PostgreSQL.

They require every student to complete a tiny microproject, which seems like a great idea to me, so that’s what I did. Unfortunately, only after four days did I read the part that said they’ll only take a maximum of two students, and there were already a lot of great students applying. Since I didn’t want to take their spot away, I abandoned the project a couple of days later.

Fedora Project — Media Writer support for persistent storage

Since I’m a Fedora user and fan, Fedora was definitely also on my list with its Python and C++ projects. But when I found out that you have to give away your identity right away via a FAS account, I closed the site without looking at most of the projects.

When I finally looked at the Fedora Media Writer project after returning from Git, I fell in love. The author of the project knew what he was doing. It was cross-platform (Linux, Windows, Mac) and even the commit messages were written properly.

After looking at this project for a while, it seems like implementing persistent storage isn’t even that easy, because currently, on every supported platform, the downloaded ISO is simply copied dd-style over to the portable media device. It’s not clear what the best solution is, especially since it should preferably be a solution that adds no extra dependencies on external programs and should work on three different platforms, so I’ll probably have a lot of fun experimenting with different approaches.

One could say, though, that I haven’t even applied yet, since I’ve only spoken to my potential mentor and shared the draft of my application with the Fedora Project so far. I’ll probably wait for my potential mentor to give me some feedback on my application before I post it on the wiki. Also, I’ve only had access to the wiki for a couple of hours.

This is the project I’ll settle on, and I won’t look at any other project anymore. If I’m not taken for it, then I won’t take part in Summer of Code, but one thing is for sure: I’ve already learned quite a lot during this application period, since I’ve set up a new mail stack for myself: Exim + PostgreSQL + SpamAssassin + Dovecot + offlineimap + notmuch + msmtp. With this stack and git format-patch, working with mailing lists was fun! I also learned what it’s like to read a lot of source code and documentation just to write a single line of code.

To be honest, I’m only writing this blog post because I want to get used to blogging every week, since that’s something I’ll have to do if I get accepted for Summer of Code with the Fedora Project, sponsored by Google.

2016 – My Year in Review

Posted by Justin W. Flory on February 17, 2017 08:30 AM

Before looking too far ahead to the future, it’s important to spend time reflecting over the past year’s events, identify successes and failures, and devise ways to improve. Finding the right words to describe my 2016 is a challenge. This post continues a habit I started last year with my 2015 Year in Review. One thing I discover nearly every day is that I’m always learning new things from various people and circumstances. Even though 2017 is already getting started, I want to reflect back on some of these experiences and opportunities of the past year.

Preface

When I started writing this in January, I read freenode‘s “Happy New Year!” announcement. Even though their recollection of the year began as a negative reflection, the freenode team did not fail to find some of the positives of the year as well. The attitude in their blog post mirrors the attitude of many others today. 2016 brought more than its share of sadness, fear, and a bleak unknown, but the colors of radiance, happiness, and hope have not faded either. Even though some of us celebrated the end of 2016 and its tragedies, two thoughts stay in my mind.

One, it is fundamentally important for all of us to stay vigilant and aware of what is happening in the world around us. The changing political atmosphere of the world has brought a shroud of unknowing, and the changing of a number does not and will not signify the end of these doubts and fears. 2017 brings its own series of unexpected events. I don’t consider this a negative, but in order for it not to become a negative, we must constantly remain active and aware.

Secondly, despite the more bleak moments of this year, there has never been a more important time to embrace the positives of the past year. For every hardship faced, there is an equal and opposite reaction. Love is all around us and sometimes where we least expect it. Spend extra time this new year remembering the things that brought you happiness in the past year. Hold them close, but share that light of happiness with others too. You might not know how much it’s needed.

First year of university: complete!

Many things have changed since I decided to pack up my life and go to a school a thousand miles away from my hometown. In May, I officially finished my first year at the Rochester Institute of Technology, completing the full year on dean’s list. Even though it was only a single year, the changes from my decision to make the move are incomparable. Rochester exposed me to amazing, brilliant people. I’m connected to organizations and groups based on my interests like I never imagined. My courses are challenging, but interesting. If there is anything I am appreciative of in 2016, it is the opportunities that have presented themselves to me in Rochester.

Adventures into FOSS@MAGIC

On 2016 Dec. 10th, the “FOSS Family” went to dinner at a local restaurant to celebrate the semester

My involvement with the Free and Open Source Software (FOSS) community at RIT has grown exponentially since I began participating in 2015. I took my first course in the FOSS minor, Humanitarian Free and Open Source Software Development, in spring 2016. In the following fall 2016 semester, I became the teaching assistant for the course. I helped show our community’s projects at Imagine RIT. I helped carry the RIT FOSS flag in California (more on that later). The FOSS@MAGIC initiative was an influencing factor in my decision to attend RIT and continues to have an impact on my life as a student.

I eagerly look forward to future opportunities for the FOSS projects and initiatives at RIT to grow and expand. Bringing open source into more students’ hands excites me!

I <3 WiC

With a new schedule, the fall 2016 semester marked the beginning of my active involvement with the Women in Computing (WiC) program at RIT, as part of the Allies committee. Together with other members of the RIT community, we work to find issues in our community, discuss them and share experiences, and find ways to further the WiC mission: to promote the success and advancement of women in their academic and professional careers.

WiCHacks 2016 Opening Ceremony

In spring 2016, I participated as a volunteer for WiCHacks, the annual all-female hackathon hosted at RIT. My first experience with WiCHacks left me impressed by the hard work of the organizers and by the atmosphere of the event. After volunteering, I knew I wanted to become more involved with the organization. Fortunately, fall 2016 enabled me to become more active and engaged with the community. Even though I will be unable to attend WiCHacks 2017, I hope to help support the event in any way I can.

Also, hey! If you’re a female high school or university student in the Rochester area (or willing to do some travel), you should seriously check this out!

Google Summer of Code

Google Summer of Code, abbreviated to GSoC, is an annual program run by Google. Google works with open source projects to offer stipends for them to pay students to work on projects over the summer. In a last-minute decision to apply, I was accepted as a contributing student to the Fedora Project. My proposal was to work within the Fedora Infrastructure team to help automate the WordPress platforms with Ansible. My mentor, Patrick Uiterwijk, provided much of the motivation for the proposal and worked with me throughout the summer as I began learning Ansible for the first time. Over the course of the summer, my learned knowledge turned into practical experience.

It would be unfair for a reflection to count successes but not failures. GSoC was one of the most challenging and stressful activities I’ve ever participated in. It was a complete learning experience for me. One area I noted I needed to improve on was communication. My weak point was not regularly communicating what I was working through or stuck on with my mentor and the rest of the Fedora GSoC community. GSoC taught me the value of asking questions often when you’re stuck, especially in an online contribution format.

On the positive side, GSoC helped formally introduce me to Ansible, and to a lesser extent, the value of automation in operations work. My work in GSoC helped enable me to become a sponsored sysadmin of Fedora, where I mostly focus my time contributing to the Badges site. Additionally, my experience in GSoC helped me when interviewing for summer internships (also more on this later).

Google Summer of Code came with many ups and downs. But I made it and passed the program. I’m happy and fortunate to have received this opportunity from the Fedora Project and Google. I learned several valuable lessons that have and will impact going forward into my career. I look forward to participating either as a mentor or organizer for GSoC 2017 with the Fedora Project this year.

Flock 2016

Group photo of all Flock 2016 attendees outside of the conference venue (Photo courtesy of Joe Brockmeier)

Towards the end of summer, at the beginning of August, I was accepted as a speaker at the annual Fedora Project contributor conference, Flock. As a speaker, my travel and accommodation to the event venue in Kraków, Poland were sponsored.

Months after Flock, I am still incredibly grateful for receiving the opportunity to attend the conference. I am appreciative and thankful to Red Hat for helping cover my costs to attend, which is something I would never be able to do on my own. Outside of the real work and productivity that happened during the conference, I am happy to have mapped names to faces. I met incredible people from all corners of the world and have made new lifelong friends (who I was fortunate to see again in 2017)! Flock introduced me in-person to the diverse and brilliant community behind the Fedora Project. It is an experience that will stay with me forever.

To read a more in-depth analysis of my time in Poland, you can read my full write-up of Flock 2016.

On a bus to the Kraków city center with Bee Padalkar, Amita Sharma, Jona Azizaj, and Giannis Konstantinidis (left to right).

Maryland (Bitcamp), Massachusetts (HackMIT), California (MINECON)

The Fedora Ambassadors at Bitcamp 2016. Left to right: Chaoyi Zha (cydrobolt), Justin W. Flory (jflory7), Mike DePaulo (mikedep333), Corey Sheldon (linuxmodder)

2016 provided me the opportunity to explore various parts of my country. Throughout the year, I attended various conferences to represent the Fedora Project, the SpigotMC project, and the RIT open source community.

There are three distinct events that stand out in my memory. For the first time, I visited the University of Maryland for Bitcamp as a Fedora Ambassador; it also gave me the opportunity to see my nation’s capital for the first time. I visited Boston for the first time this year as well, for HackMIT, MIT’s annual hackathon, where I participated as a Fedora Ambassador and met brilliant students from around the country (and even the world, with one student I met flying in from India for the weekend).

"Team Ubuntu" shows off their project to Charles Profitt before the project deadline for HackMIT 2016

“Team Ubuntu” shows off their project to Charles Profitt before the project deadline for HackMIT 2016

Lastly, I also took my first journey to the US west coast for MINECON 2016, the annual Minecraft convention. I attended as a staff member of the SpigotMC project and a representative of the open source community at RIT.

All three of these events have their own event reports to go with them. More info and plenty of pictures are in the full reports.

Vermont 2016 with Matt

Shortly after I arrived, Matt took me around to see the sights and find coffee.

Some trips happen without prior arrangements and planning. Sometimes, the best memories are made by not saying no. I remember the phone call with one of my closest friends, Matt Coutu, at some point in October. On a sudden whim, we planned my first visit to Vermont to visit him. Some of the things he told me to expect made me excited to explore Vermont! And then in the pre-dawn hours of November 4th, I made the trek out to Vermont to see him.

50 feet up into the air atop Spruce Mountain was colder than we expected.

Instantly upon crossing the state border, I knew this was one of the most beautiful states I had ever visited. During the weekend, the two of us did things that I think only the two of us would enjoy. We climbed a snowy mountain to reach an abandoned fire watchtower, where we endured a mini blizzard. We walked through a city without a specific destination in mind, going wherever the moment took us.

We visited a quiet dirt road that led to a meditation house and cavern maintained by monks, where we meditated and drank in the experience. I wouldn’t classify the trip as high-energy or engaging, but for me, it was one of the most enjoyable trips I’ve embarked on yet. There are many things from that weekend that I still hold on to for remembering or reflecting back on.

A big shout-out to Matt for always supporting me with everything I do and always being there when we need each other.

Martin Bridge may not be one of your top places to visit in Vermont, but if you keep going, you’ll find a one-of-a-kind view.

Finally seeing NYC with Nolski

Mike Nolan and I venture through New York City early on a Sunday evening

In no short time after the Vermont trip, I purchased tickets to see my favorite band, El Ten Eleven, in New York City on November 12th. What began as a one-day trip to see the band turned into an all-weekend trip to see the band, see New York City, and spend some time catching up with two of my favorite people, Mike Nolan (nolski) and Remy DeCausemaker (decause). During the weekend, I saw the World Trade Center memorial site for the first time, tried some amazing bagels, explored virtual reality at Samsung’s HQ, and got an exclusive inside look at the Giphy office.

This was my third time in New York City, but my first time to explore the city. Another shout-out goes to Mike for letting me crash on his couch and stealing his Sunday to walk through his metaphorical backyard. Hopefully it isn’t my last time to visit the city either!

Finalizing study abroad

This may be cheating since it was taken in 2017, but this is one of my favorite photos from Dubrovnik, Croatia so far. You can find more like this on my 500px gallery!

At the end of 2016, I finalized a plan that was more than a year in the making. I applied and was accepted to study abroad at the Rochester Institute of Technology campus in Dubrovnik, Croatia. RIT has a few satellite campuses across the world: two in Croatia (Zagreb and Dubrovnik) and one in Dubai, UAE. In addition to being accepted, the university provided me a grant to further my education abroad. I am fortunate to have received this opportunity and can’t wait to spend the next few months of my life in Croatia. I am studying in Dubrovnik from January until the end of May.

During my time here, I will be taking 12 credit hours of courses. I am taking ISTE-230 (Introduction to Database and Data Modeling), ENGL-361 (Technical Writing), ENVS-150 (Ecology of the Dalmatian Coast), and lastly, FOOD-161 (Wines of the World). The last one was a fun one that I took for myself to try broadening my experiences while abroad.

Additionally, one of my personal goals for 2017 is to practice my photography skills. During my time abroad, I have created a gallery on 500px where I upload my top photos from every week. I welcome feedback and opinions about my pictures, and if you have criticism for how I can improve, I’d love to hear about it!

Accepting my first co-op

The last big break I had in 2016 was accepting my first co-op position. Starting in June, I will be a Production Engineering Intern at Jump Trading, LLC. I started interviewing with Jump Trading in October and had an on-site interview at their headquarters in Chicago at the beginning of December. After meeting the people and understanding the culture of the company, I was happy to accept a place on the team. I look forward to learning from some of the best in the industry and hope to contribute to some of the fascinating projects going on there.

From June until late August, I will be working full-time at their Chicago office. If you are in the area or ever want to say hello, let me know and I’d be happy to grab coffee, once I figure out where all the best coffee shops in Chicago are!

In summary

2015 felt like a difficult year to follow, but 2016 exceeded my expectations. I acknowledge and am grateful for the opportunities this year presented to me. Most importantly, I am thankful for the people who have touched my life in unique ways. I met many new people and strengthened my friendships and bonds with many old faces too. All of the great things from the past year would not be possible without the influence, mentorship, guidance, friendship, and camaraderie these people have given me. My mission is to always pay it forward to others in any way that I can, so that others are able to experience the same opportunities (or better).

2017 is starting off hot and moving quickly, so I hope I can keep up! I can’t wait to see what this year brings and hope that I have the chance to meet more amazing people, and also meet many of my old friends again, wherever that may be.

Keep the FOSS flag high.


How Minecraft got me involved in the open source community

Posted by Justin W. Flory on October 10, 2016 09:30 AM

This post was originally published on OpenSource.com.


When people first think of “open source”, their minds probably go first to code: something technical that requires an intermediate understanding of computers or programming languages. But open source is a broad concept that goes beyond binary bits and bytes. Open source projects hold community participation in great regard; the community is a fundamental piece of a successful open source project. In my case, I began in the community and worked my way out from there. At the age of fifteen, I was beginning my open source journey and I didn’t even know it.

Gaming introduces open source

One of my strongest memories of a “gaming addiction” was when I was fifteen and a younger cousin introduced me to the game Minecraft. The game was in beta then, but I remember the sandbox style of the game entertaining the two of us for hours. But what I discovered was that playing the game alone became boring. Playing and mining with others made the experience more fun and meaningful. In order to do this, I learned I would have to host a server for my friends to connect to and play with me.

I originally used the “vanilla” Minecraft server software, but it was limited in what it could do and didn’t compare to other multiplayer servers in existence. They all seemed to be using something that offered more, so players could play games, cast spells, or do other unique things that would normally not be possible in the game. After digging, I discovered Bukkit, an open source Minecraft server software with an extensible API that lets developers change the multiplayer experience. I soon became wrapped up with Bukkit like a child with a new toy. Except this toy had me digging through my computer to set up “port forwarding”, “NAT records”, and “static IP addresses”. I was teaching myself the basics of computer networking in the guise of creating a game server for my friends.

Over time, my Minecraft server hobby began to take up more and more of my time. More people began playing on my server, and I began searching for ways to improve its performance. After some digging, I discovered the SpigotMC project, shortened to just Spigot. Spigot was a fork of the Bukkit project that made specific enhancements to performance. After trialing it on my server, I found the performance gains measurable and committed to using Spigot from then on.

Participating in SpigotMC

Before long, I began running into new challenges with managing my Minecraft server community, whether it was finding ways to scale or the best ways to build up a community. In October 2013, I registered an account on the Spigot forums to talk with other server owners and seek advice on ways I could improve. I found the community welcoming and willing to help me learn and improve. Several people in the community were owners of larger servers or developers of unique plugins for Spigot. In response to my detailed inquiries, they offered genuine and helpful feedback and support. Within a week, I was already in love with the people and helpfulness of the Spigot community.

I became an active participant in the forum community in Spigot. Through the project, I was introduced to IRC and how to use it for communicating with other server owners and developers. What I didn’t realize was a trend in my behavior. Over time, I began shifting away from asking all the questions. Almost as if in a role reversal, I became the one answering questions and helping support other new server owners or developers. I became the one in an advisory role instead of the one always asking.

SpigotMC team at annual Minecraft convention, MINECON, in 2015

In April 2014, the project lead of Spigot reached out to me asking if I would consider a role as a community staff member. Part of my responsibilities would be responding to reports, encouraging a helpful and friendly community, and maintaining the atmosphere of the community. With as much prestige and honor as my sixteen-year-old self could muster, I accepted and began serving as a community moderator. I remember feeling privileged to serve the position – I would finally get to help the community that had done so much to help me.

Expanding the open source horizon

Through 2014 and 2015, I actively served as a moderator of the community, both in the forums and the IRC network for Spigot. I remained in the Spigot community as the project steadily grew. It was incredible to see how the project was attracting more and more users.

However, my open source journey did not end there. After receiving my high school diploma in May 2015, I had set my sights on the Rochester Institute of Technology, a school I noted as having the country’s only Free and Open Source Software minor. By coincidence, I also noticed that my preferred Linux distribution, Fedora, was holding its annual contributor conference in Rochester, a week before I would move in for classes. I decided I would make the move up early to see what it was all about.

Flock 2015 introduces Fedora

The summer passed, and before I knew it, I was packing up from my home outside of Atlanta, Georgia to leave for Rochester, New York. After fourteen hours of driving, I finally arrived and began moving into my new home. A day after I arrived, Flock was slated to begin, marking my first journey in Rochester.

Group photo of Fedora Flock 2015 attendees at the Strong Museum of Play

At Flock, I entered as an outsider. I was in an unfamiliar city with unfamiliar people and an open source project I was only mildly familiar with. It was all new to me. But during that week, I discovered a community of people who were united around four common ideals. Freedom, Friends, Features, First: the Four Foundations of the Fedora Project were made clear to me. The community members at Flock worked passionately towards advancing their project during the talks and workshops. And after the talks finished, they gathered together for hallway discussions, sharing drinks, and enjoying the presence of their (usually) internationally dispersed team. Without having ever attended a Fedora event before, I knew that the Four Foundations and the community behind Fedora were the real deal. Leaving Flock that year, I vowed to pursue becoming a part of this incredible community.

Pen to paper, keyboard to post

The first major step I took towards contributing to the Fedora Project was in September 2015, during Software Freedom Day. Then Fedora Community Action and Impact Coordinator Remy DeCausemaker was in attendance representing Fedora. During the event, I reached out to the Fedora Magazine editorial team asking to become involved as a writer. By the end of September, I penned my first article for the Fedora Magazine, tying in my experience in the Spigot community to Fedora: run a Minecraft server using Spigot.

My first step getting involved with the Fedora community was an exciting one. I remember feeling proud and excited to see my first article published on the front page, not only helping Fedora, but also helping Spigot. I realized then that it was relatively straightforward to contribute this kind of content, and I would keep writing about software I was familiar with for the Magazine.

As I continued writing posts for the Fedora Magazine, I became aware of another team forming up in Fedora: the Community Operations, or CommOps, team. I subscribed to their mailing list, joined the IRC channel, and attended the first meetings. Over time, I became wrapped up and involved with the community efforts within Fedora. I slowly found one thing leading to another.

Today in Fedora, I am the leading member of the Community Operations (CommOps) team, the editor-in-chief of the Fedora Magazine, a Marketing team member, an Ambassador of North America, a leading member of the Diversity Team, and a few other things.

Advice for other students

When you’re first getting started, it can sometimes be tough and a little confusing. As students getting involved with FOSS, there are a few challenges we might have to face. A lot of this comes with making the first steps into a new project. There are countless open source projects of various sizes, and they all do things a bit differently, so the process changes from project to project.

One of the most obvious challenges with getting involved is your personal experience level. Especially when getting started, it can be easy to look at a large or well-known project and see all the work devoted there. There are smart and active people working on these projects, and many times their contributions are quite impressive! One of the many concerns I’ve seen other students face (including myself at first) is wondering how someone with beginner to moderate experience or knowledge can get involved, compared to some of these contributions from active contributors. If it’s a large project, like Fedora, it can be intimidating to think where to start when there are so many things to do and areas to get involved with. But if you think of it all as one big project, it is intimidating and difficult to make that first step.

Break a bigger project into smaller pieces. Start small and look for something you can help with. A healthy open source project usually will have things like easyfix bugs that are good ones to start with if it’s your first time contributing. Keep an eye out for those if you’re getting started.

Another challenge you might face as a student or beginner to open source is something called imposter syndrome. For me, this was something I had identified with before I knew what it was. To pull a definition straight from Wikipedia: it is a term referring to high-achieving individuals marked by an inability to internalize their accomplishments and a persistent fear of being exposed as a “fraud”.

Imposter syndrome can be a common feeling as you get involved with open source, especially if you compare yourself to some of those active and smart contributors you meet as you become involved. But you should also remember you are a student – comparing yourself or your contributions to a professional or someone with years of experience isn’t fair to yourself! It’s not apples-to-apples. Your contributions are worthy and valuable to an open source project regardless of how deep they go, how many there are, or how much time you spend on the project. Even if it’s a couple of hours in the week, that’s saving others those couple of hours, and it’s adding something to the project. A contribution is a contribution – it’s a bad idea to rate the worth of your contributions against others’.

Those are some of the challenges that are useful to know and understand as you become more involved with FOSS. If you know the challenges you are up against, it makes it easier to handle them as they come.

There are also benefits to contributing to open source as a student. Contributing to open source is a great way to take the knowledge you have learned in classes and begin applying it to real-world projects, gaining experience along the way. It takes you to the next level as a student, and contributing to a project in the real world is unique experience that helps your future career outlook as well.

It’s also a great networking opportunity. In open source, you meet many incredible and smart people. In my time in Fedora, I’ve met many contributors and had various mentors help me get involved. I’ve made new friends and met people who I normally would never have had the opportunity to meet.

River boat cruise dinner with Fedora friends at Flock 2016

There are also opportunities for leadership in open source projects. Whether it’s just one task, one bug, or even a role, you might find that sometimes all it takes is someone willing to say, “I’ll do this!” to have leadership on something. It might be challenging or difficult at first, but it’s a great way for you to understand working in team environments, how to work effectively even if you’re remote, and how to break down a task and work on finding solutions for complex problems.

Lastly, it’s important for younger people to become more involved with open source communities. As students and younger community members, we add unique perspectives and ideas to open source projects. This matters for a healthy community, and any open source project worth contributing to should be welcoming and accepting of students who are willing to spend time working on the project and helping solve problems, whether they’re bugs, tasks, or other things. In short, there is absolutely a role for students getting involved with open source!

The post How Minecraft got me involved in the open source community appeared first on Justin W. Flory's Blog.

GSoC 2016: That’s a wrap!

Posted by Justin W. Flory on August 21, 2016 08:35 PM

Tomorrow, August 22, 2016, marks the end of the Google Summer of Code 2016 program. This year, I participated as a student for the Fedora Project working on my proposal, “Ansible and the Community (or automation improving innovation)“. You can read my original project proposal on the Fedora wiki. Over the summer, I spent time learning more about Ansible, applying the knowledge to real-world applications, and then taking that experience and writing my final deliverable. The last deliverable items, closing plans, and thoughts on the journey are detailed as follows.

Deliverable items

The last deliverable items from my project are two (2) git patches, one (1) git repository, and seven (7) blog posts (including this one).

Closing plans

At the end of the summer, I was using a private cloud instance in Fedora’s infrastructure for testing my playbooks and other resources. One of the challenges towards the end of my project was moving my changes from my local development instance into a more permanent part of Fedora’s infrastructure. Since I am not a sponsored member of the Fedora system administration group, I had some issues with running them in a context and workflow specific to Fedora’s infrastructure and set-up.

My current two patches were submitted to my mentor, Patrick. Together, we worked through some small problems with running my playbook in the context of Fedora’s infrastructure. There may still be some small hoops to jump through for running it in production, but any remaining changes should be minor. The majority of the work and preparation for moving to production is complete. This is also something I plan to follow up on past the end of the GSoC 2016 program as a member of the Fedora Infrastructure Apprentice program.

My patches should be merged into the ansible.git and infra-docs.git repositories soon.

Reflection on GSoC 2016

As the program comes to a close, there are a lot of valuable lessons I’ve learned and opportunities I’m thankful to have received. I want to share some of my personal observations and thoughts in the hope that future students or mentors will find them useful in later years.

Planning your timeline

In my case, I spent a large amount of time planning my project timeline before the summer. Once the summer began, though, my original timeline proved too broad to provide smaller milestones to work towards. The timeline on my student application covered the big points, but it was difficult to work towards them at first. Creating smaller milestones and goals for the bigger tasks makes them easier to work through on a day-by-day basis and adds a sense of accomplishment to the work you are doing. It also helps shape the direction of your work in the short term, not just the long term.

For an incoming Google Summer of Code student for Fedora (or any project), I would recommend creating the general, “big picture” timeline for your project before the summer. Then, if you are accepted and beginning your proposal, spend a full day creating small milestones for the bigger items. Try to map out an accomplishment for every week and break down how you want to reach those milestones throughout the week. I started using TaskWarrior with an Inthe.AM Taskserver to help me manage weekly tasks going into my project, but it’s important to find a tool that works for you. Reach out to your mentor about ideas for tools. If possible, your mentor should also have a way to view your agenda and weekly tasks. This will help make sure your goals are aligned with the work you are doing and keep you on track for an on-time completion.

I think this kind of short-term planning or task management is essential for hitting the big milestones and being timely with your progress.

Regular communication

Consistent and frequent communication is also essential for your success in Google Summer of Code. This can be different depending on the context of how you are contributing to the project. For a normal student, this might just be communicating about your proposal with your mentor regularly. If you’re already an active contributor and working in other areas of the project, this might be spending extra time on communicating your progress on the GSoC project (but more on that specifically in the next section).

Regardless of the type of contributor you are, one thing is common and universal – be noisy! Ultimately, project mentors and GSoC program administrators want to be sure that you are spending the time on your project and making progress towards accomplishing your goals. If you are not communicating, you will run the highest risk of failing. How to communicate can vary from project to project, but for Fedora, here’s my personal recommendations.

Blog posts

Even for someone like me who spends a lot of time writing already, this can be a difficult thing to do. But no matter how hard it is to do it, this is the cornerstone for communicating your progress and leaving a trail for future students to learn from you as well. Even if you’ve had a difficult week or haven’t had much progress, take the time to sit down and write a post. If you’re stuck, share your challenges and share what you’re stuck on. Focus on any success or breakthroughs you’ve made, but also reflect on issues or frustrations you have had.

Taking the time to reflect on triumphs and failures is important not only for Google Summer of Code, but also looking past it into the real world. Not everything will go your way, and there will be times when you will face challenges that you don’t know how to resolve. Don’t burn yourself out trying to solve those kinds of problems alone! Communicate about them, ask for help from your mentors and peers, and make it an open process.

IRC check-ins

Whether in a public channel, a meeting, or a private one-on-one chat with your mentor, make sure you are both active and present in IRC. Make sure you are talking and communicating with your mentor on a regular basis (at a minimum, weekly). Taking the time to talk with your mentor about your challenges or progress helps them know what you’re up to and where you are in the project. It also gives them a chance to offer advice and oversight into your direction and potentially steer you away from making a mistake or going in the wrong direction. It is demotivating to spend a lot of time on something and later discover it either wasn’t necessary or had a simpler solution than you realized.

Make sure you are communicating often with your mentor over IRC to make your progress transparent and to also offer the chance for you to avoid any pitfalls or traps that can be avoided.

Hang out in the development channels

As a Fedora Google Summer of Code student, there are a few channels that you should be present in on a regular basis (a daily presence is best).

  • #fedora-admin
  • #fedora-apps
  • #fedora-summer-coding
  • Any specific channel for your project, e.g. #fedora-hubs

A lot of development action happens in these channels, and people who can help you with problems are available there. This also provides you the opportunity to gain insight into what communication in an active open source project looks like. You should at least be present and reading the activity in these channels during the summer. Participation is definitely encouraged as well.

Balancing project with open source contributions

I think my single most difficult challenge with Google Summer of Code was balancing my proposal-specific contributions with the rest of my contributions and work in the Fedora Project. I believe I was in the minority of Google Summer of Code students, having applied for the program as an active member of the project almost a full year before the program began. Additionally, my areas of contribution in Fedora before GSoC were mostly unrelated to my project proposal. My proposal mostly aligned with the degree and education I am pursuing. A lot of the technology I would be working with was new to me, and I had minimal knowledge of it before the summer began. As a result, this presented a unique set of challenges and problems I would face throughout my project.

The consequences of this were that I had to spend a lot more time researching and becoming familiar with the technology before advancing with creating the deliverable items. A great resource for me to learn about Ansible was Ansible for DevOps by Jeff Geerling. But I spent more time on learning and “trying out the tech” than I had anticipated.

This extra time spent on research and experimentation ran in parallel with my ongoing contributions in other areas of the project like Community Operations, Marketing, Ambassadors, the Diversity Team, and, as of recently, the Games SIG. Balancing my time between these different areas, including GSoC, was my biggest challenge over the summer (along with a separate, part-time job on weekends). Dividing my time between different areas of Fedora became essential for making progress on my project. What worked well for me was setting short-term goals (by the hour or day) that I wanted to hit and carry out. Until those goals were reached, I wouldn’t focus on anything other than those tasks.

Special thanks

I’m both thankful and grateful to those who have offered their mentorship, time, and guidance for me to be a member of the GSoC Class of 2016. Special thanks go to Patrick Uiterwijk, my mentor for the program. I’ve learned a lot from Patrick through these past few months and enjoyed our conversations. Even though we were both running around the entire week, I’m glad I had the chance to meet him at Flock 2016 (and hope to see him soon at FOSDEM or DevConf)! Another thanks goes to one of my former supporting mentors and program administrator Remy DeCausemaker.

I’m looking forward to another year and beyond of Fedora contributions, and can’t wait to see what’s next!

The post GSoC 2016: That’s a wrap! appeared first on Justin W. Flory's Blog.

The final week - GSoC Wrap Up

Posted by Sachin S. Kamath on August 15, 2016 09:05 AM

Happy Independence day, India!

Also, today marks the beginning of the GSoC deadline week. This post wraps up what I have done during my internship period.

Community Bonding period

  • Figure out how fedmsg works

    fedmsg (FEDerated MeSsaGe bus) is a python package and API defining a brokerless messaging architecture to send and receive messages to and from applications.
    

fedmsg was used to gather messages for statistics generation.

Documentation

  • Figure out how datagrepper works.

    Datagrepper is a web-app to retrieve historical information about messages on the fedmsg bus. It is a JSON API for the datanommer message store.
    

Datagrepper queries were made to retrieve messages for users; the results were later compiled into one bigger JSON file and rendered into other forms of output.

Reference

  • Familiarize with all the tools in Toolbox.

    CommOps Toolbox is a set of tools that aims at automating tedious tasks. 
    

I had to deliver a tool which could be combined with the existing tools for the CommOps storytelling and metrics processes.

Reference

Coding Period - Mid Term



1st Quarter:

  • Onboarding Series - Badge Identification

    • Onboarding is really important for large communities like Fedora. Until Fedora Hubs arrives, badges were chosen as an ideal way to track progress.

    • Started digging up information on badges and how they work.

  • Automated GSoC Reports

    • A tool was to be delivered that could initially give statistics for all the Fedora / Red Hat / Outreachy interns, automatically generating CSVs and graphs based on a user's activity. Scaling the tool was pushed for later.

    • Repo for data : https://github.com/sachinkamath/fedstats-data/

  • Badges .yaml definitions

    • This was pushed back, as the tool had to be added to the toolbox before the mid-term.
  • Automate Events Report Analysis

    • Bee's Script (to be uploaded) as a start

    • Parse a CSV and give rudimentary stats about users/fedmsgs using the stats tool: PyCon data (the PyCon US statistics were generated using this tool)

2nd Quarter:

  • Work on adding more features to the tool

    • More output options, such as markdown, gource, and csv, were added during this period
  • Generate mid-term reports

Blog Posts for this period

Summer with Fedora

Let the Coding Begin

Getting fedstats production ready

Digging deep into datagrepper

Mid term Overview

Mid-term to Finals

Blog Posts for this period

Journey So Far

Understanding statscache

Improving statistics using python-fedora API

Final touches and road ahead

Identifying Fedora Contributors

Post GSOC


Working with the amazing people over at Fedora was indeed a really good experience. In these 3 months, I collected around 51 badges.

One badge which I am really proud of is the black and white cookie badge, given to users who have helped 25 Fedorans. It has only been awarded 31 times so far. Cookies! \o/

Current badges rank, Gotta Badge 'Em All!

Current Repository statistics :

Identifying Fedora Contributors - Stats for Flock

Posted by Sachin S. Kamath on August 06, 2016 01:06 PM

Quoting the Fedora Wiki :

Flock is an annual conference for Fedora contributors to come together, discuss new ideas, work to make those ideas a reality, and continue to promote the core values of the Fedora Community: Freedom, Friends, Features, and First.

I was working on generating statistics for Flock this week. Bhagyashree (bee2502), my GSoC mentor, had delivered a talk on Fedora contributors and newcomer onboarding, and I was assigned the task of generating statistics for the whole Fedora community. At first thought, this seemed a pretty hectic thing to do. To accomplish it, I would need data on all the contributors from the beginning of fedmsg, i.e. from 2012. And I would have to find out when each user signed up for a FAS account and track his/her activity. Phew!

Now let's crunch the numbers :

Estimating Users :

Fedora Badges Statistics

It was pretty simple: the Fedora Badges front-end (Tahrir) suggested that there were around 41,000 FAS accounts which fedmsg was tracking and which had logged into the badges website. I assumed that if an account belonged to a contributor, he/she would have logged into the badges system. Okay, so I have the count – what now?

Making sense out of the mess :

I fired up my tool, added an extra element to it – the topic field of the requests (org.fedoraproject.prod.fas.user.create) – and set --start and --end to match the starting and ending dates of each year.

In simple terms, I pulled the usernames of all the people who made their FAS account between the years 201x and 201(x+1) (from 2012 to 2016), one year at a time. This gave me the total count of FAS accounts made every year. I could have just taken the count value from the JSON for this, but I needed the usernames for later. Along with this, I also dumped the usernames into a file in the format {username : timestamp_of_creation}.

I did this for all the years until I had 2012.json through 2016.json.
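For a sense of what such a query can look like, here is a minimal sketch of a per-year datagrepper pull, assuming the public /raw endpoint. The layout of the fas.user.create message body is an assumption here, so the "user" lookup may need adjusting to the real schema:

import json
import requests

URL = "https://apps.fedoraproject.org/datagrepper/raw"
TOPIC = "org.fedoraproject.prod.fas.user.create"

def messages_between(start, end):
    """Yield raw messages on TOPIC between two epoch timestamps."""
    page, pages = 1, 1
    while page <= pages:
        resp = requests.get(URL, params={
            "topic": TOPIC, "start": start, "end": end,
            "rows_per_page": 100, "page": page,
        })
        data = resp.json()
        pages = data["pages"]  # total pages reported by datagrepper
        for raw in data["raw_messages"]:
            yield raw
        page += 1

# {username: timestamp_of_creation} for 2012, dumped to 2012.json
accounts = {raw["msg"]["user"]: raw["timestamp"]  # assumed message schema
            for raw in messages_between(1325376000, 1356998400)}
with open("2012.json", "w") as handle:
    json.dump(accounts, handle)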

This gave me the count of FAS accounts being made every year - and with some pygal magic, I got the graph below:


Yearwise FAS Accounts

Pull, pull, pull :

Now that we had the usernames and the timestamps of creation, we could check whether a user was active during a certain period or not. I did this by pulling each user's data with the --start and --end arguments in 3 different ways.

1) Check if the user was active immediately: for that, --start was set to T1 = (time of account creation) and --end was set to T2 = (time of account creation + a timedelta of 2 weeks).

If the user had a count > 10, the user was then checked for activity between T2 and T2 + a timedelta of one month. If the user did not have any, a variable called slow_start was set to True for that user, who was subsequently checked for 6+ months of activity. Why? Because there are a lot of people who created a FAS account early and started contributing only after a year or so. If the count was still zero after that, the user was marked inactive. If the user had activity during this period, he/she was marked as a slow starter. And this is what I got after running the script:

The script is available as a gist: https://gist.github.com/sachinkamath/95cdd1f5587d5581f25938ead5a8ceeb

Identifying long-term and short-term contributors :

The following set of rules was followed for differentiating users (a rough sketch of the combined logic follows the lists below):

Users considered inactive :

1) Users who have a fedmsg activity count of less than 10
2) Users who have only created a FAS account
3) Users who made very few wiki edits in addition to creating a FAS account
4) not_category was set to fedbadges for such messages, so that badge awards wouldn't push the fedmsg activity count over 10.

Users considered short term :

1) Users who have less than 3 months of activity
2) Users who have a considerable amount of fedmsg activity but no activity after a month
3) not_category is again set to fedbadges here.

Users considered long term:

1) Users who have 3+ months of activity
2) Users who had no contributions for 6 months after creating the FAS account but then show a considerable amount of fedmsg activity after another 6 months or a year
3) The "Don't call it a comeback" badge is also considered: https://badges.fedoraproject.org/badge/dont-call-it-a-comeback
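Pulling those rules together, a rough sketch of the classification heuristic might look like this. The count_activity(start, end) helper is hypothetical: it stands in for a per-user datagrepper count query with not_category=fedbadges applied, as described above.

from datetime import datetime, timedelta

def classify(created, count_activity):
    """Sketch of the rules above; count_activity is a hypothetical
    helper returning the user's fedmsg message count in a window."""
    two_weeks = created + timedelta(weeks=2)
    if count_activity(created, two_weeks) > 10:
        # Immediately active: long term if the activity spans 3+ months.
        if count_activity(two_weeks, created + timedelta(days=90)) > 0:
            return "long-term"
        return "short-term"
    # No early activity: look for a slow start 6-12 months later.
    if count_activity(created + timedelta(days=180),
                      created + timedelta(days=365)) > 10:
        return "long-term (slow starter)"
    return "inactive"

# Stub usage with a fake counter that always reports 15 messages:
print(classify(datetime(2014, 3, 1), lambda start, end: 15))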

After running this, I ended up with the following graph:


All this ended up at Flock in the form of a presentation and, not to mention, I had a good sleep after crunching the stats :)

fedstats - Final touches and road ahead

Posted by Sachin S. Kamath on July 30, 2016 02:26 AM

GSoC Deadline is coming!

This week was meant for adding the final touches to the tool and getting the statistics for Flock ready by tweaking it.

Final Touches

The only thing remaining was categorization of the output files. This had to be done because the files generated earlier were cluttering the main folder when too many users were pulled. A very elegant solution was to categorize them into folders by username, and then add a .gitignore entry for all the outputs. Although I had .gitignore entries earlier, I only organized the files this week.

This is how it basically works :

If the tool is called with the --group argument, the output is stored in <group>/<username>/<output_filename>, and if the tool is called with the --user argument, the output goes to <username>/<output_filename>. Also, to avoid confusion – and the overwriting of files – the default filename of every file is now <username>_main.<extension>. If a duplicate entry is found, a numeric suffix is automatically appended to the end.
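A small sketch of how such a layout could be built, assuming the naming scheme above (the function name and the dedup suffix format are illustrative, not the tool's actual code):

import os

def output_path(username, filename, group=None):
    """Build <group>/<username>/<file> or <username>/<file>,
    appending a numeric suffix when the name already exists."""
    folder = os.path.join(group, username) if group else username
    os.makedirs(folder, exist_ok=True)
    base, ext = os.path.splitext(filename)
    candidate, n = os.path.join(folder, filename), 1
    while os.path.exists(candidate):
        candidate = os.path.join(folder, "%s_%d%s" % (base, n, ext))
        n += 1
    return candidate

# e.g. commops/skamath/skamath_main.csv (or ..._1.csv on a collision)
print(output_path("skamath", "skamath_main.csv", group="commops"))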

I also started prettifying and scrubbing the code. There were performance issues while grabbing group members, but now the JSON can be cached locally by using the --mode json argument.

As of now, the develop branch stands at around 48 commits.


Road Ahead

The tool is written in the form of a single script, and this needs to be addressed. I have started working on packaging the tool and am currently splitting the script up into modules to get it ready for packaging. Although it is not on my GSoC timeline, I am going ahead with it anyway.

Post-GSoC Goals :

  • More powerful stats

    Comparison graphs, multi-threading, caching and more..

  • Package the tool

    The tool needs to get ready for PyPI and needs to be modularized.

  • Implement missing features in statscache

    Statscache does not have the graph features (yet). It would also be great to combine it with FAS features for more powerful analytics, such as counts by group and so forth. There was a discussion on whether the tool should be migrated to statscache, but considering the target audience, the initial plan, and the GSoC timeline, that was scheduled for after GSoC.

  • Continue work with Onboarding

    Onboarding is a really long process, and I'm looking forward to improving the onboarding and join process of Fedora

GSoC 2016: Moving towards staging

Posted by Justin W. Flory on July 29, 2016 03:50 PM

This week wraps up for July and the last period of Google Summer of Code (GSoC 2016) is almost here. As the summer comes to a close, I’m working on the last steps for preparing my project for deployment into Fedora’s Ansible infrastructure. Once it checks out in a staging instance, it can make the move to production.

Next steps for GSoC 2016

My last steps for the project are moving closer to production. Earlier this summer, the best plan of action was to use my development cloud instance for quick, experimental testing. Once a point of stability was reached, the work would be tested on a staging instance of the real Fedora Magazine or Community Blog. Once reviewed and tested, it would work its way to production for managing future installations and upgrades for any WordPress platform in Fedora.

When the time comes to move it to production, I will file a ticket in the Infrastructure Trac with my patch file to the Ansible repository.

One last correction

One sudden difficulty I’ve found is using the synchronize module in my upgrade playbook. Originally, I was copying and replacing the files using the copy module to carry this out, but I found synchronize to offer a better solution, using rsync. However, after switching, I ran into a small error that had me hung up.

When running the upgrade playbook, it would trigger an issue with rsync requiring a TTY session to work as a privileged user. I found a filed bug for this in the Ansible repository. Fixing it required setting a specific flag in the server configuration when using rsync. To avoid doing this, I altered my upgrade playbook to avoid depending on a root user for running, instead using user and group permissions for the wordpress user. I’m working through smoothing out a few minor hiccups with the synchronize module today, mostly the directory not being found when executing the module, even though it exists.

Flock 2016

On Sunday, I’ll be flying out to Poland for Flock 2016, Fedora’s annual contributor conference. During Flock, I’ll meet several other Fedora contributors in person, including my mentor. We plan to set up the staging instance either later tonight or during Flock, depending on how time ends up going.

I’ll also be delivering a talk and hosting a workshop during the week as well! One of the workshops I’m hoping to attend is the Ansible best practice working session. I’ll be seeing if there’s anything I can glean to build into the last week of the project during the workshop.

The post GSoC 2016: Moving towards staging appeared first on Justin W. Flory's Blog.

Improving statistics using python-fedora API

Posted by Sachin S. Kamath on July 24, 2016 12:09 PM

I was working on adding the group scraping feature this week. This is one thing that was proposed in a recent meeting of Commops, originally for CommOps retrospective wiki.

I was initially thinking of using statscache for this, but came across a few things that stopped me from doing so. Firstly, statscache is not deployed anywhere, which basically meant that I'll have to pull the historic fedmsg data using fedmsg-hub for the first run, and anyone who wanted to use the tool will also have to do the same. This tool is to be used by anyone as is, and not everyone will have the resources/bandwidth to download all the fedmsg messages. Also, statscache lacks the feature of grouping users. I could only find a by_user count of messages. It made more sense to run the tool on FedoraInfracloud and grab data from it.

Since I could not use statscache, I initially tried scraping using Selenium and requests. After spending some quality time on it, I realized that I was getting bad responses from the server (sigh, CSRF token issues). After some research and IRC discussions, I came across the python-fedora API. It is an amazing API that does almost anything. Using it, one can log into FAS and perform a lot of actions like editing a profile, getting user info, etc.

In python-fedora, the FAS modules can handle logins, session caching, and user handling. I wrote a function that pulls all the users from a specific group, which looks something like this:

The function is available as a gist: https://gist.github.com/sachinkamath/7f5a458a8793aaecc6fd472f40fa999d

And guess what, it worked like a charm. Okay – now for the login part. I had two choices: either prompt for the password or get it from a config file. I chose the latter because it makes automation easier. I ended up using ConfigParser to pull the data from a cfg file.
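A minimal sketch of what that combination can look like, assuming python-fedora's AccountSystem client and a config.cfg with a [fas] section holding the credentials (the file layout and option names here are illustrative, not the tool's actual code):

from configparser import ConfigParser
from fedora.client import AccountSystem

# Read FAS credentials from a local config file (illustrative layout).
config = ConfigParser()
config.read("config.cfg")

fas = AccountSystem(username=config.get("fas", "username"),
                    password=config.get("fas", "password"))

# people_by_groupname returns one dict-like record per group member;
# the 'username' key is assumed here.
for person in fas.people_by_groupname("commops"):
    print(person["username"])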

During this, I noticed a very interesting thing: the next time I ran the script, I modified my password a bit (the hacker in me prompted me to :p) and, surprisingly, it still worked. Session caching is amazing, isn't it? Shoutout to #fedora-admin for helping me understand that :)

And finally, I integrated it into the main script and added the argparse argument --group / -g to specify a group for which the data has to be generated. Of course, this should not be paired with --user, or it will throw an error. Also, all the internal errors, like a bad group name or incorrect credentials, are handled by python-fedora itself! Hurray. The script looks much better now! :)

Fig : Data being pulled for the CommOps group

And right now, the develop branch has about 45 commits.

Current repo commit count

I am looking forward to working with Onboarding Series badges and yml files next week and cleaning up and organizing the script files. By next week, the tool should be pycodestyle ready :)

GSoC 2016 Weekly Rundown: Documentation and upgrades

Posted by Justin W. Flory on July 18, 2016 03:37 AM

This week and the last were busy, but I’ve made some more progress towards creating the final, idempotent product for managing WordPress installations in Fedora’s infrastructure for GSoC 2016. The past two weeks had me mostly working on writing the standard operating procedure / documentation for my final product, as well as diving more into handling upgrades with WordPress. My primary playbook for installing WordPress is mostly complete, pending one last annoyance.

Documentation

The first complete draft of my documentation for managing WordPress installations in Fedora’s infrastructure is available on my Pagure repository. The guide covers deployment, including upgrades, as well as more notes about working with the playbooks. As my project work begins to finish, the documented procedure is an outline for the final work. It will also be expanded as I close out the project.

Installing new WordPress site

After testing on my development instance in the Fedora cloud, my playbook is able to successfully install multiple WordPress sites to various hosts (pending one caveat for automatically setting up MySQL databases). I was able to spin up multiple sites quickly and easily to a point where I was satisfied with how it worked.

A few challenges I faced in this part were around templating the right information into the WordPress configuration file. I was originally going to use a variable file, but due to the issue of storing private information, I tried using external variables instead. After revisiting the idea with Patrick, I’m going to use a variables file with the information for each hypothetical installation. This file will then be stored in the private Ansible repository that holds server and application credentials.

Determining SELinux flags and contexts was also challenging. I had to learn which ones to apply to WordPress for basic functionality to still work (particularly for things like uploading media files to the server and letting WordPress cron work as expected). I’m not wholly satisfied with how I implemented it yet, as I want to dig more into setting the contexts with different parts of modules like unarchive and file, if possible.

Upgrading and master

The last significant task to handle is writing the playbook for handling upgrades for WordPress installations. There were two options originally available. The first option would be to allow upgrading via the WordPress admin panel. The second option would be writing a playbook to handle the upgrade. We opted for the second method as this will allow the files on the web server to be read-only, which will serve as an extra measure of hardened security.

I hope to have a playbook created in the next week to tackle upgrading an existing WordPress installation to a newer version. This will be the last significant task of my proposal, before I begin taking what I have so far and finding ways to integrate it into Fedora’s infrastructure.

One of these smaller but important tasks will be writing a “master” playbook to orchestrate the entire process of setting up a machine to run it (and referring to the necessary roles). Some of these roles I’ll be referring to are the httpd and mariadb roles.

Moving towards Flock

With Flock fast on approach, I’m hoping to have the majority of my project work finished and completed before that time frame. Anything past Flock should mostly be tidying up or fully documenting any changes made in the last stretch. This is my target goal at the moment! I’m looking forward to being a part of Flock again this year and meeting many members of the Fedora community.

The post GSoC 2016 Weekly Rundown: Documentation and upgrades appeared first on Justin W. Flory's Blog.

Understanding the statscache daemon

Posted by Sachin S. Kamath on July 17, 2016 11:17 AM

The last two weeks were pretty hectic. I had to read up on a lot of documentation and code, fight spam, and recover from a failed Fedora upgrade. Phew, glad to finally be back up.

To start with, I was working on the stats tool much less this month and was concentrating more on the new things I have on my list. If you have been following my GSoC posts, you probably know that I have been working on a statistics tool for the summer interns. During the last CommOps meeting, we had a crazy idea – scaling the tool to an entire group/team, and later to community-wide stats. That really sounds ambitious, and it is. The tool currently uses datagrepper, to which HTTP requests can be made to retrieve historic fedmsg data. This method worked fine for the interns, as the weekly/monthly data of each of them did not cross 10 pages. However, it would be really slow to pull the data of more than, say, 50 people this way (especially for those who have been doing a lot on koji and copr).

To solve this issue, statscache was built. Statscache is a daemon to build and keep fedmsg statistics. It basically listens to the fedmsg hub waiting for messages. As soon as it encounters a message, the message is passed on to the plugins, which evaluate the topic and store the statistics and the relevant parts of the raw message locally. For statscache to function as intended, it requires statscache-plugins; it is the plugins that do all the hard work of maintaining statistics. You could say statscache and statscache-plugins are made for each other :)

Deploying statscache locally is fairly simple. As simple as :
$ git clone https://github.com/fedora-infra/statscache
$ cd statscache
$ python setup.py develop

And plugins like :
$ git clone https://github.com/fedora-infra/statscache_plugins
$ cd statscache_plugins
$ python setup.py develop

After this is done, we need to gather the fedmsg messages. To do that, we run fedmsg-hub in the main statscache repo. (To install fedmsg-hub, run sudo dnf install fedmsg-hub.) You can stop fedmsg-hub anytime, and statscache will keep the statistics of all the data you gathered before you exited. After this, the Flask web server can easily be started by running python statscache/app.py. This should fire up the web front-end on http://localhost:5000, and if everything was done correctly, the statscache front page should show up on your screen.

You can now head over to the dashboard and see the plugins in action. For instance, you can see the volume of data each category received using the volume-by-category plugin.

A category is identified using the topic name of the fedmsg message. Every category of fedmsg has a unique topic name assigned to it. For example, if someone opens a new issue on Pagure, the topic will be org.fedoraproject.prod.pagure.issue.new, where org.fedoraproject.prod is common to all the fedmsg topics, pagure says that the interaction was made on Pagure, and the rest is self-explanatory. You can see all the topics here.
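As a small illustration of that naming convention, here is a sketch of consuming the bus and routing messages by category with fedmsg's tail API, assuming a working fedmsg configuration (the Fedora defaults):

import fedmsg

# Watch the bus and pick the category segment out of each topic,
# e.g. org.fedoraproject.prod.pagure.issue.new -> "pagure".
for name, endpoint, topic, msg in fedmsg.tail_messages():
    category = topic.split(".")[3]
    if category == "pagure":
        print("Pagure interaction:", topic)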

Now, I am currently working on devising a way to auto-generate statistics of all users of a FAS group. I'll make a new post as soon as I make progress here. Till then, Happy Hacking to me :)

GSoC 2016 Weekly Rundown: Breaking down WordPress networks

Posted by Justin W. Flory on July 02, 2016 08:27 AM

This week, with an initial playbook for creating a WordPress installation created (albeit needing polish), my next focus was to look at the idea of creating a WordPress multi-site network. Creating a multi-site network would offer the benefits of only having to keep up a single base installation, with new sites extending from the same core of WordPress. Before making further refinements to the playbook, I wanted to investigate whether a WordPress network would be the best fit for Fedora.

Background for Fedora

Understanding the background context for how WordPress fits into the needs for Fedora is important. There are two sites powered by WordPress within Fedora: the Community Blog and the Fedora Magazine. Each site uses a different domain (communityblog.fedoraproject.org and fedoramagazine.org, respectively).

At the moment, there are not any plans to set up or offer a blog-hosting service to contributors (and for good reason). The only two websites that would receive the benefits of a multi-site network would be the Community Blog and the Magazine. For now, the intended scale of expanding WordPress into Fedora is to these two platforms.

Setting up the WordPress network

To test the possibilities of using a network for our needs, I used a development CentOS 7 machine for my project testing purposes. There are some guidelines on creating networks to read first before proceeding. After reading these, it was clear the approach to take was the domain method. I moved on to the installation guide on the development machine.

GSoC 2016 – Adding sites to WordPress network

I wanted to document the process I was following for the multi-site network, so I created a short log file of my observations and information I found as I proceeded.

One of the time burners of this section was picking up Apache again. A few years ago, I switched my own personal web servers to nginx from Apache. Fedora’s infrastructure uses Apache for its web servers. It took me a little longer than I had hoped to get familiar with it again, mostly with virtual hosts and SELinux contexts for WordPress media uploads. Despite the extra time it took with Apache, I feel like this will save me time later when I am working on polishing the final deliverable or working with the Apache roles available.

In addition to this, I also picked out the dependencies for WordPress, such as the PHP packages needed and setting up a MariaDB database. After a while, I was able to get the WordPress network established and running on the development machine. It was convenient having a testable interface at my fingertips to work with.

WordPress network: Conclusion?

At the end of my testing and poking around, it appeared to me that there would not be an easy solution for using a WordPress network in Fedora. The network works best when set up with wildcard sub-domains, which wouldn’t be a plausible solution for us because of the two different domains. There were more manual ways of doing it (i.e. not in the WordPress interface) with Apache virtual hosts. However, I felt it would be easier to write one playbook that handles a single WordPress installation and can be run for both sites separately (or for new sites).

Given that the factor of scale is two websites, I think maintaining two separate WordPress installations will be the easier method, saving time and keeping things efficient.

This week’s challenges

This week had a late start for me on Wednesday due to traveling on a short vacation with my family from Sunday to Tuesday. Coming back from the trip, I also have a new palette of responsibilities that I am assisting with in Community Operations and Marketing, following decause’s departure from Red Hat. I’m still working on finding a healthy balance of time and focus between my project work and the other important tasks I am responsible for.

I’m hoping that having a full week will allow me to make further progress and continue to overcome some of the challenges that have arisen in the past few weeks.

Next week’s goals

For next week, I’m planning on focusing on my existing product and making it feel and run more like a “Fedora playbook”. I mostly want to work on avoiding unnecessary effort and staying consistent by tapping into the existing Ansible roles in Fedora Infrastructure. This would make setting up an Apache web server, a MySQL database, and a few other tasks more automated. It also keeps the tasks and organization consistent, since these roles are already used across Fedora’s infrastructure.

By next Friday, the plan is to have a more idempotent product that runs effectively and as expected in my development server. Beyond that, the next step would be to work on getting my site into a staging instance.

The post GSoC 2016 Weekly Rundown: Breaking down WordPress networks appeared first on Justin W. Flory's Blog.

GSoC - Journey So Far ( Badges, Milestones and more..)

Posted by Sachin S. Kamath on June 29, 2016 07:37 AM

2 days ago, I woke up to a mail from Google saying that I passed the mid-term evaluations of GSoC and could continue working towards my final evaluation. “What a wonderful way to kick-start a day,” I thought.

Google Summer of Code Mid Term E-Mail

Image : E-mail from Google Summer of Code


Working on the statistics tool was an amazing experience. You can browse my previous posts for a very detailed idea of what I've been working on. Apart from all the code written, I also got an opportunity to communicate with a lot of amazing people who are part of the Fedora community, as well as get bootstrapped onto the fedora-infrastructure team (and got an awesome badge for it)

Getting sponsored to the fi-apprentice group allows one to access the Fedora Infrastructure machines and setup in read only mode (via SSH) and understand how things are done. However, write access is only given to those in the sysadmin group, who are generally the l33t people in #fedora-admin on Freenode ;)

Apart from that, I got the opportunity to attend a lot of CommOps Hack sessions and IRC Meetings where we discussed and tackled various pending tickets from the CommOps Trac. We are currently working on Onboarding Badges Series for various Fedora groups. Onboarding badges are generally awarded to all those who complete a set of specific tasks (pre-requisites) and get sponsored into the respective Fedora group. One such badge is to be born very soon - take a look at the CommOps Onboarding ticket here.

Life-Cycle of a badge :

Getting a new badge ready is not a very easy task. A badge is born when someone files a badge idea in the Fedora Badges Trac. The person who files a ticket is expected to describe the badge as accurately as possible – including the description, name, and particulars. After that is done, someone needs to triage the badge and suggest changes to it (if required). The triager is also expected to fill in additional details about the badge so that the YAML can be written to automate the badge-awarding process. The next step is to write the YAML definitions for the badge and attach initial concept artwork. This is reviewed by the fedora-design team and is either approved or further hacked upon. After the approval, the badge is all set to be shipped. QR codes might be printed to manually award the badge, especially when it is an event badge.

Having talked about the badges, I was awarded the following badges during my GSoC Period :

Image : Coding period badges (and counting ..)

Badges are a great way to track your progress in the community. It is also super-useful for the new-contributors as they can work keeping the badges as goals. Read bee's blog post about it here.

To keep a check on myself, I have compiled all my data over here. This repo has all the things I have done inside the community and also has SVG graphs that hold the metrics of it. Hoping to have a great summer ahead.

Useful Links for new contributors :

You can also find me hanging out in #fedora-commops on Freenode. Feel free to drop-in and say hello :)

GSoC Mid Term Evaluation

Posted by Tummala Dhanvi on June 29, 2016 05:16 AM

tl;dr: I have failed the midterm evaluation of GSoC, but I am continuing to complete the project

I am sorry to let you guys know that I have failed the midterm evaluation. I reached my goals, but I should have been ahead and doing more! It was my mistake – I should have done more work, and I obviously didn’t communicate well. IMHO, though, it shouldn’t have made me fail, as I have done at least some good work. (I thought of contacting the Google member but put the idea down because there is a mistake on my side!)

Here is the review of my project given to me by my mentor, zach.

Tummala, it has been recommended that you fail at the midterm because of your lack of communication with myself as your mentor and the rest of the community. When we started this, I gave you a list of requirements and goals; you did not follow through on the requirements even after we talked about it several times, and while you managed to meet the goals of the first half, you wasted a lot of time and should be much further along. We talked during the bonding phase and I explained to you that packaging the source code required was your minimum goal, but that it was only the goal to make sure you got up to speed and did not have any issues with this complex part of the process. From your reports to me and your communication with others, it is clear that you spent very little time working over the first half, and had you applied yourself, you should have been able to make much more progress. In order to be a successful member of an open source community, you need to learn and appreciate the importance of communication with the wider community. Your lack of communication left many people questioning what you did during this period, and had you communicated better with myself and others, we could have identified issues and helped you to stay engaged. I appreciate the work you have done, and have enjoyed working with you. Please do not let this deter you from continuing to engage in open source communities, including Fedora, but as you do, keep in mind how important open communication is to the success of a project.

As zach mentioned, I am taking this as a learning process and am continuing the project.

I could take this one as an example of “FAIL FAST, FAIL OFTEN” – it’s better that I have failed as a student and can learn a lot from my mistakes than to fail at my first job or to not do anything!

But I will continue to work on the project.

 



GSoC 2016 Weekly Rundown: Assembling the orchestra

Posted by Justin W. Flory on June 24, 2016 04:34 PM

This week is the Google Summer of Code 2016 midterm evaluation week. Over the past month since the program started, I’ve learned more about the technology I’m working with, implementing it within my infrastructure, and moving closer to completing my proposal. My original project proposal details how I am working with Ansible to bring improved automation for WordPress platforms within Fedora, particularly to the Fedora Community Blog and the Fedora Magazine.

Understanding background

My project proposal originated from a discussion based on an observation about managing the Fedora Magazine. Fedora’s infrastructure is entirely automated in some form, often using Ansible playbooks to “conduct” the Fedora orchestra of services, applications, and servers. However, all the WordPress platforms within Fedora are absent from this automated setup. This has to do with the original context in which the platforms were set up.

However, now that automation is present in so much of the Infrastructure through a variety of tasks and roles, it makes sense to merge the two existing WordPress platforms in Fedora into the automation. This was the grounds for my proposal back in March, and I’ve made progress towards learning a completely new technology and learning it by example.

Initial research

GSoC 2016: “Ansible For DevOps” as a learning resource

From the beginning, I’ve used two resources as guides and instructions for GSoC 2016. “Ansible For DevOps“, a book by Jeff Geerling, has played a significant part in bootstrapping me with Ansible and its ins and outs. I’m about halfway through the book so far, and it has helped profoundly with learning the technology. Special thanks to Alex Wacker for introducing me to the book!

The second resource is, as one would expect, the Ansible documentation. The documentation for Ansible is complete and fully explanatory. Usually if there is an Ansible-specific concept I am struggling with learning, or finding a module for accomplishing a task, the Ansible documentation helps point me in the right direction quickly.

Research into practice

After making some strides through the book and the documentation, I began turning the different concepts into practical playbooks for my own personal infrastructure. I run a handful of machines for different purposes, including my Minecraft server, a ZNC bouncer, some PHP forum websites, and more. Ever since I began using headless Linux servers, I’ve never explored automation too deeply. Every time I set up a new machine or a service, I would configure it all manually, file by file.

First playbook

After reading more about Ansible, I began seeing ways I could try automating things in my “normal” setup. This helped ease me into Ansible without overwhelming myself with too-large tasks. I created repositories on Pagure for my personal playbooks and Minecraft playbooks. The very first one I wrote was my “first 30 minutes” on a new machine. This playbook sets up a RHEL / CentOS 7 machine with basic security measures and a few personal preferences ready to go. It’s nothing fancy, but it was a satisfying moment to run it in my Vagrant machine and see it do all of my usual tasks on a new machine instantly.

For more information on using Ansible in a Vagrant testing environment, check out my blog post about it below.

Setting up Vagrant for testing Ansible


Moving to Minecraft

After writing the first playbook, I moved on to other areas I could try automating to improve my “Ansible chops”. Managing my Minecraft server network is one place where I recognized I could improve automation. I spend a lot of time repeating the same sort of tasks, and having an automated way to do them would make sense.

I started writing playbooks for adding and restarting Minecraft servers based on the popular open source server software, Spigot. Writing these playbooks helped introduce me to different core modules in Ansible, like lineinfile, template, copy, get_url, and more.

I have also been using sites like ServerFault to find answers for any starting questions I have. Some of the changes between Ansible 1.x and 2.x caused some hiccups in one case for me.

Using Infrastructure resources

After getting a better feel for the basics, I started focusing less on my infrastructure and more on the project proposal. One of the key differences from writing playbooks, roles, and tasks for my own infrastructure is that there are already countless Ansible resources available from Fedora Infrastructure. For example, to create a WordPress playbook for Fedora Infrastructure, I would want to use the mariadb_server role to set up the site’s database. Doing that inside my own playbook (or writing a separate role for it just for WordPress) would increase the difficulty of maintaining the playbooks and make it inconvenient for other members of Fedora Infrastructure.

Creating a deliverable

In my personal Ansible repository, I have begun constructing the deliverable product for the end of the summer. So far, I have a playbook that creates a basic, single-site WordPress installation. The intention for the final deliverable is to have a playbook for creating a “base” installation of a WordPress network, and then any other tasks for creating extra sites added to the network. This will make sure that any WordPress sites in Fedora are running the same core version, receive the same updates, and are consistent in administration.

I also intend to write documentation for standing up a WordPress site in Fedora based on my deliverable product. Fortunately, there is already a guide on writing a new SOP, so after talking with my mentor, Patrick Uiterwijk, on documentation expectations and needs next week, I will be referring back to this document as a guide for writing my own.

Reflection on GSoC 2016 so far

I was hoping to have advanced farther by this point, but due to learning bumps and other tasks, I wasn’t able to move at the pace I had hoped. However, since starting GSoC 2016, I’ve made some personal observations about the project and how I can improve.

  • Despite being behind from where I wanted to be, I feel I am at a point where I am mostly on track and able to work towards completing my project proposal on schedule.
  • I recognize communication on my progress has not been handled well, and I am making plans to make sure shorter, more frequent updates are happening at a consistent and regular basis. This includes a consistent, weekly (if not twice every week) blog post about my findings, progress, commits, and more.
  • After talking with Patrick this week, we are going to begin doing more frequent check-ins about where I am in the project and making sure I am on track for where I should be.

Excerpt from GSoC 2016 evaluation form

As one last bit, I thought it would be helpful to share my answers from Google’s official midterm evaluation form from the experience section.

“What is your favorite part of participating in GSoC?”

“Participating in GSoC gave me a means to continue contributing to an open source community I was still getting involved in. I began contributing to Fedora in September 2015, and up until the point when I applied for GSoC, I had anticipated having to give up my activity levels of contributing to open source while I maintained a job over the summer. GSoC enabled me to remain active and engaged with the Fedora Project community and it has kept me involved with Fedora.

The Fedora Project is also a strong user of Ansible, which is what my project proposal mostly deals with. My proposal gives me a lot of experience and the opportunity to learn new technology that not only allows me to complete my proposal, but also to understand different levels and depths of contributing to the project far beyond the end of the summer. With the skills I am learning, I am being enabled as a contributor for the present and the future. To me, this is exciting, as the area I am contributing in has always been one that is interesting to me, and this project is jump-starting me with the skills and abilities needed to be a successful contributor in the future.

GSoC is also actively teaching me lessons about time management and overcoming challenges of working remote (which I will detail in the next question). I believe the experience I am getting now from participating in GSoC allows me to improve on myself as an open source developer and contributor and learn important skills about working remotely with others on shared projects.”

“What is the most challenging part of participating in GSoC?”

“The hardest part for me was (is) learning how to work remotely. In the past, when I was contributing at school, I had resources available to me where I could reach out to others nearby for assistance, places I could leave to focus, and a more consistent schedule. Working from home has required me to reach out for help either by improving how well I can search for something or reaching out to others in the project community about how to accomplish an objective.

There are also different responsibilities at home, and creating a focused, constructive space for me to focus on project work is an extremely important part of helping me accomplish my work. Learning to be consistent in my own work and setting my own deadlines is a large part of what I’m working on doing now. Learning the ability to follow and set personal goals for working on the project was a hard lesson to learn at first, but finding that balance quickly and swiftly is something that is helping me move forward.”

The post GSoC 2016 Weekly Rundown: Assembling the orchestra appeared first on Justin W. Flory's Blog.