September 26, 2015

Bitcoin Aware Barber's Pole

A few months back the guild hosted an Arduino Day workshop. The event was a great success: there was a large turnout, many neat presentations & demos, and much more. My project for the day was an Arduino controlled Barber's Pole that would poll data from the network and activate/deactivate multiple EL wires attached to it. Unfortunately, due to a few technical issues, the project took a bit longer than originally planned and had to be sidelined. But having recently finished it up, I present the Bitcoin Aware Barber's Pole, now on display at the guild!

The premise was straightforward: an Arduino Diecimila would be used in combination with the Ethernet Shield to retrieve data from the Internet and activate one of two EL wires mounted to a pole-like object. Several materials were considered for the pole, but we ended up using PVC as it was the easiest to work with and matched the aesthetic we were going for. Since the EL wire is driven from an AC source, we used two SPDT relays to activate the circuit based on the state of the Arduino's digital pin output. The constructed circuit was simple, incorporating the necessary components to handle flyback current.

The software component of this project is what took the most time, due to several setbacks. Perhaps the biggest was the shortage of address space we had to work with. Microcontroller platforms are notorious for this, and the Diecimila only gave us 16KB of flash memory, which, after what I'm assuming is space reserved for the bootloader, shared libraries, and other logic, amounts to ~14KB for the user program and data. Contrast this with modern general purpose PCs, where you'd be hard pressed to find a system with less than 2GB of memory! This had many side effects, including not having enough address space to load and use the Arduino HttpClient or Json libraries. Thus a very rudimentary HTTP request and parsing implementation was devised to serve the application's needs. All in all it was very simple but specialized, handling only the edge cases we needed and nothing else.
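The streaming idea can be sketched as follows. This is an illustrative reduction in plain C++ (not the actual sketch code, whose real implementation appears further below): the end of the HTTP headers is detected one byte at a time, so only a few bytes of state are ever needed rather than a buffer for the whole response.

```cpp
#include <cstddef>
#include <cassert>

// Tracks how much of the "\r\n\r\n" header terminator has been seen so
// far; feed() returns true on the byte that completes the blank line.
struct HeaderScanner {
  size_t matched = 0;

  bool feed(char c) {
    static const char terminator[] = "\r\n\r\n";
    if (c == terminator[matched]) {
      matched += 1;
      if (matched == 4) return true;   // blank line reached: body follows
    } else {
      matched = (c == '\r') ? 1 : 0;   // restart the match
    }
    return false;
  }
};

// Feed a buffer through the scanner; returns the index of the first body
// byte, or -1 if the headers have not ended within `len` bytes.
int body_offset(const char* data, size_t len) {
  HeaderScanner s;
  for (size_t i = 0; i < len; i++)
    if (s.feed(data[i]))
      return (int)(i + 1);
  return -1;
}
```

The same byte-at-a-time style is what lets the real sketch get by with a 32-byte data buffer.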

Of course the limited address space also meant we were limited in the number of constants and variables we could use. Using the heap (also small on this platform) always introduces additional complexity and logic of its own, so it was avoided. Since each data source would require metadata to access it, we decided to poll only one location and use it to activate either of the two EL wires depending on its state.

In all of this you may be asking why we didn't just order a newer Arduino chip with a bigger address space, to which I reply: what would I do with the one that I had?!?! Plus developing for platforms with memory & other restrictions introduces fun challenges of its own.

At one point we tried splitting the sketch into multiple modules via the Arduino IDE interface. This was done to try and better organize the project, in a more OOD fashion, but it introduced more complexity than it was worth. From what I gather, most sketches are single-module implementations, perhaps incorporating some external libraries via the standard mechanisms. When we attempted to deviate from this we noticed some weird behavior, perhaps as a result of the includes from the centralized Arduino & supporting libraries being pulled into multiple modules. We didn't debug too far, as overall the application isn't that complex.

One of the last challenges we had to face was selecting the data to poll. Again, due to the limited memory space, we could only store so much HTTP response data, and even rudimentary parsing of JSON or another format would take more logic than we had space for. Luckily we found Bitcoin Average, which provides an awesome API for getting up-to-date Bitcoin market data. Not only do they provide a rich JSON-over-REST interface, but fields can also be polled individually for their flat text values; we retrieve the BTC/USD market average every 5 minutes. When Bitcoin goes up, the blue light is activated; when it goes down, the red light is turned on. Of course this value is a decimal, and enabling floating point arithmetic consumes more memory. To avoid this, we parsed the integer and decimal portions of the price separately and ran the comparisons individually (in sequence).
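The split-and-compare trick can be sketched in plain C++ (an illustration, not the exact sketch code; it assumes the flat-text feed always serves a fixed number of decimal digits, so the fractional parts compare correctly as integers):

```cpp
#include <cstdlib>
#include <cstring>
#include <cassert>

// Split a flat-text price like "239.48" into whole = 239 and frac = 48,
// avoiding floating point entirely.
void parse_price(const char* text, int* whole, int* frac) {
  *whole = atoi(text);               // atoi stops at the '.'
  const char* dot = strchr(text, '.');
  *frac = dot ? atoi(dot + 1) : 0;   // no decimal portion -> 0
}

// Compare integer parts first, then decimal parts (in sequence):
// positive if price a is above price b, negative if below, 0 if equal.
int price_compare(int a_whole, int a_frac, int b_whole, int b_frac) {
  if (a_whole != b_whole) return a_whole - b_whole;
  return a_frac - b_frac;
}
```

Two ints and a sequential comparison replace the entire floating point runtime, which is exactly the memory trade the sketch needed.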

But unfortunately there was one last hiccup! While the Bitcoin Average documentation stated that HTTP was supported, in fact querying their server via port 80 just resulted in a 301 redirect to HTTPS on port 443. Since HTTPS/SSL handling proves to be outright impossible due to the complexity of the algorithms, even w/ more modern Arduino platforms w/ larger address spaces, we had to devise a way to communicate with the server via plain HTTP in order to retrieve the data. To do so we wrote & deployed a proxy that listens for HTTP requests, issues an HTTPS request to Bitcoin Average, and returns the result. This was simple enough to do w/ the Sinatra micro-framework, as you can see below:

# HTTP -> HTTPS proxy
# Written to query the via http (only accessible by https).
# Run as a standard Rack / Sinatra application
# Author: Mo Morsi <>
# License: MIT

require 'sinatra'
require 'open-uri'

URL = ""

get '/' do
  open(URL) do |content|
  end
end
The final result was hosted on this server and the Arduino sketch was updated to use it. The full logic behind the Barber's Pole can be seen below:

//// Bitcoin Barber Shop Pole
//// Author: Mo Morsi <>
//// Arduino Controller Sketch
//// License: MIT
//// For use at the Syracuse Innovators Guild (

#include <SPI.h>
#include <Ethernet.h>

//// sketch parameters

byte mac[]                           = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
int port                             = 80;
char server[]                        = "";
char host[]                          = "Host:";
char request[]                       = "GET /barber/ HTTP/1.1";
char user_agent[]                    = "User-Agent: arduino-ethernet";
char close_connection[]              = "Connection: close";
char content_length_header[]         = "Content-Length";

char CR                              = '\r';
char NL                              = '\n';

unsigned long lastConnectionTime     = 0;
const unsigned long postingInterval  = 300000; // every 5 mins

boolean lastConnected                = false;

const int  max_data                  = 32;
int  data_buffer_pos                 = 0;
char data_buffer[max_data];

int  content_length                  = -1;
boolean in_body                      = false;

int current_btc                      = 0;
int current_btc_decimal              = 0; // since we're not using floats

const int blue_pin                   = 5;
const int red_pin                    = 7;
unsigned long lastLightingTime       = -1;
const unsigned long lightingInterval = 5000;

// arduino hook in points & config

EthernetClient client;

void setup() {
  serial_config();
  pins_config();
  net_config();
}

void loop() {
  net();
  lights();
}

void pins_config(){
  pinMode(blue_pin, OUTPUT);
  pinMode(red_pin, OUTPUT);
}

void serial_config(){
  Serial.begin(9600);
  while (!Serial) { ; } // this check is only needed on the Leonardo
}

// network operations

void net(){
  if(client.available())
    net_read();

  if(should_reset())
    net_reset();

  else if(should_issue_request())
    net_request();

  lastConnected = client.connected();
}

void block(){
  for(;;) { ; }
}

boolean should_reset(){
  return !client.connected() && lastConnected;
}

void net_reset(){
  client.stop();
}

boolean should_issue_request(){
  return !client.connected() && (millis() - lastConnectionTime > postingInterval);
}

void net_config(){
  if (Ethernet.begin(mac) == 0) {
    Serial.println("net failed");
    block();
  }
}

void net_read(){
  if(client.available()) {
    char c =;
    buffer_append(c);
    process_response();
  }
}

void net_request(){
  if (client.connect(server, port)) {
    client.println(request);
    client.println(host);
    client.println(user_agent);
    client.println(close_connection);
    client.println();

    lastConnectionTime = millis();
  }else {
    client.stop();
  }
}

// data buffer management

void buffer_append(char c){
  data_buffer[data_buffer_pos] = c;
  data_buffer_pos += 1;

  if(data_buffer_pos >= max_data)
    data_buffer_pos = 0;
}

void buffer_reset(){
  data_buffer_pos = 0;
}

// moves last char in buffer to first, sets pos after
void buffer_cycle(){
  data_buffer[0]  = data_buffer[data_buffer_pos-1];
  data_buffer_pos = 1;
}

void buffer_print(){
  Serial.print("buf ");
  Serial.print(data_buffer_pos);
  Serial.print(": ");
  for(int p = 0; p < data_buffer_pos; p++)
    Serial.print(data_buffer[p]);
  Serial.println();
}

// http parsing / handling

int char_pos(char ch){
  for(int p = 1; p < data_buffer_pos; p++)
    if(data_buffer[p] == ch)
      return p;
  return -1;
}

int seperator_pos(){
  return char_pos(':');
}

int decimal_pos(){
  return char_pos('.');
}

boolean status_detected(){
  if(data_buffer_pos < 4) return false;
  int cr_pos    = data_buffer_pos - 3;
  int lf_pos    = data_buffer_pos - 2;
  int alpha_pos = data_buffer_pos - 1;

  // only upper case letters
  int alpha_begin = 65;
  int alpha_end   = 90;

  return data_buffer[cr_pos]    == CR          &&
         data_buffer[lf_pos]    == NL          &&
         data_buffer[alpha_pos] >= alpha_begin &&
         data_buffer[alpha_pos] <= alpha_end;
}

boolean header_detected(){
  if(data_buffer_pos < 5) return false;
  int cr_pos     = data_buffer_pos - 2;
  int lf_pos     = data_buffer_pos - 1;

  return seperator_pos()     != -1   &&
         data_buffer[cr_pos] == CR   &&
         data_buffer[lf_pos] == NL;
}

boolean is_header(char* name){
  int pos = 0;
  while(name[pos] != '\0'){
    if(name[pos] != data_buffer[pos])
      return false;
    pos += 1;
  }
  return true;
}

boolean body_detected(){
  if(data_buffer_pos < 4) return false;
  int first_cr  = data_buffer_pos - 4;
  int first_lf  = data_buffer_pos - 3;
  int second_cr = data_buffer_pos - 2;
  int second_lf = data_buffer_pos - 1;

  return (data_buffer[first_cr]  == CR &&
          data_buffer[first_lf]  == NL &&
          data_buffer[second_cr] == CR &&
          data_buffer[second_lf] == NL);
}

int extract_content_length(){
  int value_pos = seperator_pos() + 1;
  char content[data_buffer_pos - value_pos + 1];
  for(int p = value_pos; p < data_buffer_pos; p++)
    content[p-value_pos] = data_buffer[p];
  content[data_buffer_pos - value_pos] = '\0';
  return atoi(content);
}

void process_headers(){
  if(status_detected()){
    buffer_cycle();
  }

  else if(header_detected()){
    if(is_header(content_length_header))
      content_length = extract_content_length();
    buffer_reset();
  }

  else if(body_detected()){
    in_body = true;
    buffer_reset();
  }
}

int extract_new_btc(){
  int decimal  = decimal_pos();
  int iter_end = decimal == -1 ? data_buffer_pos : decimal;

  char value[iter_end + 1];
  for(int p = 0; p < iter_end; p++)
    value[p] = data_buffer[p];
  value[iter_end] = '\0';
  return atoi(value);
}

int extract_new_btc_decimal(){
  int decimal  = decimal_pos();
  if(decimal == -1 || decimal == data_buffer_pos - 1) return 0;

  int iter_start = decimal + 1;
  int buf_size   = data_buffer_pos - iter_start;

  char value[buf_size + 1];
  for(int p = iter_start; p < data_buffer_pos; p++)
    value[p - iter_start] = data_buffer[p];
  value[buf_size] = '\0';
  return atoi(value);
}

void process_body(){
  if(!in_body || data_buffer_pos < content_length) return;

  process_new_btc(extract_new_btc(), extract_new_btc_decimal());

  content_length = -1;
  in_body = false;
  buffer_reset();
}

void process_response(){
  if(!in_body)
    process_headers();
  else
    process_body();
}

// target specific data processing

void print_btc(int btc, int btc_decimal){
  Serial.print(btc);
  Serial.print('.');
  Serial.println(btc_decimal);
}

boolean value_increased(int new_btc, int new_btc_decimal){
  return new_btc > current_btc || (new_btc == current_btc && new_btc_decimal > current_btc_decimal);
}

boolean value_decreased(int new_btc, int new_btc_decimal){
  return new_btc < current_btc || (new_btc == current_btc && new_btc_decimal < current_btc_decimal);
}

void process_new_btc(int new_btc, int new_btc_decimal){
  //print_btc(current_btc, current_btc_decimal);
  //print_btc(new_btc, new_btc_decimal);

  if(value_increased(new_btc, new_btc_decimal)){
    turn_on_blue();
  }

  else if(value_decreased(new_btc, new_btc_decimal)){
    turn_on_red();
  }

  current_btc = new_btc;
  current_btc_decimal = new_btc_decimal;
}

// pin output handling

boolean should_turn_off(){
  return lastLightingTime != -1 && (millis() - lastLightingTime > lightingInterval);
}

void lights(){
  if(should_turn_off()){
    turn_off_both();
    lastLightingTime = -1;
  }
}

void turn_on_blue(){
  lastLightingTime = millis();
  digitalWrite(blue_pin, HIGH);
}

void turn_off_blue(){
  digitalWrite(blue_pin, LOW);
}

void turn_on_red(){
  lastLightingTime = millis();
  digitalWrite(red_pin, HIGH);
}

void turn_off_red(){
  digitalWrite(red_pin, LOW);
}

void turn_on_both(){
  turn_on_blue();
  turn_on_red();
}

void turn_off_both(){
  turn_off_blue();
  turn_off_red();
}

The actual construction of the pole consists of a short length of PVC pipe capped at both ends. The text was spray painted on, and a small hole was drilled in the back for the power & network cables. The circuitry was simply placed flat inside the PVC; no special mounting or attachments were used or needed.

The final setup was placed near the entrance of the Guild, where anyone walking in / out could see it.

All in all it was a fun project that took a bit longer than originally planned, but when is that not the case?! Microcontrollers always prove to be unique environments, and although in this case it just amounted to some C++ development, the restricted platform presented several interesting challenges I hadn't encountered since grad school. Going forward I'm contemplating the Raspberry Pi platform for my next project, as it seems a bit more flexible, has more address space, and is still available at a great price point.



September 13, 2015

CloudForms v2 (MiQ) DB - 08/2015

Now that's a db! Created using Dia. Relevant table reference / listing can be found here

Modeling is the first step towards Optimization.


August 01, 2015

Polished to a Resilience

It's been a long time since the last post, and quite an interesting year! We all move forward, as do the efforts on various fronts. Just a quick update today on two projects previously discussed, as things ramp up after some downtime (look for more updates going into the fall / winter).

Polisher has received a lot of work in the domains of refactoring and new tooling. The codebase is more modular and robust, test coverage has been greatly expanded, and as for the new utilities:

  • gem_mapper.rb: Lists all gem / gemfile dependencies & the versions available downstream
  • missing_deps.rb: Highlights dependencies missing downstream as well as any alternate versions available
  • gems2update.rb: Cross references dependencies downstream w/ updates available upstream and recommends specific versions to update to. This facilitates a consistent update across dependencies which may impose different requirements on the same gems. If a unified update strategy cannot be deduced gems2update will highlight the conflicts.

These can be seen in action via the screencasts referenced above.

Resilience, our experimental ReFS parser, has also been polished. Various utilities previously written have been renamed, refactored, and expanded, and new tooling has been written to continue the analysis. Of particular note are:

  • fcomp.rb - A file metadata comparison tool that runs a binary diff on file metadata in the fs
  • axe.rb - The attribute extractor; pulls file-specific metadata out of the ReFS filesystem and dumps it into a local file. Additional analysis will be done on this metadata (in part)
  • rarser.rb - The complete filesystem parser / file extractor; pulls files and directories off the image and dumps them into local files

Also worthy to note are other ongoing efforts including updating ruby to 2.3 in rawhide and updating rails to 4.x in EPEL.

Finally, the SIG has been going through (another) transitional period. While membership is still growing, there are many logistical matters currently up in the air that need to be resolved. Look for updates on that front, as well as many others, in the near future.


July 14, 2015

Event report: FUDCon APAC 2015, Pune

I’m writing a blog post after a very long time. Somewhere between the last post and this one, I graduated and started working for Mesitis Capital as Product Designer. On the open source community front, I haven’t programmed much recently, but I have been mentoring a couple of students over this year’s GSoC. Two weeks ago, I was at FUDCon in Pune. Here’s a quick summary.

Day 0 - Arrival Day

For the first evening, it was mostly just people arriving and us meeting up over dinner at Kushal’s place. I really enjoyed meeting Suchakra after long - we had a quick discussion around our AskFedora student Anuradha, since mid-term evaluations were around the corner. I met Harish and Danishka who live in Singapore - they shared with me tips around housing, transport, expenses, hackerspaces - all the things I’ll need when I move later this year.

I had a workshop the next day, so I wanted to sleep “early”, but it got pretty late as usual ;-)

Day 1 - First Workshop Day

FUDCon at MITCOE - picture stolen from Suchakra's blog

The morning was mostly spent meeting folks who arrived that day - Gnokii, Tuan, and the rest. Come afternoon, it was time for my workshop on building responsive front ends. This was my first attempt at doing a few things - conducting a session without slides, programming on stage, and the topic itself - and I think a lot of those choices were great, even though I ended up heavily modifying what I had wanted to show. I do regret that I couldn’t get around to teaching the material I really wanted to, but given a beginner audience, I’m happy they picked up some key ideas. A couple of them also emailed me after the event asking for further resources, so it does look like it was handy.

In the evening, we had a sort of mini FUDPub - most of us speakers & volunteers staying at the hotel went to a nearby Pub. Gnokii, Somvandda, Yogi, Danishka and I got on a table and we were discussing breweries and food - pretty interesting stuff. It turns out Charul and Sinny were neighbors - so Suchakra and I ended up chatting about work, projects, college life, etc - again sleeping quite late.

Day 2 - Meeting students

I didn’t have any sessions scheduled for the second day, so I took the opportunity to hang out with students. I learned that many students from Amrita University, Kollam were in town, so we headed out for lunch together, discussing projects and the scope for them to contribute to some FOSS projects. Later during the day, some students from MITCOE spent quite some time with me; we talked about how the Fedora Project is organized, who does what, and how one gets into the areas that interest them. There were two students interested in contributing to the Design team, so I explained the various things the Design team does, the people involved, and the tools they use, and encouraged them to attend the workshops from the Design track on the final day.

In the evening, we had the social event at bluO in Phoenix MarketCity, a large shopping complex. There was bowling organized, great food, and a very energetic environment.

Day 3 - Final Day

I had an early joint workshop session on how Git works with Mayur. Once again, catering to the audience, we decided to focus on what it is, and how to fiddle with it. While Mayur took the stage and maintained an overall flow around the session, I went around looking at people’s screens and ensuring everyone was doing the right thing. There were lots of questions popping around Git server centric infrastructure - it was fun answering them. There were also a couple of people who weren’t new to Git but didn’t like merge conflicts, so we sat down and helped them around it.

Harish soon followed with a key signing party. I’m happy I attended it - it was a great refresher and it cleared up some concepts in my head around the whole GPG process. As is always the case, I learn better by doing, so I’ll try to teach it to somebody and hopefully it will become clearer that way.

For the night, we had dinner at the hotel - once again, it was fun recommending Indian dishes to my non-local friends, and it does look like they enjoyed it.

Overall, amazing time at my first FUDCon. I look forward to it next year! :-)

Picture credits: Suchakra’s blog at

June 02, 2015

Update on CentOS GSoC 2015

Here’s an update on the CentOS Project Google Summer of Code for 2015 posted on the CentOS Seven blog:

This might be of interest to the Fedora Project community, so I’m pushing my own reference here to appear on the Fedora Planet. Much of the work happening in the CentOS GSoC effort may be useful as-is or as elements within Fedora work. (In at least one case, the RootFS build factory for Arm, the work is also happening partially in Fedora, so it’s a triple-win.)

April 08, 2015

Working with Objects

Creating objects

Before starting our example, I would like to show a bit of what Parse offers us.

Most systems save, update, and query information in a database, and here Parse has made the developer's life much easier. Suppose I want to create an object called Pessoa (Person).

var Pessoa = Parse.Object.extend("Pessoa");

First I need to create a reference to a new Parse object; for that I use the Parse.Object.extend function. With the reference created, I can start creating instances:

var pessoa = new Pessoa();
var outraPessoa = new Pessoa();

See? Quite simple.


With our objects created and properly instantiated, we can create and modify attributes. Attributes can be created and/or assigned using the set method. Suppose I want to create an attribute called nome (name) on my pessoa object:

pessoa.set("nome", "Fulano");

To retrieve this attribute we use the get method:

var nome = pessoa.get("nome");


Besides attributes, we can define methods on our objects:

var Pessoa = Parse.Object.extend("Pessoa", {
  // Instance methods
  falar : function (frase) {
    alert(frase);
  },

  dormir : function () {
    alert(this.get("nome") + " dormiu");
  }
}, {
  // Class methods
  create : function (nome) {
    var pessoa = new Pessoa();
    pessoa.set("nome", nome);
    return pessoa;
  }
});

Note that we defined two kinds of methods, instance methods and class methods, and each is available according to its context:

var pessoa = new Pessoa();
pessoa.set("nome", "Gohan");
pessoa.falar("Oi, eu sou o Gohan");
pessoa.dormir(); // Gohan dormiu

var outraPessoa = Pessoa.create("Goku");
outraPessoa.falar("Oi, eu sou o Goku");
outraPessoa.dormir(); // Goku dormiu

Persisting Objects

Now that we know how to create an object, let's learn how to persist it to Parse. Just use the save function. For this example I will reuse the Pessoa class created above.

var pessoa = new Pessoa();
pessoa.set("nome", "Goku");
pessoa.set("ki", 9000);;

The save method accepts callbacks to handle events; the available events are success and error. Follow the code below:, {
  success : function (pessoa) {
    // the object was persisted successfully
  },
  error : function (message) {
    // saving failed
  }
});
What happens if everything goes well? Parse will look for an entity called Pessoa and, if it doesn't find one, will create it for you. You will notice that your entity has the following attributes:

  • objectId, the object's identifying key;
  • nome, the attribute we created;
  • ki, the attribute we created;
  • createdAt: created automatically;
  • updatedAt: created automatically.

See how easy it is?

And what about that null we passed? Simple: we can initialize the class attributes using the get and set methods, or we can optionally pass everything to the save method, like this:{nome: "Goku", ki: 9000}, {success: …, error: …});

Retrieving, Updating, and Deleting Objects

Retrieving an object is as simple as saving it; we just need to use the Parse.Query object. The easiest way is to retrieve by objectId:

var Pessoa = Parse.Object.extend("Pessoa");
var query = new Parse.Query(Pessoa);

Now suppose the person we want has an objectId equal to "xWMyz4YEGZ". To retrieve it we need to do the following:

query.get("xWMyz4YEGZ", {
  success: function(pessoa) {
    console.log(pessoa.get("nome"));
  }
});

Note that, just like the save method, the get method also has the success and error callbacks.

If everything went well, the person is returned and its data printed to the console. See how easy it is?

Now what if I want to change something? Simple: just modify the desired attributes and call the save method again, like this:

query.get("xWMyz4YEGZ", {
  success: function(pessoa) {
    pessoa.set("nome", "Gohan");;
  }
});

Done! The object is updated! If you want to undo any change that has not yet been saved, just use the fetch method.

pessoa.fetch({
  success: function(pessoa) {
    // unsaved local changes have been discarded
  },
  error: function (message) {
  }
});


And to delete? Just use the destroy method.

query.get("xWMyz4YEGZ", {
  success: function(pessoa) {
    pessoa.destroy();
  }
});

Calling the unset method deletes an attribute:

query.get("xWMyz4YEGZ", {
  success: function(pessoa) {
    pessoa.unset("nome");;
  }
});

Utility methods

Parse objects offer some utility methods depending on the data type we are using. They are:

Increment and Decrement

Very useful methods for updating numeric fields without worrying about concurrency.




Thought there wouldn't be any utilities for working with arrays? You're mistaken! Parse offers us the following methods:

  • add, which appends an object to the end of the list;
  • addUnique, which adds an object only if it is not already present. It is important to note that the position where it is stored is not necessarily the last;
  • remove, which removes all instances of the given object.

Using them is quite simple:

pessoa.addUnique("mestres", "Mestre Kame");
pessoa.addUnique("mestres", "Senhor Kayo");;

Everything is easy with Parse, and it just works!

In the next post we will start our Parse Social project.

Introduction to Parse

Getting to know Parse

Parse offers us a small administration interface. Although it is quite intuitive, I will present it here.
The image above shows your Dashboard. Here you can create applications or see a summary of the existing ones.
When you select an application, Parse presents more information about it. The most important part is the Core, shown in the figure above. Here you can see all the stored objects, create new objects, modify existing ones, and even delete them.

The other sections will not be covered, since they are outside the scope of this material, but feel free to explore them a bit more on your own.

Starting a Project

First we need to create an app. So log into your account (if you haven't created one yet, do so. It's free) and click Create a New App on your dashboard:
For these examples we will create an app called Parse Social. This app will be a small social network built on Parse.
App created! Now we need the access keys. Click Keys and you will be redirected to the page below:
The Application ID is the identifying key of the app we just created. The other keys are used depending on which API we use. Since we will be using JavaScript, copy the Application ID and the JavaScript Key and paste them somewhere.

Now it is time to start our project. In this example I will use SublimeText, the editor I use day to day, the Google Chrome browser, and the IIS web server. You can use any text editor and browser you prefer. The web server is optional, but I recommend using one. I use IIS because it comes installed and configured on Windows, but you can use Apache or another server of your choice.

This will be our directory structure:
For this project we will use the single-page pattern, since that is how I like to work. Now on to index.html. It should start out roughly like the figure below:

Note that in this project, besides Parse, we will use Bootstrap and Font Awesome, since I like working with them. We will also use a Google font. I won't say much about them, since I am not a specialist and the subject is outside our scope.

Notice that nothing is bundled into the project; everything is imported from the cloud. I like working this way because it is easier, but if you would rather bundle everything into your application, feel free.

So let's understand the imports:

  • Lines 6, 7, 8, and 9 are the imports for Bootstrap;
  • line 11 is the Font Awesome import;
  • line 12 imports the font we will use;
  • line 14, finally, is the Parse library.

Notice that on line 25 we import a file called app.js. Let's create it and start coding. To begin, we only need to include the following:

Parse.initialize("APPLICATION_ID", "JAVASCRIPT_KEY");

Remember the keys I asked you to keep? Replace APPLICATION_ID and JAVASCRIPT_KEY with their respective values. With that, we are ready to begin!


I cannot finish this chapter without explaining a bit about the command we just used. Whenever you need to write or retrieve an object from Parse, it is important to connect to it first. That is what the initialize command does.

This operation is expensive, so one of the main reasons I like working with single-page apps so much is that I call it only once. There is no problem in staying connected to Parse the whole time; it is actually rather useful.

April 07, 2015

Course - Application Development with Parse - The Basics

Have you heard of Parse? Since I discovered it, my experience with application development has changed a lot. So much that I decided to dedicate part of my time to writing a bit about it. Although the documentation is quite complete, I felt the lack of material in Portuguese at a slightly more introductory level.

So what is Parse?

Parse is a backend initially developed for application development. It abstracts many common features, such as authentication, object CRUD, file handling, and push notifications.

With Parse you can develop for the following platforms: Android, iOS, Windows Phone, Web, and Arduino.

The supported languages are: .NET, Java, JavaScript, PHP, Objective-C, Swift, and C.

This course will focus on Parse for JavaScript, since it is the most accessible to everyone.

In the first part I will focus on the basics: those little things you will use all the time.

Until the next post.

March 15, 2015

The Right Mind And The Confused Mind

From The Unfettered Mind, which offers some great advice on the subject of meditation (take or leave what you will):

The Right Mind And The Confused Mind
The Right Mind is the mind that does not remain in one place. It is the mind that stretches throughout the entire body and self. The Confused Mind is the mind that, thinking something over, congeals in one place. When the Right Mind congeals and settles in one place, it becomes what is called the Confused Mind. When the Right Mind is lost, it is lacking in function here and there. For this reason, it is important not to lose it. In not remaining in one place, the Right Mind is like water. The Confused Mind is like ice, and ice is unable to wash hands or head. When ice is melted, it becomes water and flows everywhere, and it can wash the hands, the feet or anything else. If the mind congeals in one place and remains with one thing, it is like frozen water and is unable to be used freely: ice that can wash neither hands nor feet. When the mind is melted and is used like water, extending throughout the body, it can be sent wherever one wants to send it. This is the Right Mind.

The Mind Of The Existent Mind And The Mind Of No-Mind
The Existent Mind is the same as the Confused Mind and is literally read as the "mind that exists." It is the mind that thinks in one direction, regardless of subject. When there is an object of thought in the mind, discrimination and thoughts will arise. Thus it is known as the Existent Mind.

The No-Mind is the same as the Right Mind. It neither congeals nor fixes itself in one place. It is called No-Mind when the mind has neither discrimination nor thought but wanders about the entire body and extends throughout the entire self.

The No-Mind is placed nowhere. Yet it is not like wood or stone. Where there is no stopping place, it is called No-Mind. When it stops, there is something in the mind. When there is nothing in the mind, it is called the mind of No-Mind. It is also called No-Mind-No-Thought.

When this No-Mind has been well developed, the mind does not stop with one thing nor does it lack any one thing. It is like water overflowing and exists within itself. It appears appropriately when facing a time of need. The mind that becomes fixed and stops in one place does not function freely. Similarly, the wheels of a cart go around because they are not rigidly in place. If they were to stick tight, they would not go around. The mind is also something that does not function if it becomes attached to a single situation. If there is some thought within the mind, though you listen to the words spoken by another, you will not really be able to hear him. This is because your mind has stopped with your own thoughts.

If your mind leans in the directions of these thoughts, though you listen, you will not hear; and though you look, you will not see. This is because there is something in your mind. What is there is thought. If you are able to remove this thing that is there, your mind will become No-Mind, it will function when needed, and it will be appropriate to its use.

The mind that thinks about removing what is within it will by the very act be occupied. If one will not think about it, the mind will remove these thoughts by itself and of itself become No-Mind. If one always approaches his mind in this way, at a later date it will suddenly come to this condition by itself. If one tries to achieve this suddenly, it will never get there.

An old poem says:

To think, "I will not think"-
This, too, is something in one's thoughts.
Simply do not think
About not thinking at all.


February 07, 2015

Desk Headphones

Recently, I replaced the headphones I've had for a long time with some new ones. I've used Beyerdynamic DT 770s for years (now discontinued). On a flight last year, someone suddenly leaned their seat back in front of me, the cable got caught, and the jack bent badly. The sound cut in and out a lot. I realize I could just replace the jack, but I thought it was a good excuse to go nuts.

My whole setup with new headphones, case, DAC, preamp, and cables was a little under $400 (half for the headphones and half for all of the toys). You could definitely just get the headphones for $200.

If you're looking for something on the cheap, I recommend a pair of Sony MDR7506. Standard issue studio headphones. Can't go wrong for only $70.


Beyerdynamic Custom One Pro

I ended up getting a pair of Beyerdynamic Custom One Pro since I liked my DT 770s so much. So far I'm a big fan. They only run $200. I didn't want to go overboard. My friend Bryn Jackson recommended PSB M4U, V-MODA M-100, and Master & Dynamic MH40 as well. They are all really solid choices, but I wanted to stick with Beyerdynamic and stay a bit on the cheaper side.

You can also customize them:

My headphones

I ordered replacement cushions directly from Beyerdynamic. They have all kinds of things you can change. Definitely check it out.


I also picked up a hard case to protect my new investment.

Slappa HardBody PRO Headphone Case

Ended up getting the Slappa HardBody PRO Headphone Case for only $30. Not bad. It's a bit bulky but it should fit nicely in a backpack. It's a little too big to fit comfortably in a messenger bag unfortunately.


Next is the DAC (digital-to-analog converter). This takes the digital audio from your computer (via USB) and converts it to the analog audio that your headphones can actually produce. Your computer, phone, etc. has one of these built in since it has a headphone jack. Most stock ones are pretty low quality, at least to audio nerds.

FiiO E10K

I ended up getting a FiiO E10K (also a recommendation from Bryn) which I'm super happy with. It sounds really great. Especially for being only $70. It's also powered by USB which is really nice.

Headphone Amp

Finally, I got a tube headphone amp. I'm a big fan of tubes. They make everything sound warm and full. My friend, Sam McDonald, recommended the one he had.

Bravo Audio V2 Class A 12AU7 Tube Multi-Hybrid Headphone Amplifier

I've been really happy with the Bravo Audio V2 Class A 12AU7 Tube Multi-Hybrid Headphone Amplifier. My only complaints are the knob is a little close to the headphone jack and the input is on the side instead of the back, but those are just minor nitpicks. There's also a power cable you need to plug into the wall. That's expected for a tube preamp though. It sounds really good. Especially for only $70.

Setting It Up

The only other thing I got was a fancy 1/8" to RCA cable for $9.

  1. Connect the FiiO with the included USB cable to your computer
  2. Plug the 1/8" end of the 1/8" to RCA cable into the line out on the back of the FiiO
  3. Set the gain switch on the back of the FiiO to "L"
  4. Plug in the preamp's power
  5. Plug the RCA end of the 1/8" to RCA cable into the preamp on the right side
  6. Turn the volume on the preamp all the way down and connect your headphones
  7. Turn the preamp on with the switch on the back and turn the FiiO on with the dial on the front. Since we're using the line out of the FiiO, the volume knob won't do anything so we can control it with the preamp instead.

That's it! You can experiment with the bass switch on the front of the FiiO. For my setup, I've enjoyed having it on most of the time.

Easy enough! Let me know if you try this out on Twitter!

January 29, 2015

Event report: Design FAD, Westford

We had a fantastic Design team FAD between 16-18 January at Red Hat’s Westford office. For me, it turned out to be an opportunity to (finally!) meet in person with my mentor Emily, and Mo, two people I’ve been in touch with over IRC/email for what feels like forever. Among others physically present were Marie, Sirko, Suchakra, Chris, Prima, Zach, Samuel, Langdon, Paul, Luya and Ryan. Kushal joined remotely despite the odd hours in India.

Mo on the whiteboard

Mo did a great job outlining topics we needed to discuss on the whiteboard the first day. At first it looked like a lot to me and honestly I felt like we’d never get to half of them. At the end of the day, to my (pleasant) surprise, we had covered most, if not all of the planned topics. We spent quality time evaluating what the team’s goals are and prioritizing them. We revised our ticket flow into a more structured and well-defined one. We discussed newbie management and how to deal with design assets.

Random discussions

Suchakra, Zach and I worked on redesigning askfedora. What was supposed to be a low-fidelity mockup wound up being pretty hi-fi, since I wanted to take Inkscape lessons from Suchakra and we dug into the details. Suchakra has blogged twice about it, so if you’d like to learn more, find the first one here and the second here.

Askfedora mockup - photo courtesy Suchakra's blog

If we manage to squeeze in time, we’d like to work on the redesign in the weekends. Another group focused on cleaning tickets, so as you’d imagine, lots of trac emails getting tossed around. When I had a look at the design trac after they were done, it seemed like another trac altogether!

Ticket discussions

GlitterGallery was also brought up. What I took back for the GG team from the FAD was that our main priorities are improving the file history view and SparkleShare integration. On my return, I’ve already started work on a new branch.

Quick GG status demo

Emily and I intended to do a GG hackfest once everyone left on the final day, but we had transportation issues and couldn’t continue. To make up for that, we held an IRC meeting yesterday to assign tasks to Paul, Emily, Shubham (new kid on the block), and me. I’m excited that the repo is active again!

Productive FAD for everyone :) Thanks to the local organizers and Gnokii, super worthwhile.

(Gnokii, sorry I sucked at gaming!)

Gnokii playing Champions of Regnum

(Photos courtesy Prima).

January 25, 2015

Event report: IIT Madras Hackfest & Release Party

This year started for me with a three-night Hackfest workshop at the Indian Institute of Technology, Madras. While the workshop strayed completely off my goals, the post-event commentary seems to indicate that attendees had a good time.

Students were screened for attending based on a general-FOSS questionnaire, followed by their submissions to a set of programming tests set by the mentors. I mentored on behalf of the Fedora Project. Other mentors included Anoop & Kunal (Drupal), Kunal (Wikimedia Foundation) and Nalin (iBus Sharda Project).

Mentors group photo

I began to worry because almost everyone showed up with Windows machines initially, and I had planned intensive exercises with no time allocated for setting up a Linux distribution. However, it wouldn’t have made a lot of sense to dive into programming activity when students were new to the idea of a distribution, the command line and installing packages. That’s why I decided to dedicate a whole lot of time to explaining all of those things with patience; from my experience, folks eventually quit once they get back home because they couldn’t set up their development environment. At least I got to distribute some fresh Fedora 21 DVDs that way ;)

Kids happy with their DVDs

Half of the first night was spent explaining software philosophy, what it means for a project to be FOSS, what it means to be part of a community - that kind of thing - after which I had students install the packages required for the rest of the event. I followed it up with an extensive workshop on Git. Most of them picked it up rather well. I would have gone further and explained collaboration over GitHub and the general workflow, but they seemed too sleepy for another hour of devspeak. 5am!

By this time, I realized that the goals I had set weren’t going to be met, so I made a change in plan. Originally, I had thought I’d introduce them to Python and Flask while picking them up myself (since that’s the stack used in most of Fedora’s infra projects), but this was a complete newbie crowd, so I stuck with what I’m comfortable with. After spending time collaborating over GitHub on some projects we started, I had the students pick up Ruby the second night. I explained the concept of programming libraries, how they’re organized and shared, and how they’re hackable. I showed them how a Ruby library I once wrote would solve one of their screening-process problems. The second day got me wondering what it’d have been like to have had a mentor when I got started, because I remember installing and understanding RVM/Ruby the first time took me two weeks (these kids had it set up in minutes). It wasn’t until GlitterGallery that I tried it again!

Whiteboard Musings

On the way from the airport to the Uni, I thought I’d showcase Shonku, but for the same reasons as I stuck with Ruby, I chose Jekyll. I was a little furious when I learned I’d even have to explain what a blog is, but given that everyone had a Jekyll blog running in a couple of hours, complete with some theme-hacks, I’d guess it was worthwhile.

Happy about the productive second night, I spent the following afternoon arranging a cake for the release party. I was disappointed that most of the major Chennai cake shops didn’t have colors other than pink and green; I definitely didn’t want a Fedora cake with the wrong colors! As a result, I had to overshoot the requested budget by a few dollars, but I landed a nice one from Cakewalk, complete with a photoprint. Samosas and juice were courtesy of IITM.


The last night was the Release Party and final night. All of us mentors got together in the larger lab to talk about things that were common across any community. I explained to the students what IRC is, had them lurk around our channels for a bit (and make a complete mess!), and showed them what it means to write proper emails to a mailing list (no top posting, etc). I did a brief introduction to and what it means to the community.

Speaking about

We had an exchange of thoughts, people shared their experiences getting to know about Free Software projects, and the overall atmosphere was pleasant. Our Fedora group left to our meeting room, where I had everyone create a FAS account, showed them around some of our wikipages, and provided them with tips on getting involved better. Finally, in a hope to get them started with Rails, I started talking about designing databases, how APIs talk to each other, and how web apps are structured in general. Well, we did end up cloning GG and setting it up, but I can’t tell how much of that they really understood ;)

All in all, good fun.

Students: friendly group photo

(Thanks to Abhishek Ahuja for the great photos).

December 23, 2014

NSRegularExpression Notes

I spent a while today trying to convert a regular expression from Ruby to NSRegularExpression. It was being dumb and took me a while to figure out.

The main thing is NSRegularExpression's options. By default, Ruby has AnchorsMatchLines on and NSRegularExpression doesn't. I simply turned that on and had good luck.

Here's my specific case (Jekyll front-matter):




NSRegularExpression(pattern: "\\A(---\\s*\\n.*?\\n?)^(---\\s*$\\n?)", options: .DotMatchesLineSeparators | .AnchorsMatchLines, error: nil)!  
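The same two flags can be illustrated with Python's `re` module, where `re.DOTALL` corresponds to `.DotMatchesLineSeparators` and `re.MULTILINE` to `.AnchorsMatchLines`. This is just an illustrative sketch of the flag behavior, not the original Swift code:

```python
import re

# Jekyll front matter: an opening "---" fence at the very start of the
# file, arbitrary lines, then a closing "---" anchored at line start.
# Without re.MULTILINE, ^ only matches at the start of the whole string
# and the closing fence is never found.
FRONT_MATTER = re.compile(r"\A(---\s*\n.*?\n?)^(---\s*$\n?)",
                          re.DOTALL | re.MULTILINE)

post = "---\ntitle: Hello\n---\nBody text\n"
match = FRONT_MATTER.match(post)
```

With both flags on, `match.group(1)` holds the front matter body and `match.group(2)` the closing fence; drop `re.MULTILINE` and the pattern fails to match entirely, which mirrors the Ruby-vs-NSRegularExpression difference above.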

December 05, 2014

Personal Sam
Personal Sam

After watching Particle Fever, I got inspired to do a daily video journal thing. Particle Fever is a documentary about the Large Hadron Collider (which is super interesting). They had video from some of the scientists’ daily video journals over the years of working on it. It was really cool to watch.

I got inspired and thought it would be fun to do my own. Not that any of my work is anywhere near as meaningful as theirs, but it’s still fun to just do it. I’ve found it’s really enjoyable to summarize what I’m doing each day. I was surprised how much it affected my focus day to day.

Personal Sam is named after a Twitter account I used to have. My friend Aaron Marshall, Over’s founder, used to have @personalaaron. It was just him complaining about his boss and whatnot. I thought it was awesome and started @personalsam. It was mainly me whining about girls and how emo my life was back in 2008. Anyway, it seemed like a fitting name for my podcast thing and I already had the domain.


I usually shoot it in QuickTime Player on my iMac in the mornings and upload straight to Vimeo from there. I always do it in one take and just hit share. On Vimeo’s website, I fill out all of the metadata. From there, all I have to do is add that video’s ID to the little Rails app I made for it.


The web app hits Vimeo’s API and pulls down all of the metadata to populate the podcast feed. It’s all very simple. The download links hit a proxy service I wrote. That just tells Mixpanel that someone watched the video and redirects to the actual file.


It’s funny to me that people actually watch it. I started it just for me. It does feel pretty good that people care enough about the boring parts of my life to watch me talk at my computer. A nice side effect of doing this is I’m more motivated to get up early and get it out of the way. Anyway, if you’re interested, check out

November 17, 2014

Fedora comes to University - Coimbatore Contribution Camp Report

At my University in Coimbatore, we run a tech{know}logy club, where we try to talk about interesting things in technology that normally aren’t covered in the classroom. We had a set of freshers join our club in August through the induction program. On Software Freedom Day in September, they were introduced to the idea of FOSS, open source communities and how it’s possible to contribute to them. When I went to Hanoi for our Ambassadors meeting, I decided to host a contribution camp in Uni sometime this year. Here’s the wikipage which has all the essential bits.

Background week

My friend Manjush and awesome (fresher) junior Sachin did a great job gathering a bunch of interested freshers and other students in our digital library for a half week before the camp took place. On the first day, they helped with installing Fedora (and other distributions of choice) onto the participants’ computers. They spent another day explaining what packages are and helping install the important ones. I showed up for the last two days and helped with Git and Jekyll.

Day One - Thursday

All of us agreed that the best way to motivate folks towards the camp was to screen a movie at our Auditorium. We were expecting 70, but were delighted to be able to host 180 students for Internet’s Own Boy: The Story of Aaron Swartz.

Movie Screening Poster, courtesy Anirudh Menon

For those initiated into the world of Free Software, the Startup Community, and DRM-oriented arguments, the movie was a reminder of Swartz and the role he played in shaping part of our world. For the rest, they got to hear about terminologies, ideas and people they could later go and Google. Overall, the silence in the hall towards the end of the movie touched me. We invited everyone to join us for Day Two, and many did.

Day Two - Friday

We wanted to be less theoretical, so we structured our sessions that way. We expected 40 people, 45 showed up. I think if not for Google Club’s mindless discussion about landing jobs by marketing Google products, we would have had a larger attendance. Abhishek Ahuja started the day, speaking about FOSS in general - what it is, why bother, how it affects him. He followed it up with FOSS alternatives to popular software.

Ahuja talking about an interesting GIMP plugin he once discovered

Sachin went next; he provided a rather neat introduction to the popular GNU/Linux distributions and the history/community behind them. One interesting thing he did was talk about desktop environments - something people get to hear about often but don’t really understand. From what I could tell, the audience was confusing distributions and desktop environments.

Sachin presenting various desktop environments

I’m actually quite proud of those who attended - the sessions were held after class because we didn’t have much choice, and it’s tiring for freshers who have to wake up early for Yoga classes and walk all the way to our session hall. I didn’t want them to sit and listen to us in hunger, so I arranged for snacks with the money I had asked Tuan to allocate for this event. Anyway, after a quick break, we were back to the sessions.

I’ve seen Manjush try the most distributions, so we had him speak about his timeline of the various GNU/Linux distributions he tried. At least to me who’s spent enough time doing tech support for my peers with respect to installing distributions, it was an entertaining talk. He spoke about problems with installation, problems with lack of language support, problems with community, problems with bundled software, problems with licenses and every other kind of problem one can think of. I was proud when he said he eventually settled on Fedora since it gave him everything he wanted.

Manjush talking about his difficulties with distributions he previously used

I did the last session: Fedora A-Z Guide (partially due to time constraints). Now our University presents us with some hurdles: freshers don’t get to use laptops, lab usage is fairly restricted, and girls can’t hang out past 7.30pm (after which anything we want to do happens). So I tried to pick the non-technical areas, or areas with less technical intensity, while making sure they had the opportunity to participate over their smartphones. I explained how Wikipedia is everyone’s encyclopaedia, and how they can host their own. Through this, I tried to excite them about the power of a collaborative community, and how they can start contributing with whatever existing skills they have. Some students seem to have gone back home and edited a few Wikipedia pages as well :)

Yours truly running through the A-Z guide

Day Three - Monday

Come the final day, we had a new set of faces. The attendance was 40. The demand seemed to be for the Fedora A-Z guide, so I went over it once more, this time covering fewer topics, but with more depth. For example, I showed them the badges project, traced a badge to the trac and showed them how the badges are designed and how they evolve. That seems to have left them pretty amused, because I met at least 3 people who said they’d like to contribute to badges.

Next up, I went over the hands-on bits from my FOSS 101 workshop at FOSSASIA Phnom Penh and SFD Hanoi. We had a brief look at Fedora and Mozilla’s contribute pages, OpenHatch and CodeTriage. I explained how we communicate - mailing lists, blogs, issue pages, IRC. I explained the etiquette to follow when interacting with a community. It looked like a lot of people related to the usage of SMS lingo and hyper-exclamations (sigh, teens) - I got to see a lot of giggling and smiling around.

Good Procrastination and Bad Procrastination

After the usual snack break, it was time for my final presentation. When I asked a faculty member for feedback on Day Two, he felt we were getting a little too technical for the freshers, and that we should do a funny/inspiring session. So I did one called “Good Nervous and Bad Nervous”, and it pretty much rocked :) I brought up lots of experiences from my personal life, and what I learned from the little things my friends in the Fedora and FOSS community taught me through their words and actions. I look forward to polishing it and doing the talk again sometime, or maybe even blogging it.

So.. that’s most of our camp, and we’re meeting again this evening to help people with any problems they have in getting started. I’ll be running a survey for the attendees later this week, and if the results seem interesting, I’ll share them.

Closing Notes & Thanks

  1. Although I’m excited about the enthusiasm everyone displayed, I wish the overall technical aptitude of the attendees was higher. I have another semester left here, I’ll try my best to fix that.
  2. I’ve started a reimbursement request on the apac trac (#161) for the food - I’ll upload bills and supply reports today.
  3. I’ve run out of swag now, so I need to figure out something before my Fedora Project workshops at IIT Madras in early Jan
  4. Thanks to: Manjush, who kickstarted the sessions the week prior to the camp. Sachin, our wizard first year who helped out pretty much everywhere. Proud of you! The University, for not making the permission process too much of a hassle. Everyone who attended, spoke or blogged.

November 12, 2014

Developments with Fedora Join Landing

If you went to Flock 2014 or attended it remotely, you’d remember I did a talk called Curious Case of Fedora Freshmen. One of the concerns raised in the session revolved around our current Join pages. Here’s the contribute wikipage, the first thing you’d land on if you ran a Google search for Fedora Contribute. Then there’s the Join Page, which directs you to various sections within the Join wikipage. For most of us who’re already involved with the project, these resources may seem like good enough guidelines for onboarding a newcomer, making efforts towards a new Join experience seem superfluous.

Over the course of the last year or so, I ended up participating in several meetups at my University, geared at getting people onboarded into FOSS communities. Out of curiosity, I’ve spent time researching what the first steps look like for several other communities. Here’s Mozilla’s, my current favorite. Here’s Drupal’s. Here’s Wikimedia’s.

Our Join process could use some improvement. Inspired by some of the links I pointed out earlier, Gnokii’s slides on contributing to Fedora as a non-programmer, and my own experiences dealing with juniors in college and interested folks I meet at conferences, I’ve decided to go ahead and work on building a new onboarding point for people who’d like to contribute to the Fedora Project. To start with, I have a couple of mockups for such a website, viewed on mobile. I also managed to find two enthusiastic students, whom I’m mentoring through the development process. We had a super quick meeting the other day to spark things off.

First Page Second Page

I wanted this post to serve as a notifier to the community, so in case you have any ideas/suggestions, feel free to add them into the comments section! :) I’ll try to keep sharing updates as the project progresses.

November 10, 2014

Questions — Part 2

I was recently trading a few emails with someone asking about working on projects you’re passionate about full-time. Thought it would be good to answer them publicly. Here we go:

#1 Have you found that you really can make a living just by working on projects that you’re interested in?

Sadly, no. I still do contract work to pay the bills. When I’m working on my stuff full-time, it’s on money I’ve saved up from clients. I someday hope to live solely off of income from my projects.

This has definitely caused a shift in my thinking about side projects. I now mainly focus on stuff that could make money instead of random things for fun. I still enjoy what I work on, but it’s a more focused approach.

#2 What do you think about trying to do contract work part time (to pay the bills) while working on your personal projects with your other time. Or do you think that personal project businesses require full attention?

This is what I do. I’m really bad at doing more than one thing at a time. I never book more than one client at a time anymore. It’s too stressful to manage everyone’s needs. If I do work on stuff on the side while I’m working on a client, it’s nights and weekends when I feel like it.

After I have enough saved up, I switch to my own projects full-time which is great. I have to treat it like work and do it no matter what. I have to work too hard to get this time so I respect it a lot.

#3 Have you found more success by putting all your effort into one larger project or splitting your time between many smaller projects?

Regarding side projects, definitely. Focus is a good skill to have. Pouring all of your passion into one thing instead of a bunch of little things makes a huge difference.

This decision is somewhat motivated from wanting to optimize for future income. I still think I produce better work this way anyway.

If you want to ask something about whatever, email and I’ll try to get back to you. Hopefully these are helpful to some folks.

November 04, 2014

Value of Beta

I thought I had decided against doing betas of software with more than just close friends. A few friends assured me that most feedback would be useless. Their point was that most just wanted to get it early to feel cool but didn’t actually use it or send feedback. I can definitely say for iOS betas in the past, this has been my experience as well.

The Whiskey beta has been great. I still have a huge amount of things to build. My list was a little overwhelming. On top of the things that still need to be built, there were lots of little bugs needing attention that I’d been putting off. No one likes to fix bugs.

Getting lots of email and tweets from people saying they love it and can see its potential is huge. People actually seeing it is good motivation. It also helps get me excited to fix little bugs. For example, several people reported this one thing that took me a minute to fix. I had just been forgetting about it because it wasn’t something I used a lot personally.

I’m a fan of showing your work early. I wish I would have done it sooner. Now I just need to finish this thing and ship it!

Check out the Whiskey beta, this markdown text editor I’m working on for Mac and iOS.

November 02, 2014

ReFS Part II: May the Resilience Be With You

Not too long after the last post, it became apparent that the disk I was analyzing wasn't a valid filesystem. Possibly due to a transfer error, several bytes were missing, resulting in disk structures that weren't aligned with the addresses where they should've resided.

After generating a new image I was able to make a lot of headway on the analysis. To start off, a valid metadata page address became immediately apparent on page 0x1e. Recall that page 0x1e is the first metadata page, residing at a fixed / known location after the start of the partition:

  bytes 0xA0-A7: 90 19 00 00 00 00 00 00
        0xA8-AF: 67 31 01 00 00 00 00 00

Pages 0x1990 and 0x13167 are valid metadata pages containing similar contents. Most likely one is a backup of the other; assume the first (0x1990) is the primary copy.

Note this address appears at byte 0xA0 on page 0x1E. Byte 0xA0 is referenced earlier on in the page:

  byte 0x50: A0 00 00 00  02 00 00 00  B0 00 00 00  18 00 00 00

So it is possible that this page address is not stored at a static location but at an offset referenced earlier in the page.
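A minimal sketch of reading those two qwords, assuming little-endian byte order and a 0x4000-byte page size (both inferences from this analysis, not from any published spec):

```python
import struct

PAGE_SIZE = 0x4000  # assumed page size for this image

def metadata_page_addrs(image, page_id=0x1E):
    """Read the primary/backup metadata page addresses stored at
    bytes 0xA0 and 0xA8 of the given page, as little-endian qwords."""
    base = page_id * PAGE_SIZE
    return struct.unpack_from("<QQ", image, base + 0xA0)
```

On the image above this returns the pair (0x1990, 0x13167).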

The System Table

The word 6 value (previously referred to as the virtual page number) of page 0x1990 is '0', indicating this is a critical table. Let's call this the System Table, for the reasons found below.

This page contains 6 rows of 24-byte entries, each containing a valid metadata page id, some flags, and a 16-byte unique id/checksum of some sort.

Early on in the page the table header resides:

  byte 0x58: 06 00 00 00   98 00 00 00
             B0 00 00 00   C8 00 00 00
             E0 00 00 00   F8 00 00 00
             10 01 00 00

06 is the number of records and each dword after this contains the offset from the very start of the page to each table record:

  table offsets: 98, B0, C8, E0, F8, 110
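Parsing that header is straightforward; here is a hedged Python sketch, assuming the record count is a little-endian dword at 0x58 followed by one dword offset per record:

```python
import struct

def parse_table_offsets(page, header_off=0x58):
    """Return the list of record offsets (relative to the start of
    the page) from a table header: a dword count, then count dwords."""
    (count,) = struct.unpack_from("<I", page, header_off)
    return list(struct.unpack_from("<%dI" % count, page, header_off + 4))
```

Run against the hex dump above, this yields the six offsets 0x98, 0xB0, 0xC8, 0xE0, 0xF8, 0x110.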

Each table record has a page id, flags, and some other unique qword of some sort (perhaps an object id or checksum):

  page ids:      corresponding virtual page id values:
    2c2               2
     22               E
     28               D
     29               C
    2c8               1
    2c5               3

These correspond to the latest revisions of the critical system pages highlighted in previous analysis.


We've previously established that Virtual Page 0x2 contains the object table, and upon further examination of the keys (object ids) and values (page ids) we see object 0x0000000000006000000000000000000 is the root directory (this is consistent across images).

The format of a directory page varies depending on its type. Like all metadata pages, the first 0x30 bytes contain the page metadata. This is followed by an attribute of unknown purpose (it seems to be related to the page's contents, perhaps a generic bucket / container descriptor).

This is followed by the table header attribute, 0x20 bytes in length.

This attribute contains:

  • bytes 0x4-0x7: the total length of the table. Note this length includes this attribute, so 0x20 should be subtracted before parsing
  • 0xC-0xD: flags, which seem to indicate the intent of the table

Table Type Flags:

  • 00 02 - directory list
  • 01 03 - b+ tree

Table records here work like any other table, consisting of:

  • the length of the record, (4 bytes)
  • offset to the key, (2 bytes)
  • length of the key, (2 bytes)
  • flags, (2 bytes)
  • offset to the value, (2 bytes)
  • length of the value (2 bytes)
  • padding (2 bytes)

The semantics of the record values differ depending on the table type.
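The 16-byte record header above maps naturally onto a struct format string. A sketch, with little-endian byte order assumed and key/value offsets taken as relative to the start of the record:

```python
import struct

def parse_record(table, rec_off):
    """Parse one generic table record: length (dword), key offset,
    key length, flags, value offset, value length, padding (words)."""
    length, key_off, key_len, flags, val_off, val_len, _pad = \
        struct.unpack_from("<IHHHHHH", table, rec_off)
    key = table[rec_off + key_off : rec_off + key_off + key_len]
    value = table[rec_off + val_off : rec_off + val_off + val_len]
    return length, flags, key, value
```

This layout is consistent with the file-table record shown later in this post, whose header decodes to length 0x180, key at offset 0x10 of length 0x0E, flags 0x8, and value at offset 0x20 of length 0x160.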

Directory lists contain:

  • keys: file names
  • values: file tables containing file timestamps and data pages

B+ trees contain:

  • keys: b+ node id (file names)
  • values: directory pages

When iterating over directory list records, the record flags seem to indicate record context. A value of '4' stored in the record flags seems to indicate a historical / old entry, for example an old directory name before it was renamed (eg 'New Folder'). The files / directories we are interested in contain '0' or '8' in the record flags.

The intent of each matching directory list record can be further deduced by the first 4 bytes in its key which may be:

  0x00000010 - directory information
  0x00020030 - subdirectory - name will be the rest of the key
  0x00010030 - file - name will be the rest of the key
  0x80000020 - ???
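Those key-type values could be mapped in code like so. This sketch makes two assumptions beyond the text: the type is read as a little-endian dword, and names are encoded as UTF-16LE (typical of Microsoft filesystems, but not verified here):

```python
import struct

# Observed key-type values; 0x80000020 remains unidentified.
KEY_TYPES = {
    0x00000010: "directory information",
    0x00020030: "subdirectory",
    0x00010030: "file",
}

def classify_key(key):
    """Classify a directory-list record key and extract the name
    (the rest of the key), assuming UTF-16LE encoding."""
    (key_type,) = struct.unpack_from("<I", key, 0)
    name = key[4:].decode("utf-16-le", errors="ignore")
    return KEY_TYPES.get(key_type, "unknown"), name
```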

In the case of subdirectories, the first 16 bytes of the record value will contain the directory object id. The object table can be used to look this up to access its page.

For B+ trees the record values will contain the ids of pages containing directory records (and possibly more B+ levels though I didn't verify this). Full filesystem traversal can be implemented by iterating over the root tree, subdirs, and file records.

File Tables

File metadata is stored as a table embedded directly into the directory table which the file is under.

Each file table always starts with an attribute of length 0xA8 containing the file timestamps (4 qwords starting at byte 0x28 of this attribute) and the file length (starting at byte 0x68 of this attribute).

Note the actual units of time which the timestamps represent are still unknown.
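A sketch of pulling those fields out of the first attribute, assuming little-endian qwords; since the timestamp units are unknown, the raw values are returned untouched:

```python
import struct

def parse_file_attr0(attr):
    """First attribute of a file table (0xA8 bytes): four timestamp
    qwords at offset 0x28 and the file length at offset 0x68."""
    timestamps = struct.unpack_from("<4Q", attr, 0x28)
    (length,) = struct.unpack_from("<Q", attr, 0x68)
    return timestamps, length
```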

After this there exist several related metadata attributes.

The second attribute (starting at byte 0xA8 of the file table):

  20 00 00 00 # length of this record
  A0 01 00 00 # length of this record + next record
  D4 00 00 00 # amount of padding after next record
  00 02 00 00 # table type / flags ?
  74 02 00 00 # next 'insert' address ?
  01 00 00 00 # number of records ?
  78 02 00 00 # offset to padding
  00 00 00 00

The next record looks like a standard table record as we've seen before:

  80 01 00 00 # length of this record, note this equals 2nd dword value of last record minus 0x20
  10 00 0E 00 # offset to key / key length
  08 00 20 00 # flags / offset to value
  60 01 00 00 # value length / padding

The key of this record starts at 0x10 of this attribute and is 0x0E length:

  60 01 00 00
  00 00 00 00
  80 00 00 00
  00 00 00

The value starts at attribute offset 0x20 and is of length 0x160. This value contains yet another embedded attribute:

  88 00 00 00 # length of attribute
  28 00 01 00
  01 00 00 00
  20 01 00 00
  20 01 00 00
  02 00 00 00
  00 00 00 00
  00 00 00 00
  00 00 00 00
  00 00 00 00
  01 00 00 00
  00 00 00 00
  00 00 00 00
  00 00 03 00
  00 00 00 00
  2C 05 02 00 # length of the file
  00 00 00 00
  2C 05 02 00 # length of the file
  00 00 00 00
  # 0's for the rest of this attribute

The file length is represented twice in this attribute (perhaps the allocated & actual lengths).

The next attribute is as follows:

  20 00 00 00 # length of attribute
  50 00 00 00 # length of this attribute + length of next attribute
  84 00 00 00 # amount of padding after this attribute
  00 02 00 00 # ?
  D4 00 00 00 # next insert address
  01 00 00 00 # ?
  D8 00 00 00 # offset to padding
  00 00 00 00

The format of this attribute looks similar to the second in the file (see above) and seems to contain information about the next record(s), perhaps related to the 'bucket' concept discussed here.

At first glance the next attribute looks like another standard record, but the key and value offsets are the same. This attribute contains the starting page # of the file content:

 30 00 00 00 # length of this record
 10 00 10 00 # key offset / length ?
 00 00 10 00 # flags / value offset ?
 20 00 00 00 # value length / padding ? 
 00 00 00 00
 00 00 00 00
 0C 00 00 00
 00 00 00 00
 D8 01 00 00 # starting page of the file
 00 00 00 00
 00 00 00 08
 00 00 00 00 

For larger files there are more records following this attribute, each of 0x30 length, with the same record header. Many of the values contain the pages holding the file contents, though only some have the same format as the one above.

Other records may correspond to compressed / sparse attributes and have a different format.

The remainder of this attribute is zero and closes out the third attribute in the file record.
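
A sketch of walking these records, keeping only those that match the plain 0x30-byte layout above. The compressed / sparse variants have an unknown format and are simply skipped:

```python
import struct

def content_pages(buf, pos, count):
    # Walk `count` 0x30-byte records starting at `pos`, collecting the
    # starting page number stored at offset 0x20 of each record.
    pages = []
    for i in range(count):
        base = pos + i * 0x30
        if struct.unpack_from('<I', buf, base)[0] != 0x30:
            continue  # different (compressed / sparse?) record format
        pages.append(struct.unpack_from('<I', buf, base + 0x20)[0])
    return pages
```
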

After this there is the amount of padding described by the second attribute in the file (see above) after which there are two more attributes of unknown purpose.


After investigation it seems the ReFS file system driver doesn't clear a page when copying / overwriting shadow pages. Old data was apparent after valid data on newer pages. Thus a parser cannot rely on zeroed-out regions to act as delimiters or end markers.

Using the above analysis I threw together a ReFS file lister that iterates over all directories and files from the root. It can be found on GitHub here.

Use it like so:

ruby rels.rb --image foo.image --offset 123456789

Next Steps

Besides verifying all of the above, the next major action items are to extract the pages / clusters containing file data as well as all file metadata.


October 13, 2014

New features in mock-1.2
You may have noticed there's been a new release of mock in rawhide (only). It incorporates all the new features I've been working on during my Google Summer of Code project, so I'd like to summarize them here for people who haven't been reading my blog. Note that there were some other new features that weren't implemented by me, so I don't mention them here. You can read more about the release at

LVM plugin
The usual way to cache an already initialized buildroot is using tarballs. Mock can now also use LVM as a backend for caching buildroots, which is a bit faster and enables efficient snapshotting (copy-on-write). This feature is intended for people who maintain a lot of packages and find themselves waiting for mock to install the same set of BuildRequires over and over again.
Mock uses LVM thin provisioning, which means that one logical volume (called a thinpool) can hold all the thin logical volumes and snapshots used by all buildroots (you have to set it up like that in the config) without each of them having a fixed size. The thinpool is created when mock starts initializing; after the buildroot is initialized, mock creates a postinit snapshot which will be used as the default. Default snapshot means that when you execute clean or start a new build without the --no-clean option, mock will roll back to the state in the default snapshot. As you install more packages you can create your own snapshots (usually for dependency chains that are common to many of your packages). I'm a Java packager and most of my packages BuildRequire maven-local, which pulls in 100MB worth of packages. Therefore I can install maven-local just once and then make a snapshot with
mock --snapshot maven
and then it will be used as the default snapshot to which --clean will roll back whenever I build another package. When I want to rebuild a package that doesn't use maven-local, I can use
mock --rollback-to postinit
and the initial snapshot will be used for the following builds. My maven snapshot will still exist, so I can get back to it later using --rollback-to maven. To get rid of it completely, I can use
mock --remove-snapshot maven
So how do you enable it?
The plugin is distributed as a separate subpackage, mock-lvm, because it pulls in additional dependencies which are not available on RHEL 6. So you first need to install it.
You need to specify a volume group which mock will use to create its thinpool. That means you need some unoccupied space in your volume group, so you'll probably need to shrink a partition a bit. Mock won't touch anything else in the VG, so don't be afraid to use the VG you have for your system. It won't eat your data, I promise. The config for enabling it will look like this:
config_opts['plugin_conf']['root_cache_enable'] = False
config_opts['plugin_conf']['lvm_root_enable'] = True
config_opts['plugin_conf']['lvm_root_opts'] = {
    'volume_group': 'my-volume-group',
    'size': '8G',
    'pool_name': 'mock',
}

To explain it: you need to disable the root cache, since having two caches with the same contents would just slow you down. You need to specify a size for the thinpool; it can be shared across all mock buildroots, so make sure it's big enough. Ideally there will be just one thinpool. Then specify a name for the thinpool: all configs which have the same pool_name will share the thinpool, thus being more space-efficient. Just make sure the name doesn't clash with existing volumes on your system (you can list existing volumes with the lvs command). For more configuration options for the LVM plugin, see the config documentation in /etc/mock/site-defaults.cfg.
Additional notes:
Mock leaves the volume mounted by default so you can easily access the data. To conveniently unmount it, there's a --umount command. To remove all volumes use --scrub lvm. This will also remove the thinpool, but only if no other configuration has its volumes there.
Make sure there's always enough space; a thinpool that overflows will stop working.

Nosync - better IO performance

One of the reasons why mock has always been quite slow is that installing a lot of packages generates heavy IO load. But the main bottleneck is not unpacking files from packages to disk, it's writing Yum DB entries. Yum DB access (used by both yum and dnf) generates a lot of fsync(2) calls. Those don't really make sense in mock because people generally don't try to recover mock buildroots after a hardware failure. We discovered that getting rid of fsync improves package installation speed by almost a factor of 4. Mikolaj Izdebski developed a small C library, nosync, that is LD_PRELOADed and replaces the fsync family of calls with (almost) empty implementations. I added support for it in mock.
How to activate it?
You need to install the nosync package (available in rawhide), and on multilib systems (x86_64) you need the version for both architectures. Then it can be enabled in mock by setting
config_opts['nosync'] = True
It requires those extra steps to set up, but it really pays off quickly.

DNF support
Mock now has support for using DNF as the package manager instead of Yum. To enable it, set
config_opts['package_manager'] = 'dnf'
You need to have dnf and dnf-plugins-core installed. There are also commandline switches --yum and --dnf which you can use to choose the package manager without altering the config. The reason for this is that DNF is not yet 100% mature and there may be situations where you'd need to revert back to Yum to install something.
You can specify a separate config for DNF with the dnf.conf config option. If you omit it, mock will use the configuration you have for Yum (config_opts['yum.conf']). To use yum-cache with DNF you have to explicitly set
in the dnf.conf or yum.conf config option.
Otherwise, it should behave the same in most situations and also be a bit faster.

Printing more useful output on terminal
Mock will now print the output of Yum/DNF and rpmbuild. It also uses a pseudoterminal to trick them into believing they're attached directly to a terminal, and so gets package-download output including the progress bars. That way you know whether it's downloading something or cannot connect. You need to have debuglevel=2 in your yum.conf for this to work.

Concurrent shell access to buildroot
Non-destructive operations use just a shared lock instead of an exclusive one. That means you can get a shell even though there's a build running. Please use it with caution so as not to alter the environment of the running build. Destructive operations like clean still need an exclusive lock.

Executing package management commands
Mock now has a switch --pm-cmd which you can use to execute an arbitrary Yum/DNF command. Example:
mock --pm-cmd clean metadata-expire
There are also --yum-cmd and --dnf-cmd aliases which force using particular package manager.

--enablerepo and --disablerepo options
These options are now passed to the package manager whenever mock invokes it. Now you can have a list of disabled repos in your mock config and enable them only when you need them.

--short-circuit option
rpmbuild has a --short-circuit option that can skip certain stages of the build. It can be very useful for debugging builds which fail in later stages. Mock now also has a --short-circuit option which leverages it. It accepts the name of the stage that will be the first one to be executed. Available stages are: build, install and binary. (prep stage is also possible, but I'm not the one who added that and I have no idea what it's supposed to do :D). Example:
mock --short-circuit install foo.1.2-fc22.src.rpm
rpmbuild arguments
You can specify arbitrary options that will be passed to rpmbuild with --rpmbuild-opts. Mainly for build debugging purposes.

Configurable executable paths
Mock now also supports specifying paths to the rpm, rpmbuild, yum, yum-builddep and dnf executables, so you can use versions other than the system-wide ones. This may be useful for Software Collections in the future.

Automatic initialization
You don't need to call --init; you can just do --rebuild and it will do the init for you. It will also correctly detect when the initialization didn't finish successfully and start over.

More thorough cleanup logic
There should be no more mounted volumes left behind after you interrupt a build with ^C. And if there are (e.g. because mock was killed), it should handle it without crashing.

Python 3 support
The main part of mock should be fully Python 3 compatible. Python 2 is still used by default. The unported parts are the LVM plugin and mockchain.

Unique-ext
This is a feature that has already been present for a few releases, but it seems only a few people know about it, so I'd like to mention it even though it's not new. I quite often find myself in a situation where I want to build a package with the same config, but there's some other build already running, so I cannot. A lot of people just copy the config and change the name of the chroot, but that means additional work and, most importantly, it cannot use the same caches as the original config, because mock sees them as something different. Unique-ext provides a better way. It's a commandline switch that adds a suffix to the chroot name, so mock creates a different chroot, but it uses the same config and in turn the same caches. The caching mechanisms provide locking to make this work. Using unique-ext with the LVM plugin means that the new chroot is based on the postinit snapshot. There's a lock that prevents the postinit snapshot from being unnecessarily initialized twice.

If you have any questions, ping me on #fedora-devel (nick msimacek)

October 08, 2014

Software Freedom Day Hanoi: Fedora Report

I was in Hanoi last month to participate in the APAC ambassadors meeting, as well as the Software Freedom Day event. This post summarizes notes from the trip.

APAC Ambassadors Meeting

On the first day, we had a meeting set up, to go through the current year’s budget, and discuss concerns with our respective countries. Tuan, Thang, Alick, Somvannda and yours truly were physically present. Gnokii, Kushal and Ankur participated for significant portions of time, remotely over IRC.

Fedora Folks posing at the SFD banner Photo: Unknown

We started with a general discussion about the APAC situation. Some (not sic) moments:


We have a lack of physical meetups among APAC folks.

Tuan was my roommate in Prague (at Flock), and we had a brief discussion about this. For most APAC meetings, at least until a few weeks ago, there would be very few representatives from Asian countries. When the budget was to be made for the current FY, Tuan announced it over the mailing lists, but nobody showed up.

We discussed how this situation is improving. In November this year, a FAD is planned where folks have been invited to help with the budget planning. The recent meetings have run over an hour and we regularly have irregular meetings these days ;) While that is indeed trouble, it indicates interest, which is a good thing.


We should stop people from treating Fedora as a travel agency.

For context, here’s a blogpost around the same concern. Many Indians just want to become ambassadors, because they think that warrants them funds to travel. It’s of course great if people have been contributing in volumes and want funds to travel and speak about it - in fact, that’s encouraged. But in the recent times, Kushal says he receives mentorship requests, where the person doesn’t want to go through the mentoring process and wants to gain ambassador status directly. Kushal quoted examples and how the mentors team in India dealt with it.

Us hard at work Photo: Somvannda

Next, we worked on the most important bit: the budget. Alick volunteered to review and update Q2 and I helped with Q1. Alick got lucky since most events planned in Q2 were cancelled and there wasn’t much to review. Thang helped me cross-check events, swag requests and travel tickets from Q1.

After a lunch break, we turned to discuss Ambassador Polos and FAD Phnom Penh. I had been working on cleaning up the entire APAC trac for two weeks, but was unable to complete it because people hardly respond. Finally, at the meeting, with help from everyone present, the APAC trac is now Sparkly!

Software Freedom Day at the Uni

This was my second SFD, first being the one I helped organize in school. The way this one was organized was definitely more colorful - it started off with a Tux dance!

Alick had some swag flown over from China, so we used them up at our booth - it disappeared quickly, even before we had a chance to grab some of the folks and do some Fedora preaching. Nonetheless, it was super fun. I think we managed to direct some of the students to our Fedora room for the afternoon sessions. With the swag all gone and not much agenda for the rest of the morning, we headed to the main hall.

Sponsors being felicitated Photo: Alick

I’m going to have to quote the following line from Alick’s report:


Sarup, Somvannda, and I are honored to be introduced as special international guests to the event (in English).

It was funny (although exciting) to attend the first few talks in the regional language. Well, we even attended the “How to contribute to Fedora without programming skills” keynote by Tuan in Vietnamese ;)

Come afternoon, we moved to our Fedora room. While Trang and I went around gathering folks to attend our sessions, Thang introduced the attendees to the Fedora Project - who we are, what we do, our goals, and why bother. He did his session in Vietnamese, and the attendees were visibly glued.

Next was Alick’s session on FOSS Software Defined Radio. I think he did a great job introducing the topic - it was a topic unfamiliar to me, but now I get the basics. I liked his idea of motivating through examples.

Finally, I did my mini workshop on FOSS 101. Prior to the event, we had a little debate around what I should talk about - GSoC? Git? Rails? From my understanding of the audience, I decided to do a diluted version of my FOSSASIA workshop. I introduced attendees to the idea of FOSS, put up quotes sent to me by Sumanah and Tatica (who I’ve always felt are great examples of our awesome lady FOSS activists) and showed them around IRC & the idea of mailing lists. I wrapped up with a basic introduction to Git (for which I should thank Alick for his help with the demos and Trang for the translation).


Day 0 was 18 September 2014. I was put up at the Hanoi Legacy Hotel near the Hoan Kiem lake. My roommate was Alick, who arrived later in the evening. Somvannda had been at the hotel since a day prior. Tuan and Thang, being the locals, were our awesome hosts. For dinner on all but the last day, we had street food near the hotel. On the last day, we had dinner with the VFOSSA folks, other organizers and volunteers.

The meeting was held on the first day, 19 September, at the VAIP office. The SFD event was held on the second day, 20 September at Hanoi University of Engineering and Technology.

Fun Memories

As you would guess, we had fun along the way! On Day 0, Somvannda and I went around Hanoi’s streets hunting for Egg Coffee.

For dinner, Tuan and Thang took the rest of us to a nearby food joint, where we tried out some rather interesting Vietnamese food. I (kinda) picked up how to use a chopstick too.

Newly acquired chopstick skills Photo: Me

On Day 1, after the meeting was over, we headed to the Water Puppet Theater - a unique concept. For dinner, we roamed the street for local food, followed by a brief trip to the Night Market in the Hanoi Old Quarter. I wish we could have revisited the place on the final day as well, but we couldn’t as the events ended late.

On the final day, we were joined by the awesome (hopefully significant future contributors) Trang and Phuong. Trang made us try “Corn in Fish Sauce” and we wrapped up with the usual beer :-)

It was definitely a weekend well spent and I’d like to thank everyone for the fun and productive time!


September 28, 2014

Four Questions

I recently got an email from a college sophomore who had some questions about getting started. I asked him if it would be okay to answer publicly and he was for it.

#1 How did you begin programming?

I started “programming” for the first time when I was 10 years old. My mom took me to an HTML class our local ISP was offering for free. I thought it was amazing you could type some stuff and make visual stuff happen. Started writing HTML in all of my free time in Notepad on our white Dell tower.

My first actual programming was in ActionScript when I was 13 in a Flash 5 demo. That was my first if statement. Shortly after, I started dabbling in PHP and took a C++ class in sophomore year of high school. That said, we didn’t use any of the ++ parts. Didn’t learn about object-oriented programming until the following summer.

A friend and I drove down to Atlanta, GA from Louisville, KY for an Apple tech talk on what was new in Mac OS X. We went purely as fans. I didn’t even know what Objective-C was at the time. After seeing all of the new stuff they added in Xcode (like a better debugger, etc.) I went back to the hotel room and purchased my first item on Amazon: Mac OS X Programming by Aaron Hillegass. I spent the rest of the summer working on a Mac app. So, somewhere in there was the real start.

#2 What advice can you give to someone just starting out like me?

This junk is hard. Don’t give up. Basically anything is possible if you just do it. The most important thing I learned in that C++ class was resourcefulness. Our teacher was a really great guy. He mainly taught Computer I (how to use Windows, Office, etc.) but wanted to offer programming. He took a class the previous summer and was excited to teach us.

He would work a few chapters ahead in his free time and then explain it to us in class. Didn’t take long for the few that tried in the class to get things quicker than he did. Whenever we’d get stuck though, he’d sit with us and try to help us figure it out. I remember the first time we got a linker error very vividly.

“What’s a linker error?” “I have no idea. Let’s try this:”

Then he copy and pasted the error into Google. Clicked the first result and there was our solution. Next time I got stuck with some mysterious error or wanted to do something I didn’t know how to do, I tried googling it. It turns out there is a lot of information on the Internet. This is basically how I learned everything with the exception of that first book I bought. You just have to get started.

Still to this day, my attitude is “sure, I can figure this out.” Being resourceful instead of giving up when you get stuck is the most important thing you can do. I think this is what makes the difference between a good programmer and a great programmer (resourcefulness and the attitude that you can figure it out, that is).

#3 You worked on the Bible app!? I use it every day!

Cool to hear. Bible was my first iPhone app. I got hired at a mega-church in Oklahoma to do PHP development. I moved from where I grew up in Kentucky to Oklahoma in December right after I graduated high school. This was 2007. iPhone came out after I graduated.

When Steve announced the iPhone SDK in March, I said to my boss, “We should make an app.” “Let’s make the Bible,” he replied. So I spent all of my time on the app until the App Store launched that July.

It was actually called “YouVersion” for a while. I vividly remember typing the name into iTunes Connect, my boss and his boss standing behind me. “Hey, let’s see if Bible is taken.” backspace Then I typed it in and it never changed. Really cool memory.

Turns out this whole app thing was a big deal. At the time, we had no idea. We quickly reached 100,000 downloads. For a church, that is more people than they can ever hope to interact with. I got to work on it for a bit longer since it was doing so well. The last release I released was version 1.8. That came out a few months before iPhone OS 2.0 was announced.

After that, I spent a ton of time working on the supporting website for the app. The little PHP script I threw together for the iPhone app was the first API I ever wrote, too, in the shift to working on the website again. A few months later, I left to become a freelancer for the first time.

#4 What is life like as a freelancer?

It has its pros and cons. Most of the time, I love it. Toward the end of projects, I often hate it. Having flexible hours, getting paid a lot, and choosing what you work on is great. Clients are the worst sometimes though.

I’ve had the luxury of not having to worry about finding clients that much. When I first made the transition, it was pretty scary. A client approached me and asked if I wanted to help with their project. Funny enough, it was a PHP project. (Thankfully, the last PHP project I ever worked on.) They agreed to buy 100 hours of my time and paid up front. Got the check before my last paycheck at my full-time job. At the time, I was charging $125/hour. $12,500 was a huge amount of money to me at the time. My mortgage was only $740/month back then. (By the way, buying a house at 19 was cool. Only having it for 6 months was financially the worst thing ever.) I remember cashing the check at a Chase on 2nd Street in Edmond, OK. Formidable moment.

Anyway, having a bunch of runway helped ease the stress. Got another two clients after that one and horribly mismanaged my time. Ended up being really stressful because I didn’t set good deadlines for myself.

Since then, I’ve made the transition from full-time to freelance a few times. Having some runway or clients lined up is the key. It’s easy enough to get some clients while you have full time employment to try it out. Learning how to manage projects is really important. Then you can just get a big enough project and quit.

I’ve been full-time freelance for over a year now this time around. The biggest lesson this time is making sure your clients know what they wanted going in. Clients not knowing what they want is the biggest source of frustration as a freelancer.

Final Thoughts

Hopefully some of that was useful. I could go on and on about freelance work. If you take anything away from this, it should be this: be resourceful. You can do it.

September 15, 2014

Don't stop at the summer project!

Note: this is work in progress. I’d like to improve this post to include universal opinions, so I’d appreciate any feedback!

Ouch, two summers with Fedora are over! As far as GlitterGallery news goes: Emily’s working on setting up a demo for the design team, the fantastic Paul is scrubbing up some final pre-release bugs, and more potential contributors are now showing up. As far as GSoC itself is concerned: Google’s sent over the money, the tshirt should be here soon enough, and it doesn’t look like any more formalities are pending. Time to pack, find a job, and say goodbye to friends at the Fedora project, right?


The other day, Kushal called me up and mentioned his concerns with students disappearing once their GSoC projects are over, and once they have their money. The experienced folks in most communities share the same disappointment. I couldn’t agree more, and promised to write about it, hence this post. I’m not sure who the target readers should be, but my best guess would be anyone aspiring to start contributing to a FLOSS project, especially students hoping to do GSoC next year :-)

Why bother contributing to a FLOSS project?

Let’s be done with the incentives first. Sure, there’s the geek-badge associated with it, and you’re helping make the world a better place. What other incentives do FLOSS communities offer? Here are the ones that attract me:

  • Something meaningful to work on: If you’re a student stuck in a place where they make you do things you aren’t motivated about (I hear jobs aren’t too different), then being involved in a community can make the time you spend meaningful. It doesn’t really have to be a FLOSS community, but in my case, it seems to have worked out well. I would rather feel awesome about having built a small piece of software that does something for me than about mugging up an outdated book on “Net Centric Programming”.

  • Jobs, money, opportunities: Depending on your case, you may not necessarily get paid, but typical FLOSS communities have participants from all over the world, so you get exposed to a lot of opportunities you wouldn’t hear of otherwise. Many of my professors think the idea of writing FLOSS is stupid. As a result, their understanding of opportunities is limited to campus placements. It doesn’t really have to be! I have come to learn that there’s an entire industry of people who land jobs just based on links to stuff they’ve worked on.

  • Friends around the world: It’s embarrassing that I didn’t know of a country by the name Czech Republic until about last December. Now I not only have friends from Cz who I speak to quite often, I actually was in Prague a month ago and even did a talk! My summer mentor is from the USA. My closest friend is a German. On a daily basis, I probably end up interacting with someone from each continent. It’s a lot of fun learning how things in other places work. If you’re from India like me, the idea of trains departing at times like 15:29 should impress you.

Why not contribute?

However much FLOSSy geeks will brag about their flawlessness, FLOSS communities aren’t for everyone. Some hints:

  • You need a certificate for showing up: I wish I could wrap two bold tags there. Please contribute only if you want to do it for the fun of it. Most people in any community exist because they want to improve or utilize a skill, not because they can stack up a bunch of certificates on their resume.

  • You need to be spoonfed: Unfortunately, as much as everyone would like to help new contributors to a project, showing people around takes time. Sure, we’re willing to put in an hour or two every week finding links and emailing you. But if you aren’t going to read them, and learn to find more links, then you’re making things difficult.

  • You need to be made ambassador first thing, just so you have a tshirt: Here’s the thing about Ambassador programs - they were created to provide structure for contributors to show off the awesome stuff they’re building. If you aren’t contributing, you need to do that first. Ideally, if there are incentives coming up (swag, flight tickets, whatever), they go to the active folks first. Of course there are exceptions once in a while when new people are encouraged with incentives because they seem promising, but that’s different. (I have had a junior ask me what organization offers the best perks so he could contribute there, and another one wanting to fly to a different continent at a community’s expense, because she wanted to attend a Django workshop.)

In my case, I got involved with the Fedora community through a design team project I ended up co-authoring. But I’d say it was just a starting point! I don’t have unlimited time thanks to University classes, but with what I have, I contribute where I can. It really doesn’t have to be limited to my project (although that’s where I focus my efforts) - it could be a random broken wiki page. These days I’m cleaning up expired requests on our request-tracking system. A while ago, I started with Inkscape and attempted logos. On other days I hang out on IRC channels geared at helping newbies. Even though Fedora infra doesn’t do ruby oriented projects, I sometimes hang out in their meetings to see what they’re up to. I don’t understand how Marketing works, so next I’m planning to give it a shot. Ultimately, the goal is to quickly pick up a skill, while improving Fedora as a community in whatever small way I can.

That’s something I’d request everyone to do. Being involved with a GSoC or a similar summer engagement is fun - you get to work on something large enough to be accountable for, while being small enough to pick up quickly. But try to look around - find projects that your project depends on. Fix them. Find projects that could use yours. Fix them. If they don’t exist, make them! I bet Kushal wants to convey the same message: just don’t stop with your project. A successful summer is a good thing - but if you’re simply going to disappear, then its purpose is defeated. You have to justify the time your mentor spent on you! :-)

On an ending note, how would you look for more areas to contribute? It’s simple - ask your mentor. Or just try to remember the inconvenience you had with library X compiling too slow. It was a good thing you overlooked it then because you had to keep track of the bigger picture. Now’s the time to return to it and fix it. Also, try to attend events relevant to what you’re working on. I’m really lucky Gnokii invited me to LGM in his country - I ended up finding another project to use within GlitterGallery, for a start.

There are almost always events happening around where you live. I’m in Coimbatore, which is relatively sleepy, but I travel to Bangalore about every month to participate in an event. If you find an event that could benefit from you, try and ask the organizers if you could be funded. Just don’t stop!

September 13, 2014

ReFS: All Your Resilience Are Belong To Us

(grammar intentional) For the last few months I've been looking into the Resilient File System (ReFS), which has only undergone limited analysis so far. Let's fix that, shall we?

Before we begin, I've found these to be the best existing public resources so far concerning the FS; they've helped streamline the investigation greatly.

[1] - Straight from the source, a msdn blog post on various concepts around the FS internals.

[2] - An extended analysis of the high level layout and data structures in ReFS. I've verified a lot of these findings against my image locally and expanded upon various points below. Aspects of the described memory structures can be seen in my local image as well.

[3] - Another good analysis, of particular interest is the ReFS / NTFS comparison graphic (here).

Note that in general it's good to be familiar w/ generic FS concepts and specific ones such as B+ trees and journaling.

Also familiarity w/ the NTFS filesystem helps.

Also note I'm not guaranteeing the accuracy of any of this; there could be mistakes in the data and/or algorithm analysis.

Volume / Partition Layout

The size of the image I analyzed was 92733440 bytes with the ReFS formatted partition starting at 0x2010000.

The first sector of this partition looks like:

byte 0x00: 00 00 00 52   65 46 53 00   00 00 00 00   00 00 00 00
byte 0x10: 46 53 52 53   00 02 12 E8   00 00 3E 01   00 00 00 00
byte 0x20: 00 02 00 00   80 00 00 00   01 02 00 00   0A 00 00 00
byte 0x30: 00 00 00 00   00 00 00 00   17 85 0A 9A   C4 0A 9A 32

Since presumably some size info needs to be here, it is possible that:

vbr bytes 0x20-0x23 : bytes per sector (0x0200)
vbr bytes 0x24-0x27 : sectors per cluster (0x0080)


1 sector = 0x200 bytes = 512 bytes
0x80 sectors/cluster * 0x200 bytes/sector = 0x10000 bytes/cluster = 65536 = 64KB/cluster

Clusters are broken down into pages which are 0x4000 bytes in size (see [2] for page id analysis).

In this case:

0x10000 (bytes / cluster) / 0x4000 (bytes/page) = 4 pages / cluster


0x4000 (bytes/page) / 0x200 (bytes/sector) = 0x20 = 32 sectors per page

VBR bytes 0-0x16 are the same for all the ReFS volumes I've seen.

This block is followed by 0's until the first page.
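Assuming those field guesses are right, extracting the geometry from the first sector takes only a few lines of Ruby (the field offsets are the guesses above, not a confirmed layout):

```ruby
# Field offsets below are the guesses from the analysis above, not a spec.
# Sample data: the first 0x30 bytes of the partition shown earlier.
hex = "00000052654653000000000000000000" +
      "46535253000212E800003E0100000000" +
      "0002000080000000010200000A000000"
vbr = [hex].pack('H*')

bytes_per_sector    = vbr[0x20, 4].unpack1('V') # little-endian u32 => 0x0200
sectors_per_cluster = vbr[0x24, 4].unpack1('V') # => 0x0080

bytes_per_cluster = bytes_per_sector * sectors_per_cluster # 0x10000 = 64KB
pages_per_cluster = bytes_per_cluster / 0x4000             # 4 pages per cluster
```

On the sample VBR this reproduces exactly the arithmetic worked out above: 512-byte sectors, 128 sectors per cluster, 64KB clusters, 4 pages per cluster.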


According to [1]:

"The roots of these allocators as well as that of the object table are reachable from a well-known location on the disk"

On the images I've seen the first page id always is 0x1e, starting 0x78000 bytes after the start of the partition.

Metadata pages all have a standard header which is 0x30 (48) bytes in length:

byte 0x00: XX XX 00 00   00 00 00 00   YY 00 00 00   00 00 00 00
byte 0x10: 00 00 00 00   00 00 00 00   ZZ ZZ 00 00   00 00 00 00
byte 0x20: 01 00 00 00   00 00 00 00   00 00 00 00   00 00 00 00

bytes 0/1 (XX XX) is the page id which is sequential and corresponds to the 0x4000 offset of the page
byte 2 (YY) is the sequence number
byte 0x18 (ZZ ZZ) is the virtual page number

The page id is unique for every page in the FS. The virtual page number will be the same between journal / shadow pages, though the sequence is incremented between those.
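A minimal Ruby sketch of pulling those header fields out, under the offset assumptions above:

```ruby
# Offsets per the header description above (guessed, not a published layout).
def parse_page_header(header)
  {
    page_id:      header[0x00, 2].unpack1('v'), # XX XX, little endian
    sequence:     header[0x08, 1].unpack1('C'), # YY
    virtual_page: header[0x18, 2].unpack1('v')  # ZZ ZZ
  }
end

# A fabricated example header: page id 0x20, sequence 2, virtual page 5.
hdr = ["20000000000000000200000000000000" +
       "00000000000000000500000000000000" +
       "01000000000000000000000000000000"].pack('H*')
parse_page_header(hdr)
```

Running a real dump through this should show the sequential page ids and the shared virtual page numbers between shadow copies described above.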

From there the root page has a structure which is still unknown (likely a tree root as described [1] and indicated by the memory structures page on [2]).

The 0x1f page is skipped before pages resume at 0x20 and follow a consistent format.

Page Layout / Tables

After the page header, metadata pages consist of entries prefixed with their length. The meaning of these entries varies and is largely unknown, but various fixed and relational byte values do show consistency and/or exhibit certain patterns.

To parse the entries (which might be referred to as records or attributes), one could:

  • parse the first 4 bytes following the page header to extract the first entry length
  • parse the remaining bytes from the entry (note the total length includes the first four bytes containing the length specification).
  • parse the next 4 bytes for the next entry length
  • repeat until the length is zero
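The steps above can be sketched in a few lines of Ruby (a minimal sketch; the 0x30 header offset and the self-inclusive length convention are this analysis's assumptions):

```ruby
# page is a raw metadata page; entries start right after the 0x30-byte
# header and each entry's length field counts itself.
# (Conventions per this analysis; unverified against any spec.)
def each_entry(page, offset = 0x30)
  loop do
    length_bytes = page[offset, 4]
    break if length_bytes.nil? || length_bytes.bytesize < 4
    length = length_bytes.unpack1('V')
    break if length.zero?
    yield page[offset, length] # full entry, including its length prefix
    offset += length
  end
end
```

Feeding it a synthetic page with two entries and a zero terminator yields the two entries in order, each still carrying its 4-byte length prefix.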

The four bytes following the length often take on one of two formats depending on the type of entity:

  • the first two bytes contain the entity type, with the other two containing flags (this hasn't been fully confirmed)
  • if the entity is a record in a table, these first two bytes will be the offset to the record key and the other two will be the key length.

If the entry is a table record,

  • the next two bytes are the record flags,
  • the next two bytes are the value offset
  • the next two bytes are the value length
  • the next two bytes are padding (0's)

These values can be seen in the memory structures described in [2]. An example record looks like:

bytes 0-3: 50 00 00 00 # attribute length
bytes 4-7: 10 00 10 00 # key offset / key length
bytes 8-B: 00 00 20 00 # flags / value offset
bytes C-F: 30 00 00 00 # value length / padding

bytes 10-1F: 00 00 00 00   00 00 00 00   20 05 00 00   00 00 00 00 # key (@ offset 0x10 and of length 0x10)
bytes 20-2F: E0 02 00 00   00 00 00 00   00 00 02 08   08 00 00 00 # -|
bytes 30-3F: 1F 42 82 34   7C 9B 41 52   00 00 00 00   00 00 00 00 #  |-value (@ offset 0x20 and length 0x30)
bytes 40-4F: 08 00 00 00   08 00 00 00   00 05 00 00   00 00 00 00 # -|
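Under the same assumptions, the record header can be unpacked and used to slice out the key and value (a sketch, not a confirmed layout):

```ruby
# Record header layout per the analysis above (all offsets are guesses):
# bytes 4-7 key offset/length, 8-9 flags, A-B value offset, C-D value
# length; all fields 16-bit little endian.
def parse_record(entry)
  key_off, key_len, _flags, val_off, val_len = entry[0x04, 10].unpack('v5')
  {
    key:   entry[key_off, key_len],
    value: entry[val_off, val_len]
  }
end
```

Applied to the 0x50-byte example record above, this returns the 0x10-byte key at offset 0x10 and the 0x30-byte value at offset 0x20.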


Various attributes and the values within them take on particular meanings.

  • the first attribute (type 0x28) has information about the page contents,
  • Bytes 1C-1F of the first attribute seem to be a unique object-id / type which can identify the intent of the page (it is consistent between similar pages on different images). It is also repeated in bytes 0x20-0x23
  • Byte 0x20 of the first attribute contains the number of records in the table. This value is repeated in the record collection attribute. (see next bullet)
  • Before the table collection begins there is an 0x20 length attribute, containing the number of entries at byte 0x14. If the table gets too long this value will be 0x01 instead and there will be an additional entry before the collection of records (this entry doesn't seem to follow the conventional rules as there are an extra 40 bytes after the entry end indicated by its length)
  • The collection of table records is simply a series of attributes, all beginning w/ the same header containing key and value offset and length (see previous section)

Special Pages

Particular pages seem to take on specified connotations:

  • 0x1e is always the first / root page and contains a special format. 0x1f is skipped before pages start at 0x20
  • On the image I analyzed 0x20, 0x21, and 0x22 were individual pages containing various attributes and tables w/ records.
  • 0x28-0x38 were shadow pages of 0x20, 0x21, 0x22
  • 0x2c0-0x2c3 seemed to represent a single table with various pages being the table, continuation, and shadow pages. The records in this table have keys w/ a unique id of some sort as well as cluster id's and checksum so this could be the object table described in [1]
  • 0x2c4-0x2c7 represented another table w/ shadow pages. The records in this table consisted of two 16 byte values, both which refer to the keys in the 0x2c0 tables. If those are the object id's this could potentially be the object tree.
  • 0x2c8 represents yet another table, possibly a system table due to its low virtual page number (01)
  • 0x2cc-0x2cf - consisted of a metadata table and its shadow pages; the 'ReFs Volume' volume name could be seen in the UTF there.

The rest of the pages were either filled with 0's or were non-metadata pages containing content. Of particular note are pages 0x2d0 - 0x2d7 containing the upcase table (as seen in ntfs).


I've thrown together a simple ReFS parser using the above assumptions and put it up on github via a gist.

To use it, download it and run it with ruby:

ruby resilience.rb -i foo.image --offset 123456789 --table --tree

You should get output similar to the following:

Of course if it doesn't work it could be because there are differences between our images that are unaccounted for, in which case if you drop me a line we can tackle the issue together!

Next Steps

The next steps on the analysis roadmap are to continue diving into the page allocation and addressing mechanisms; there are most likely additional mechanisms to navigate to the critical data structures directly from the first sector or page 0x1e (since the address of that is known / fixed). Continuing to investigate each page and analyze its contents, especially in the scope of various file and system changes, should also go a long way toward revealing semantics.


August 31, 2014

Google Summer of Code: final update

This has been one of the most interesting summers I have had: I worked with the Google Summer of Code program. Being my first time, it was really interesting to work alongside the wonderful people in the open source community.

This summer I was working with the fedora-infra team on the project Fedora College. The project was to develop a virtual classroom environment for new contributors in the community. Though it may look a bit mundane, it surely does help to solve a major problem, i.e. introducing new contributors to the Fedora project.

The main part of the project (i.e. the coding part) was completed well within the proposed timeline; what remains is packaging the project and deploying it on Fedora infrastructure, which we are planning to do in the coming days. We are planning to launch the project as soon as possible, but a large responsibility also lies with the team that manages and creates the content for the Fedora project. The project lays great emphasis on video content and multimedia lectures, so we need a dedicated team for managing these things.

Also, the web application is currently not too tightly coupled with the Fedora infrastructure. The API classes and the fedora message parser are well written, though, and can help significantly with further integration with the fedora messaging system.

So, once the project was completed I moved to Hong Kong for my master's studies. From the curriculum it appears I won't have much time to continue the community effort, but I can surely help when I am needed.

So, anyone who would like to contribute and test things can do so here:

Thanks for reading this post.

August 20, 2014

GSOC : Final Update
After months of scouring the xlators to get to know how they work, we've finally come to an end with my glusterfsiostat project. You can check out the latest source code from

I've worked at a fast pace over the past couple of days and completed all the remaining tasks. This includes finishing up the Python web server script, which now supports displaying live I/O graphs for multiple Gluster volumes mounted at the same time. The server and the primary stat scripts now also generate an error if profiling is not turned on for a volume. Following are some screenshots of the server live in action.


The approach here is quite similar to Justin Clift's tool( but I've tried to build this as a bare bones package, since Justin's tool requires you to first set up an extensive stack (Elastic Search server, logstash etc.). My aim is for this tool to be self-sufficient: anyone should be able to download it once and use it, without chasing dependencies first. The response from my mentor about the work done has been pretty supportive. I look forward to improving this project and working on some more exciting ones with GlusterFS in the future.

August 18, 2014

Bugspad and Future Plans

Code cleaning, rigorous testing, and bug fixing went on this week. I tested the instance with bigger datasets and measured response times.
The current code is also a bit untidy and needs some refactoring, so I have started work on a design document with explicit details
of the application's workflow. I'm hand-drafting it for now and will digitise it later (for the time being I am uploading my workflow explanation
charts, which are not of great quality :P ). I have divided the whole workflow by URL, and then subdivided it into the components which directly
affect performance (especially speed), i.e. all the SQL/cache queries being made. This gives a clear idea of the purpose and role
of each component. It should also invite more contributors and make the current code easier to understand. Finally, it lets us focus
on the time bottlenecks in the workflow and experiment with different available tools and methods. That's what I'll be doing next week. Cheerio!




August 15, 2014

Google Summer of Code 2014
Summer of Code week 11 Report

So, here we are, in the last few weeks of Google Summer of Code. It has been a really exciting and happening journey for me. I consider this my first real set of contributions to any open source project.

As always, the last few weeks of any project go towards documentation and testing, and so did mine. During this period we worked on a set of targets and pushed them to my mentor's repository.

To list them briefly:

1. Added category and sort-by-category features. These endpoints, though quite effective in helping users find content, had been largely left uncovered during the project.

2. Worked on improving the project documentation. Inclusive of the API docs, Project Docs, Sample content and the code docs.

3. We created a request-for-resources ticket with the fedora admins for initial deployment of the project.
Ticket  :

4. But before we can actually get the project hosted, it's important to package the product and get it included in the Red Hat Bugzilla.

I guess most of the remaining time will be spent on packaging and testing. The project has some external dependencies and may require us to revamp the code.

Demo for the project is : or (Visible only to a group member of fedora project i.e. Summer coding group and the Proven packagers group.)

Hammad Haleem

August 14, 2014

Flock 2014: Report

Flock 2014 took place in Prague this year. Here’s a photo to start with:

Tshirt ;)

Although Flock wasn’t the first FOSS/Dev/Tech conference I have been to, it was my first Fedora-specific event. Definitely special in its own way - most of the speakers and attendees are from within the Fedora community, so basically every third person is someone you have chatted with over irc, seen on planet, or someone whose wiki page you have stumbled upon at some point. Which means, you walk to the end of the corridor, look at a person’s badge and say “Oh, so YOU are that guy!”

I wouldn’t say I attended a lot of the talks, since many of them would go over my head; but I did spend a good deal of time interacting with the people present, offering to help with their projects and finding potential contributors for mine. After all, the Flock organizers were awesome enough to live stream and upload all the sessions on YouTube (I’m still watching some): Flock channel on YouTube.

The first day started with the opening by Matt, followed by a keynote by Gijs Hillenius on how FOSS has been accepted in the EU. It was inspiring; I was left wondering how I can make an impact at least at the University level for a start. The next one I attended (online though) was on the State of Fedora Fonts, by Pravin Satpute.

I spent most of the remainder of the first day in the hackroom, reviewing slides for my talk, scheduled for later during the day. Mine followed Marina’s session on Gnome OPW which I attended in part - she did a great job of outlining how the community had succeeded in increasing participation of women within FOSS communities, events and projects. Her talk also reminded me that Marie (riecatnor) was here at Flock! I’ve known Marie for a while through the Fedora design IRC, but we hadn’t met in person.

Soon after, I did a talk called the Curious Case of Fedora Freshmen. It covered how freshmen find it difficult to cope with more experienced folks discussing complex things around them or simply not paying enough attention to them. I brought up various programs that would help freshmen, should they be worked on. The talk was followed by a pretty extensive discussion. I’ll start Wiki pages for some of the stuff I brought up during the session within a couple of weeks’ time. Slides for my talk are here and the video is here.

One interesting session I attended was by Chris Roberts and Marie Catherine Nordin - on Fedora badges. Chris did a quick run through about how fedmsg awards badges, and Marie followed up with her Fedora badges internship (which I have been impressed with like forever). Post-session, I introduced myself to Marie and we’ve been super friends since! :D

In fact, at this point, you should read Marie’s blog post about Flock. It covers a bit of what we did over our own impromptu hackfest - as she calls it ;) Marie spent a great deal of time explaining to me how to play with nodes, and we worked a bit on Waartaa’s logo. She also did a Glyph from scratch; it was quite amazing!

Snake man!

Among other sessions I attended were one on state of the Ambassador’s Union by Jiri, Advocating by Christoph, Improving Ambassadors Mentor program by Tuan (online) and Meet your FesCo (filled with Josh Boyer’s humor).

Fun stuff: On the first evening was FudPub, where we competed over who can take in more beer ;) Thanks to the organizers, we were on a boat another evening. We did a tour of Prague during the night - pretty mesmerizing! On other days, Gnokii took us to Budvarka, where some of us had lots of local food and beer ;)


On the final day, Marie, Ralph, Toshio, Pingou, Arun and I, among others, went on a quick city trip before we headed home :) I’ll have to say, I’m now richer in terms of memories and a few badges ;)

Me in Prague

I’d really like to thank the Fedora community for having supported me on this trip! You guys deserve a badge ;)

August 11, 2014

Xtreme Programming into Bugspad

I am now in the last phase of my project. Due to delays and reduced communication post mid-term, caused by my poor internet connectivity (which was my sole responsibility, and I accept that :( ), I could not discuss much with my mentor. However, I am planning to use my backup plan to discuss what remains, and make the implemented features robust and error-free. It will be the toughest week of the project, testing and fixing bits of the code. Here I go to have a taste of XP (Extreme Programming).
PS: Was short on words!

August 09, 2014

GSoC - week 10+11
Last week I've been working on improving the LVM plugin thinpool sharing capabilities. I didn't explain LVM thin-provisioning before, so I'll do it now to rationalize what I'm doing.
With classical volumes you have a volume of a given size, and it always occupies the whole space that was given to it at creation time, which means that when you have more volumes, you usually can't use the space very efficiently: some of the volumes aren't used up to capacity whereas others are full. Resizing them is a costly and risky operation. With LVM thin-provisioning there's one volume called a thinpool, which provides space to the thin volumes created within it. The thin volumes take only as much space as they need and don't have a physical size limit (unless you set a virtual size limit). That means that space not used by one volume can be used by another.
Previously there was one thinpool per configuration, which corresponded to one buildroot. It could have snapshots, but there was still only one buildroot that could be used at a time. Now you can use mock's unique-ext mechanism to spawn more buildroots from a single config. Unique-ext was there even before I started making changes, but now I implemented proper support for it in the LVM plugin. It's a feature that was designed mostly for buildsystems, but I think it can also be very useful for regular users who want to have more builds running at the same time. With the LVM plugin the thinpool and all the snapshots are shared between the unique-exts, which means you can have multiple builds sharing the same cache, and each one can be based on a different snapshot. The naming scheme had to be changed to have multiple working volumes where the builds are executed. Mock implements locking of the initial volume creation, so if you launch two builds from the same config and there wasn't any snapshot before, only one of the processes will create the initial snapshot with base packages. The other process will block for that time, because otherwise you'd end up with two identical snapshots and that would be a waste of resources.
Another sharing mechanism that is now implemented is sharing the thinpool among independent configs. In that case the snapshots aren't shared, because the configs can be entirely different (for example different versions of Fedora), but you can have just one big logical volume (the thinpool) for all mock-related stuff, which can save a lot of space for people who often use many different configs. You can set it with config_opts['pool_name'] = 'my-pool' and then all the configs with pool_name set to the same name will share the same thinpool.
Other than that I was mostly fixing bugs and communicating with upstream.

This week I've been on Flock and it has been amazing. There were some talks that are relevant for mock, most notably State of Copr build service, which will probably use some of the new features of mock in the future and Env&Stacks WG plans, which also mentioned mock improvements as one of their areas of interest.

August 06, 2014

Up from slumber into the pre alpha release mode.

A really long gap of a week, the slumber period, was caused by the havoc-wreaking TPSW department of our college, due to which I could not work much. Fortunately, that period is now over. Since I could not devote much time to my work, I am going to make up for it in the coming days. The work that remains:

  • Redisifying the mysql queries.
  • Fixing any incumbent bugs
  • Working with my mentor on rpm packaging of the code.

The extra features for the admin interface, such as permissions and groups, are due after this; we will be working on them in the spirit of the pre-alpha release.

August 05, 2014

Google Summer of Code 2014: Weekly update

Google Summer of Code 2014: Week 8-10 update

We are in the penultimate weeks of the Summer of Code program; it has been a most exciting summer so far, and here we have reached the eleventh week. This week was quite fruitful: we worked on various small aspects of Fedora College, tweaking things and generally concentrating on making things better.

To summarize, this week I worked on the following things:
  1. API documentation: I wrote the documentation for the API and designed web pages to display it. When you install the project, you can view the API documentation at /api/docs/
  2. Yohan pointed out some interesting issues that needed to be taken care of, like the decorators for auth, making paths dynamic, and other minor changes.
  3. Made the GUI admin portal usable and created a template for the admin page.
  4. There were some errors in the delete-media endpoint, which were corrected for good.
  5. Also, there were a couple of other issues, that were addressed this week.
Now the project has been formally added to the fedora-infra,

Demo for the project is : or (Visible only to a group member of fedora project.)

Also, the project doesn't support user registrations, so you'll need to register for a Fedora account and then authenticate using Fedora Project OpenID.

Also, this week we students planned a Google Summer of Code meetup at my university, emphasizing how to contribute to the open source community. More details about the meetup can be found here : . The meetup is quite interesting, and GSoC student participants from various organizations will be coming down to speak about what they did this summer.

Thanks for Reading through the Post.
Hammad Haleem

July 30, 2014

Google Summer of Code 2014: Week 8-10 update


Hello Folks, 

My project is almost complete. In the past weeks we worked on a few things, and I would like to give an update on the status of my project. Also, this time I have included a couple of screenshots in the blog post.

To be precise, in the previous weeks I have been working on the following things:
  1. Improved the GUI for the home page. The CSS was inspired by Pinterest. You can see a demo here.
  2. Also, we worked on the parser class for the fedora messaging bus. So, that messages sent from the fedora college can be easily parsed.
  3. I have also added  ability to rate and mark tutorials as favorites. Below are presented some screenshots about the same. Though this is not currently reflected in the demo, but is present in the code published at my repository. 
  4. There is a list of to-do's present here : Once I am done with these, they can be added to the Redhat BugZilla. THis can be created as a package and added to fedora.

The project has now been formally added to fedora-infra.

Demo for the project is : or (Visible only to a group member of fedora project.)

Also, the project doesn't support its own user registrations, so you need to register for a Fedora account and then authenticate using Fedora's OpenID.

Thanks for Reading through the Post.

July 29, 2014

IsItFedoraRuby new design

The past week I tried to do something about the looks of isitfedoraruby. It was fun using Bootstrap (my first time) and I think the outcome is cool. I tried to use Fedora-like colors, and the font is Liberation Sans, the same as the Fedora pkgdb.

You can check the overall changes:


Tables are now borderless, with highlighted headings. They are also responsive, which means that if a table is wider than the page it gets its own scrollbar without breaking the rest of the site.


index page

The index page shows all packaged rubygems along with some interesting info. A package highlighted in red is out of date; green means it is up to date with the latest upstream.

The code that does that is pretty simple. Bootstrap provides some CSS classes for coloring, so I used success for up-to-date and danger for outdated packages. Since I highlighted the whole table row, I used:

%tr{class: rpm.up_to_date? ? 'success' : 'danger'}

In particular check line 19.

show page

Previously there was a ton of information all in one page. Now, the info is still there but I have divided it into tab sections.

Currently there are 5 tabs.

The main tab has a gem's basic info:

  • Up to date badge (green yes or red no)
  • Gitweb repository url
  • SPEC file url
  • Upstream url
  • Maintainer FAS name
  • Number of git commits
  • Last packager (in case a package is co-maintained)
  • Last commit message
  • Last commit date
  • Description

Basic Info

Then there is a tab about version information:

  • Table with gem versions across supported Fedora versions (rawhide, 21, 20)


Another important tab is a list of a package's dependencies:

  • One table of dependencies, with a column stating whether each is a runtime or development dep
  • One table of dependent packages


The bugs tab depicts all of a package's open bugs for Fedora in a table.


And lastly koji builds for only the supported Fedora versions.


rubygems show page

The description is now on top of the page. Instead of one column, the new look has two columns: one for basic info and one for the dependencies table.

Compare rake:

owner page

I added some info on top of the page about the number of the packages a user owns:

  • Total
  • Up to date
  • Outdated

The table that has an owner's packages is also highlighted to depict outdated and up to date packages.

Here's an embarrassing screenshot which reminds me I have to update my packages...

Owner page

The navigation bar was a PITA to configure and make as responsive as possible. There were a lot of bits and pieces needed to fit together, here are some of them.

I used a helper method which I found in this SO answer.

I used the same colors as Fedora pkgdb. With the help of a Firefox extension named ColorPicker I gave the navbar the color it has now. twbscolor is a cool site that outputs your chosen color scheme even in SCSS, which I used along with some minor tweaks.

In responsive mode there is a dropdown menu. That requires some javascript and the steps are:

1. Add *= require bootstrap in app/assets/stylesheets/application.css

2. Add //= require bootstrap in app/assets/javascripts/application.js

3. Add in app/assets/javascripts/application.js:

  toggle: false

4. Add Bootstrap classes to the header view:

      %button.navbar-toggle{ type: 'button', data: {toggle: 'collapse', target: '#header-collapse'}} 'Toggle navigation'
      = link_to 'FedoraRuby', root_path, class: 'navbar-brand'

    %nav.collapse.navbar-collapse#header-collapse{role: 'navigation'}
      %ul.nav.navbar-nav
        %li{class: is_active?(root_path)}
          = link_to _('Home'), root_path
        %li{class: is_active?(rubygems_path)}
          = link_to _('Ruby Gems'), rubygems_path
        %li{class: is_active?(fedorarpms_path)}
          = link_to _('Fedora Rpms'), fedorarpms_path
        %li{class: is_active?(about_path)}
          = link_to _('About'), about_path

Search field

I wanted the search field to sit together with the search button. In Bootstrap this is accomplished with input group buttons. The final code was:

    = form_tag( { :controller => 'searches', :action => 'redirect' },
    :class => 'navbar-form', :method => 'post') do
        = text_field_tag :search, params[:search] ||= '',
            class: 'search-query form-control',
            placeholder: 'Search'
          = button_tag raw('<span class="glyphicon glyphicon-search"></span>'), name: nil, class: 'btn btn-default'

Instead of a search button with text, I used an icon.

There was also another problem regarding responsiveness: at certain page widths the header looked ugly and the search bar dropped under the menu.

I fixed it by adding a media query in custom.css.scss that hides the logo at those widths.

@media (min-width: 768px) and (max-width: 993px) {
  .navbar-brand {
    display: none;
  }
}

Here are before/after screenshots to better understand it.



Responsive design

Bootstrap comes with responsiveness by default. In order to activate it you have to add a viewport meta tag in the head of your html, so in app/views/layouts/application.html.haml add:

%meta{ :content => "width=device-width, initial-scale=1, maximum-scale=1", :name => "viewport" }

See full application.html.haml

It sure was fun and I learned a lot during the process of searching and fixing stuff :)

Testing, testing and more testing

The previous week was very eventful. My mentor and I were discussing the plans for implementing the groups and permissions feature, which I had planned earlier. However, we concluded that it would be better to clean up the current code and perform more rigorous testing, so that the currently implemented features are robust and performance centric. So I have placed the permissions stuff on the shelf for the time being. So far I have been testing via the API; I filed 1.7 million bugs or so, only to realise that I wouldn't be able to access them, as I had missed the product versions in each, which I made compulsory as a design decision. So I fixed that part, and am refiling more bugs. The results of the testing done so far, with only one user making a request at a time, are as follows:

  • Fetching 10,000+ bugs takes around 1-2 seconds.
  • Filing a bug via the API takes around 2-3 seconds (on average).
  • Filing bugs via the UI (mechanize) takes 4-5 seconds (on average).

I know the above numbers are not impressive; the reason behind them is that I used MySQL in places where I should have used Redis. So I am onto that now, plus more testing, which will be followed by the initial RPM packaging of the application. :D

July 26, 2014

Updates with GlitterGallery

Personally, I’ve been troubled with illness for a while now. College has started and time has gotten scarce. However, work on GG is going great as usual. As always, I’d recommend running through the demo hosted at , since design work is best experienced first-hand.

For Fedora folks following the project, I’d like to mention some highlights:

For starters, we now have a “remember me” option, because a couple of people said it’s trouble having to enter login credentials every time they’re on a new instance of their browser. Of course, it’s optional; it’s only meant to aid you. We’re facing trouble with the third-party login, which suddenly seems to have broken. Paul is investigating it at the moment.

Login page

I’ve improved the toolbars on the Project and User pages. There are slight changes to the transitions, and the active element is now highlighted properly.


Paul recently rolled out the server-side stuff for GlitterGallery Issues. I’ve also given it some front-end polish. Here are some screenies:

Issues New

Issues List

Other areas I’m currently working on:

  1. Slideshow display for project images (80% complete)
  2. Multiple uploads for project components (50% complete)
  3. OpenShift QuickStart (stuck)

July 24, 2014

GSoC - Mock improvements - week 9
This week I've been mostly focusing on minor improvements and documentation (manpages). Almost all my changes have already been submitted upstream and, if everything goes well, you can expect a new release of mock to be available in rawhide in the near future. I merged changes from the ready branch to master, so now they should differ only in minor things. (Sorry for the duplicates in git history; I didn't realize that beforehand.)

Support for Mikolaj's external nosync library was added and the old implementations that existed as part of mock were dropped. You can enable it by setting
config_opts['nosync'] = True
and you have to install the nosync library (mock doesn't require it in the specfile, because it's not available everywhere). If the target is multilib, both architectures of the library need to be installed so there is a preload library for both types of executables. If they aren't, mock will print a warning and nosync won't be activated. If you can't install both versions and still want to use it, set
config_opts['nosync_force'] = True
but expect a lot of (harmless) warnings from . The library is available in rawhide (your mirrors might not have picked it up yet).

The LVM plugin was moved to a separate subpackage and conditionally disabled on RHEL 6, since it requires lvm2-python-libs and a newer kernel and glibc (for setns). One of the things I had to sacrifice when making the LVM plugin was the IPC namespace unsharing that mock has used for a long time. The problem was that lvcreate and other commands deadlocked on an uninitialized semaphore in the new namespace, so I temporarily disabled it and hoped I'd find a solution later. And I did: I wrapped all functions that manipulate LVM in a function that calls setns to get back to the global IPC namespace and, after the command is done, calls setns again to return to mock's IPC namespace.
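The setns round-trip described above can be sketched in Python roughly like this (a simplified illustration, not mock's actual code; the flag constant follows the setns(2) man page, and setns failures, e.g. when run without CAP_SYS_ADMIN, are ignored here):

```python
import ctypes
import os

CLONE_NEWIPC = 0x08000000  # from <sched.h>, per setns(2)
libc = ctypes.CDLL(None, use_errno=True)

def run_in_ipc_ns(fn, ns_path="/proc/1/ns/ipc"):
    """Call fn() after hopping to the IPC namespace at ns_path (PID 1's,
    i.e. the global one, by default), then switch back to our own."""
    own = os.open("/proc/self/ns/ipc", os.O_RDONLY)
    target = os.open(ns_path, os.O_RDONLY)
    try:
        libc.setns(target, CLONE_NEWIPC)   # leave mock's IPC namespace
        return fn()
    finally:
        libc.setns(own, CLONE_NEWIPC)      # and return to it afterwards
        os.close(own)
        os.close(target)
```

In mock's case, `fn` would be the lvcreate/lvremove wrapper that previously deadlocked on the uninitialized semaphore.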

One of the other problems I encountered is Python 2.7's handling of the SIGPIPE signal. It sets it to ignored and doesn't set it back to default when it executes a new process, so a shell launched from Python 2 (by Popen or execve) doesn't always behave the same as a regular shell.
Example in shell:
$ cat /dev/zero | head -c5
# cat got SIGPIPE and exited without error

$ python -c 'import subprocess as s; s.call(["bash"])'
$ cat /dev/zero | head -c5
cat: write error: Broken pipe
# SIGPIPE was ignored and cat got EPIPE from write()

It can be fixed by calling
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
in Popen's preexec_fn argument.

For cat this is just an example and it doesn't make much difference. But if you put tee between the cat and the head, it will loop indefinitely instead of exiting after the first 5 bytes. And there are lots of scripts out there relying on the standard behavior. It actually bit me in one of my other programs, so I thought it was worth sharing, and I also fixed it in mock.
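Putting the fix together in a small sketch (Python 3 shown; note that Python 3's Popen restores signal dispositions by default, so the preexec_fn mainly matters on Python 2):

```python
import signal
import subprocess

def restore_sigpipe():
    # Undo Python's SIG_IGN so the child inherits the default disposition
    # and pipelines behave like ones started from a regular shell.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

proc = subprocess.Popen(
    "cat /dev/zero | head -c5 | wc -c",
    shell=True,
    stdout=subprocess.PIPE,
    preexec_fn=restore_sigpipe,
)
out, _ = proc.communicate()
# With SIGPIPE back at its default, cat dies silently when head exits
# and wc reports the 5 bytes that made it through.
print(out.decode().strip())
```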

July 23, 2014

GSOC Week 8+9 : To be or not to be
These past two weeks I've been busy, since my college re-opened and I spent my past weekend coding away in an overnight hackathon. As instructed by my mentor, I spent this week testing my recent patch, to see whether always enabling I/O profiling in io-stats really degrades I/O performance or not.

For this, I performed two write tests, one with a 20 MB file and the other with a 730 MB file. Each file was written 20 times to the mounted volume, clearing the buffers on every iteration, and the time taken was measured with the time command. Since the values for writing the same file vary quite a bit between runs, I plotted a graph of the obtained values (the Y-axis represents seconds). As you can see in these images, there is no clear pattern in the variation of the measured values.
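A rough sketch of that timing loop (Python for illustration, not the actual test script; dropping the page cache needs root, and failures to do so are silently ignored here):

```python
import os
import subprocess
import tempfile
import time

def time_writes(src, dest, runs=20):
    """Copy src to dest `runs` times and return each copy's wall-clock time.
    The page cache is dropped before every run (needs root; errors ignored)."""
    times = []
    for _ in range(runs):
        subprocess.call("sync; echo 3 > /proc/sys/vm/drop_caches",
                        shell=True, stderr=subprocess.DEVNULL)
        start = time.time()
        subprocess.call(["cp", src, dest])
        times.append(time.time() - start)
    return times

# Tiny demo on a throwaway file (the real test used 20 MB and 730 MB files
# written to the mounted GlusterFS volume):
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "src.bin")
    with open(src, "wb") as f:
        f.write(b"x" * 1024)
    samples = time_writes(src, os.path.join(d, "dest.bin"), runs=3)
```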

So in my view, the values under both conditions are quite close to each other and equally capable of going well above or below the mean value; hence, there is no negative effect from the proposed change. You can follow this discussion on the ML at

July 21, 2014

Groups, permissions and bugspad.

This week was not very productive in terms of the amount of code written, as I was traveling by the grace of Indian Railways. I finally am in my hostel room. However, I used this time to plan things out and also to test the bugspad instance on the server. I made a script to do so using the mechanize and requests libraries of Python, which I'll be adding to the scripts section of the repo. I am also working on the permissions stuff in a new branch. Instead of having groups I am planning to have usertypes, keeping it product centric. This requires a minor change in the schema, as I will be using charfields to denote the user types: for example, "c1" for users assigned to the group with component id 1, and similarly "p1" for users with product id 1. I'll discuss the missing features and how to go about them further with upstream, i.e. my mentor.
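The charfield encoding described above is simple enough to sketch (hypothetical helpers in Python for illustration; bugspad itself is written in Go):

```python
def encode_usertype(kind, obj_id):
    """Encode a user's scope as described above: 'c<id>' for a
    component-level user, 'p<id>' for a product-level one."""
    prefixes = {"component": "c", "product": "p"}
    return prefixes[kind] + str(obj_id)

def decode_usertype(code):
    """Invert encode_usertype: split the kind prefix from the numeric id."""
    kinds = {"c": "component", "p": "product"}
    return kinds[code[0]], int(code[1:])
```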

July 18, 2014

GSoC - Mock improvements - week 8
Good news: we're merging my changes upstream. There have been a lot of changes and the code wasn't always in the best shape, so I didn't want to submit it before all major features were implemented. Mirek Suchy agreed to do the code review and merge the changes. Big thanks to him for that :)
I've set up a new branch, rebased it on top of current upstream, and tried to revisit all my code and get rid of changes that were reverted/superseded or are not yet appropriate for merging. I squashed fixup commits into their original counterparts to reduce the number of commits and changed lines.
The changes that weren't submitted are:
  • C nofsync library, because Mikolaj made a more robust nosync library that is packaged separately and therefore supersedes the bundled one.
    I did the review:
    That way mock can stay noarch, which gets rid of lots of packaging issues, and also saves me a lot of problems with autoconf/automake. There is no support for it yet, because I need to figure out how to make it work correctly in a multilib environment.
  • nofsync DNF plugin - it's an ugly DNF hack and I consider it superseded by the aforementioned nosync library
  • noverify plugin - it's also a DNF hack; I will make an RFE for optional verification in DNF upstream instead
Everything else was submitted, including the LVM plugin. The merging branch is not pushed to github because I frequently need to make changes by interactive rebasing, and force-pushing the branch each time kind of defeats the purpose of SCM.
Other than that I was mostly fixing bugs. The only new features are the possibility of passing additional command-line options to rpmbuild, such as --rpmfcdebug, via the --rpmbuild-opts option, and the ability to override the executable paths for rpm, rpmbuild, yum, yum-builddep and dnf, in order to be able to use a different version of the tools than the system-wide one.
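For illustration, the two new features might be used like this (the config key names below are assumptions based on the description, not verified against mock's manpage, so check the documentation for the real ones):

```python
# Sketch of a mock config fragment overriding tool paths (key names assumed):
config_opts['rpm_command'] = '/opt/rpm-devel/bin/rpm'
config_opts['rpmbuild_command'] = '/opt/rpm-devel/bin/rpmbuild'

# And passing extra options through to rpmbuild on the command line,
# using the --rpmbuild-opts option named above:
#   mock foo-1.0-1.fc21.src.rpm --rpmbuild-opts "--rpmfcdebug"
```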

July 16, 2014

HOPE X Lightning Track

Planning to be in the city this weekend? Want to give a short presentation at one of the world's biggest hacker/maker conferences? I'm helping organize the Lightning Talks Track at this year's HOPE X. All are welcome to present, topics can be on anything relevant / interesting.

If you're interested, simply email me or add yourself to the wiki page and we'll be in touch with more information. Even if you don't want to give a talk, I encourage you to check out the conf schedule; it's looking to be a great lineup!

Happy hacking!

***update 07-26***: Conference was great and talks were awesome. Besides a few logistical glitches, all went well. The final speaker / presentation lineup can be seen on the HOPE Wiki (also attached below). Many thanks to all that participated!




July 14, 2014

bugspad missing features

This week consisted of reading up on the missing features of bugspad and planning how to incorporate them. I went through the design docs of Bugzilla, whatever was available :P.

  • Group permissions and what to choose for the alpha state of bugspad.
  • Flags to be used for both bugs and attachments.
  • Mailing server setup and handling of mails for the cc list. My mentor and I have discussed this, and he is going to help me get going.
  • Testing on bigger data sets on the infra system.

I missed the infra meeting last Thursday due to my stupid Internet woes, which are finally going to end as I return to my college. :D

Google Summer of Code, week 8 update.

It's the 8th week of Google Summer of Code. According to the submitted schedule, I was supposed to complete my work on the backend module and start my work on the GUI from the 9th week. This week was more about polishing things up.

I would like to again refer to the , where we usually have discussions about the things left to do.

With a target to complete the backend module and start on documentation and the GUI next week, we did some last-minute polishing and almost finalized the working of the product.

So, broadly speaking, I can list what I was up to in the previous week:
  1. Added support for fedmsg:
    1. Added the ability for the application to publish fedmsg messages for the following actions:
      1. Upload of any media content
      2. Creation / revision of content
  2. Worked on the email system, enabling emails to be sent for user registrations.
  3. Worked on the admin panel. I was using flask-admin, and it was not showing the foreign key relations properly; this turned out to be an error in the database models.
  4. Other smaller changes include GUI improvements, smaller bug fixes and pagination.
Also, we have sent a request to Ralph Bean to help us set up a staging environment for fedora-college. So, with the initial demo of the product ready, you will actually be able to see it on the staging environment.

The project has now been formally added to fedora-infra.

Demo for the project is : /

Also, the project doesn't support its own user registrations, so you need to register for a Fedora account and then authenticate using Fedora's OpenID.

Thanks for Reading through the Post.

July 12, 2014

isitfedoraruby gsoc midterm sum up

This sums up my past month's involvement with the project. A lot of reading in between...


I added a changelog so that the changes are easily seen. So here it is (this week is v0.9.1):

v 0.9.1

- Refactor rake tasks
- Source Code uri in fedorarpms, points to pkgs.fp.o gitweb
- Add first integration tests
- Retrieve commit data via Pkgwat
- Show name of last packager in fedorarpms#show
- Show last commit message in fedorarpms#show
- Show last commit date in fedorarpms#show
- Use api to fetch rawhide version instead of scraping the page
- Retrieve homepage via Pkgwat
- Fix duplication of dependencies in fedorarpms#show
- Do not show source url in rubygems#show if it is the same as the homepage
- Do not show source url in fedorarpms#show if it is the same as the homepage
- Split methods: versions, dependencies in fedorarpm model
- New rake tasks to import versions, dependencies and commits
- Show last packager in fedorarpms#show
- Show last commit message in fedorarpms#show
- Show last commit date in fedorarpms#show

v 0.9.0

- Remove unused code
  - Remove HistoricalGems model
  - Remove Build controller/view
  - Remove methods related to local spec/gem downloading
  - Remove empty helpers
  - Cleaned routes, removed unused ones
- Conform to ruby/rails style guide
- Maintainer field for packages are now using the fas_name
- Automatically fetch versions of Fedora by querying the pkgdb api
- Added rake task to fetch rawhide version and store it in a file locally
- Show koji builds from supported Fedora versions only
- Bugs
  - Query bugs via api using pkgwat
  - Drop is_open from bugs table
  - Show only open Fedora bugs, exclude EPEL
- Hover over links to see full titles when truncated
- Rename builds table to koji_builds
- Added tests
  - Unit tests for models
- Added Github services
  - travis-ci
  - hound-ci
  - coveralls
  - gemnasium
- Development tools
  - shoulda-matchers
  - rspec
  - capybara
  - rack-mini-profiler
  - rubocop
  - factory_girl
  - annotate
  - railsroady

You should notice the version numbers. That's also a new addition: every week I will deploy a new version, so at some point at the end of the summer, version 1.0.0 will be released.

Here are some nice stats from git log.

Git stats: 91 commits / 4,662 ++ / 2,874 --

Rails/Ruby style guide

Fixed around 500 warnings that rubocop reported.


Added: unit tests for models.

Missing: a bunch of code still needs testing; rspec alone is not enough to properly test the API calls. I will use vcr and webmock in the future to cover these tests. Integration tests are also not complete yet.

Bugs fixed

wrong owners

Previously it parsed the spec file and checked the first email in the changelog. But co-maintainers also have the ability to build a package, in which case this shows wrong info. Another case is when a user changes their email: they are then taken into account twice, so when hitting /by_owner not all packages are shown. I was hit by this bug.

It now fetches the owner's FAS name using pkgwat, which I use to sort by owner.

dependencies shown twice

The current implementation scrapes the SPEC file of a rubygem via gitweb and then stores the dependencies. The problem is that when one uses gem2rpm, ~> is expanded to >= and <=, which leads to some dependencies being listed twice.

Double dependencies

The fix was quite easy. Here is the controller that is in charge of the show action:

  def show
    @name = params[:id]
    @rpm = FedoraRpm.find_by_name! @name
    @page_title =
    @dependencies = @rpm.dependency_packages.uniq
    @dependents = @rpm.dependent_packages.uniq
  rescue ActiveRecord::RecordNotFound
    redirect_to action: 'not_found'
  end

All I did was add uniq.

duplicate homepage and source uri

In a gem page you could see this:

Double homepage

The information is taken from the api. Some gems have the same page for both the gem's homepage and its source uri. The secret was lying in the view.
  %h3 Gem Information
    =link_to @gem.homepage, @gem.homepage
  - unless @gem.source_uri.blank?
      Source Code:
      =link_to @gem.source_uri, @gem.source_uri

All I did was to change this:

- unless @gem.source_uri.blank?

to this:

- unless @gem.source_uri.blank? || @gem.source_uri == @gem.homepage

So now it skips showing the source uri if it is the same as the homepage.


Show more info in fedorarpm show page

I added some more information on the fedorarpm page. It now shows the last packager, last commit message and last commit date. Useful if something is broken with the latest release and you want to blame someone :p

And since a package often has many co-maintainers, you get to see the real last packager.

Here's a shot of the page as it is now:

More info

Rake tasks

As I have made some major refactoring in the fedorarpms model, I split many methods into their own namespaces. For example, previously there was a single method for importing the versions and dependencies; now they are two separate ones.

As a consequence, I added rake tasks that can be invoked for a single package. Also, the namespace is now more descriptive.

The tasks are for now the following:

rake fedora:gem:import:all_names               # FEDORA | Import a list of names of ALL gems from
rake fedora:gem:import:metadata[number,delay]  # FEDORA | Import gems metadata from
rake fedora:gem:update:gems[age]               # FEDORA | Update gems metadata from
rake fedora:rawhide:create                     # FEDORA | Create file containing Fedora rawhide(development) version
rake fedora:rawhide:version                    # FEDORA | Get Fedora rawhide(development) version
rake fedora:rpm:import:all[number,delay]       # FEDORA | Import ALL rpm metadata (time consuming)
rake fedora:rpm:import:bugs[rpm_name]          # FEDORA | Import bugs of a given rubygem package
rake fedora:rpm:import:commits[rpm_name]       # FEDORA | Import commits of a given rubygem package
rake fedora:rpm:import:deps[rpm_name]          # FEDORA | Import dependencies of a given rubygem package
rake fedora:rpm:import:gem[rpm_name]           # FEDORA | Import respective gem of a given rubygem package
rake fedora:rpm:import:koji_builds[rpm_name]   # FEDORA | Import koji builds of a given rubygem package
rake fedora:rpm:import:names                   # FEDORA | Import a list of names of all rubygems from
rake fedora:rpm:import:versions[rpm_name]      # FEDORA | Import versions of a given rubygem package
rake fedora:rpm:update:oldest_rpms[number]     # FEDORA | Update oldest <n> rpms
rake fedora:rpm:update:rpms[age]               # FEDORA | Update rpms metadata

That was it for now. For any changes be sure to check out the changelog regularly!

GSoC 2014 - week 7
Hi again. I'm sorry I didn't post last week; I've been on vacation.
Here's what I've done this week:

Passing additional options to underlying tools
rpmbuild has an option --short-circuit that skips the stages of a build preceding the specified one. It doesn't build a complete RPM package, but it's very handy for debugging builds that fail, especially in the install section. This option was not accessible from within mock, and I already mentioned in my proposal that I wanted to make it available. The mock option is also called --short-circuit and it accepts an argument, either build, install, or binary, representing the first build phase to execute; the preceding phases are skipped.
Example invocation:
$ mock rnv-1.7.11-6.fc21.src.rpm --short-circuit install

For Yum or DNF, some of the options that are often used when the user invokes the package manager directly weren't available in mock either. --enablerepo and --disablerepo are very common ones and now they are also supported by mock; they're passed directly to the underlying package manager.
Example invocation:
$ mock --install xmvn maven-local --enablerepo jenkins-xmvn --enablerepo jenkins-javapackages
The repos, of course, have to be present in the yum.conf in the mock config.

Python 3 support
I started working on porting mock to Python 3. This doesn't mean that mock will run only on Python 3; I'm trying to preserve compatibility with Python 2.6 without the need to have two versions of mock. I changed the trace_decorator to use regular Python decorators instead of peak.utils.decorate and dropped the dependency on the decoratortools package. There are slight changes in traceLog's output that I don't consider important, but if someone did, it could be solved by using the python-decorator package, which is available for both versions. Some features are still untested, but the regularly used functionality is already working: rebuilding RPMs and SRPMs, working in the shell, and manipulating packages are tested. The plugins that are enabled by default (yum-cache, root-cache, ccache, selinux) also work. What doesn't work is the LVM plugin, because it uses lvm2-python-libs, which doesn't have a Python 3 version yet. The same applies to mockchain, which uses urlgrabber. To try mock with Python 3, either change your system's default Python implementation or manually hardcode python3 as the interpreter in the shebang in /usr/sbin/xmock.
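A regular-Python decorator of the kind that replaced peak.utils.decorate looks roughly like this (a generic tracing decorator for illustration, not mock's actual traceLog code; the % formatting keeps it compatible with both Python 2.6 and 3):

```python
import functools

def trace(fn):
    """Log entry and exit of the wrapped function, preserving its
    name and docstring via functools.wraps."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print("ENTER %s" % fn.__name__)
        try:
            return fn(*args, **kwargs)
        finally:
            print("LEAVE %s" % fn.__name__)
    return wrapper

@trace
def build(pkg):
    return "built %s" % pkg

result = build("rnv")
```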

July 09, 2014

Bookmarking chat logs in waartaa – GSoC post-midterm

The post-midterm phase of GSoC has already begun and there is still a lot of work to be done (mostly UI improvement and deploying all my previous work on the server).

Lately I wasn’t getting much time, but somehow I have managed to add a bookmarking feature to waartaa. I have added support for both single and multiple bookmarking, in both the live chat page and the search page.

Single Bookmark

Beside every chat message, a bookmark icon appears on hover. When the user clicks on it, the message gets bookmarked (in the front-end only) and a popup appears on top of the chat window with a ‘Label’ field whose default value is the chat message’s date-time, a ‘Done’ button to save the data in the db, and a ‘Cancel’ button for the obvious reason.

Multiple Bookmarks

It often happens that a user wants to bookmark multiple chat messages under one label, for instance to save a conversation that happened in some random IRC channel. It’s easy to bookmark multiple messages in waartaa: you just have to choose the two endpoints of a conversation, long-click (at least one second) one of them, and normal-click the other one. This will bookmark all messages in between, along with the endpoints.
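The endpoint-to-endpoint selection boils down to an inclusive range lookup; here is a language-agnostic sketch in Python (waartaa itself is JavaScript, and the names here are illustrative):

```python
def messages_between(log_ids, first, second):
    """Return every message id between two clicked endpoints, inclusive,
    whichever order the endpoints were selected in."""
    i, j = sorted((log_ids.index(first), log_ids.index(second)))
    return log_ids[i:j + 1]

# The user long-clicks m4, then normal-clicks m2:
ids = ["m1", "m2", "m3", "m4", "m5"]
selection = messages_between(ids, "m4", "m2")
```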

Bookmarks model

Bookmarks {
  label: String,                              // Bookmark label
  roomType: String ('channel'/'pm'/'server'),
  logIds: List,                               // chat message ids
  user: String,                               // username of user for whom bookmark is created
  userId: String,                             // user id
  created: Datetime,
  lastUpdated: Datetime,
  creator: String,                            // username of user who created bookmark
  creatorId: String
}

I know there isn’t much you can infer from the screenshots below, but this is all I have to share with you right now.


Single bookmarking


Multiple bookmarking


With this, the bookmarking feature is complete, and here is the PR 129.

<script>JS I love you.</script>

July 08, 2014

Google Summer of Code seventh Week update.

It's the seventh week of Google Summer of Code. The week has been quite hectic: we decided to release a version to all the Fedora Infra team members by the end of this week. So most of the time went into polishing stuff, making existing code more efficient and writing demos.

So, here we had discussions about targets and other stuff.

Formally, we worked to solve the following issues:

  1. Implementation of a blog for Fedora College, inclusive of a blog RSS feed.
  2. Uploads are now more efficient: rather than writing the whole file at once, we now upload in chunks.
  3. Added pagination to various modules and made the GUI more elaborate.
  4. Configured a welcome e-mail for the web application.
  5. Support for tags for content.
  6. Added support for a code highlighter, among other things.
  7. Wrote some demos to make the look of the web application presentable.
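The chunked upload mentioned in item 2 might be sketched like this (a hypothetical helper for illustration, not the project's actual code):

```python
import io
import os
import tempfile

def save_in_chunks(stream, dest_path, chunk_size=64 * 1024):
    """Stream an uploaded file to disk one chunk at a time, so the whole
    upload never has to sit in memory at once."""
    with open(dest_path, "wb") as out:
        while True:
            chunk = stream.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)

# Demo with an in-memory stand-in for an uploaded file:
with tempfile.TemporaryDirectory() as d:
    dest = os.path.join(d, "upload.bin")
    save_in_chunks(io.BytesIO(b"a" * 200000), dest, chunk_size=4096)
    size = os.path.getsize(dest)
```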

Now the project has been formally added to fedora-infra ( ), so the code will be reviewed and viewed by the whole community. We have worked really hard on this and are expecting good reviews. With the polishing done this week, I would say the project is almost complete.

Thanks for Reading through the Post.

July 07, 2014

GSOC Week 7 : Back on track
It's time to get back on track. Passing the midterms with supposedly good flying colors was really great. I apologize for my tardiness during the last two weeks, being unable to post any update regarding my progress, owing to the fact that I was not feeling very well during this time.

The progress till now includes re-thinking the previous patch and the methodology io-stats will use to dump the private info. As suggested by my mentor, I'm moving the job of speed calculation and other major work to the glusterfsiostat script rather than coding it all into the glusterfs codebase. You can look at the new patch here :

Also, my project was accepted to be hosted on Gluster Forge at , where you can track the progress of the python script and the rest of the code base related to my project.

Recently, my mentor and I have started to track our progress with the Scrum model, using Trello. This helps us break bigger jobs into smaller tasks and set a deadline on each of them, to better estimate their expected completion dates.