July 30, 2016

On managing Ruby versions

This is a short reflection on packaged Ruby versions (mostly on Linux-based systems) and why I don't understand the common advice for newcomers to start by installing RVM when all they really want is to program in Ruby.

If you are a beginner or someone who just wants to try Ruby, chances are you don't need a Ruby version manager. This is especially true on systems such as Fedora that always ship a recent Ruby version (time to switch to Fedora!), but also somewhat true elsewhere (even the latest Mac OS X/macOS has at least 2.0, and when I used Mac OS it even came pre-installed!).

What I don't find ideal is that so many Ruby tutorials start by installing a Ruby version manager (often RVM). I have seen beginners install RVM for a one-day I-just-want-to-try-it-out session. "But RVM is great!" you say? Perhaps, but every tool introduces some complexity. Why would you use a version manager, learning it and fighting it before you understand it, before you have actually programmed Ruby for a while?

What is even worse is that our community actively discourages the use of the packaged (system) Ruby. Yes, it's packaged for programs that actually need it as a dependency, but it's also there for everybody else! Big companies pay big money to be able to use tested and security-patched packages (be it Ruby or the Apache HTTP server) on enterprise platforms, and you won't even take 5 minutes to find out if it suits your needs?

Not only do I like to use system Ruby for development, I also use it for deployment when I can. Getting security patches for free with system updates is nice. Managing as many components as possible with the system package manager (such as DNF) is great, and that's why we have them. Yet it seems many people out there oppose the idea. "Lock the application dependencies!" they say.

For me the choice is about security (updates for free) and reduced complexity (not managing another tool in production, especially not a tool that suggests installing itself by running a remote bash script!).

So is there any disadvantage to this? Of course there is! If there weren't, people would not have invented version managers in the first place! For one, you might get a really old version with your system, sometimes so old that upstream no longer supports it (but the vendor might!). To battle this problem Red Hat offers newer versions as Software Collections, but true, then you have to go learn about scl-utils instead of RVM, which is not that ideal either.

The other disadvantage might be missing some features that more complex version managers offer, such as gem sets. In my opinion Bundler does a good job at putting the $LOAD_PATH together, so there isn't anything I really miss, but you might.
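For instance, Bundler alone can give you project-local gem isolation, no gem sets needed (standard Bundler usage; rake test stands in for whatever command your project runs):

$ bundle install --path vendor/bundle   # keep this project's gems under ./vendor
$ bundle exec rake test                 # run with only the Gemfile's gems on $LOAD_PATH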

Even though many people teach Ruby by telling newcomers about RVM, the official Ruby website actually advocates the right thing, which makes me quite happy!

And how do I manage my Ruby versions, you ask? If the Ruby in my Fedora does not meet a project's requirement, I use ruby-install to install new versions and chruby to switch between them. I am not claiming they are the best, but they are very simple, and that's what I like about them the most.
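For the record, that workflow looks roughly like this (2.3.1 is only an example version, and chruby.sh has to be sourced in your shell first):

$ ruby-install ruby 2.3.1   # builds into ~/.rubies/ruby-2.3.1
$ chruby ruby-2.3.1         # switch the current shell session
$ ruby -v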

I might also just change the Gemfile to be able to use my version of Ruby like this:

# Read the desired version from .ruby-version (stripping any "ruby-" prefix),
# unless PROJECT_RUBY_VERSION overrides it.
ruby_version_file = File.expand_path('../.ruby-version', __FILE__)
version = File.read(ruby_version_file).strip.gsub(/ruby-(.*)/, '\1') if File.exist?(ruby_version_file)
ruby ENV['PROJECT_RUBY_VERSION'] || version
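With that in place it works like this (version numbers here are only examples):

$ echo "ruby-2.3.1" > .ruby-version
$ bundle install                              # honors .ruby-version
$ PROJECT_RUBY_VERSION=2.0.0 bundle install   # the environment variable wins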

On production systems I always use packaged Ruby in my own projects.

In my mind I treat Ruby the same as Python, Perl or Node.js on my system. They come nicely packaged, tested and with security updates. Why should I fight something that actually works for me? Why should I install various version managers for various runtimes and learn how to use each one of them? In my opinion this does not scale well :).

The author is a Fedora contributor and a former Red Hat employee.

Video of the month
It's been a while since I posted a video of the month, so here you go: https://www.youtube.com/embed/psC6mk9ZTP4
Deploy Kubernetes with Ansible on Atomic

I've been playing with Project Atomic as a platform to run Docker containers for some time now. Why I like Project Atomic is a topic for another blog post. One of the reasons, however, is that while it's a minimal OS, it does come with Python, so I can use Ansible for orchestration and configuration management.

Now, running Docker containers on a single host is nice, but the real fun starts when you can run containers spread over a number of hosts. This is easier said than done and requires some extra services such as a scheduler, service discovery and overlay networking. There are several solutions, but one that I particularly like is Kubernetes.

Project Atomic happens to ship with all the pieces needed to deploy a Kubernetes cluster using Flannel for the overlay networking. The only thing left is the configuration, and that happens to be something Ansible is particularly good at.

The following describes how you can deploy a 4-node cluster on top of Atomic hosts using Ansible. Let's start with the Ansible inventory.

Inventory

We will keep things simple here by using a single file-based inventory where, for testing purposes, we explicitly specify the IP addresses of the hosts. The important parts are the two groups k8s-nodes and k8s-master. The k8s-master group should contain only one host, which will become the cluster manager. All hosts under k8s-nodes will become nodes that run containers.

[k8s-nodes]
atomic02 ansible_ssh_host=10.0.0.2
atomic03 ansible_ssh_host=10.0.0.3
atomic04 ansible_ssh_host=10.0.0.4


[k8s-master]
atomic01 ansible_ssh_host=10.0.0.1
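Before going further, it's worth checking that Ansible can reach all the hosts (assuming the inventory above is saved as a file named hosts; the playbooks below connect as the centos user):

$ ansible -i hosts all -m ping -u centos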

Variables

Currently these roles don't have many configurable variables, but we do need to provide the variables for the k8s-nodes group. Create a folder group_vars with a file named after the group. If you checked out the repository, you already have it.

$ tree group_vars/
group_vars/
    k8s-nodes

The file should have the following variables defined.

skydns_enable: true

# IP address of the DNS server.
# Kubernetes will create a pod with several containers, serving as the DNS
# server and expose it under this IP address. The IP address must be from
# the range specified as kube_service_addresses.
# And this is the IP address you should use as address of the DNS server
# in your containers.
dns_server: 10.254.0.10

dns_domain: kubernetes.local

Playbook

Now that we have our inventory, we can create our playbook. First we configure the k8s master node; once that is done, we configure the k8s nodes.

deploy_k8s.yml

 - name: Deploy k8s Master
   hosts: k8s-master
   remote_user: centos
   become: true
   roles:
     - k8s-master

 - name: Deploy k8s Nodes
   hosts: k8s-nodes
   remote_user: centos
   become: true
   roles:
     - k8s-nodes

Run the playbook.

  ansible-playbook -i hosts deploy_k8s.yml

If everything ran without errors, you should have a running Kubernetes cluster. Let's see if we can connect to it. You will need kubectl; on Fedora you can install the kubernetes-client package.

$ kubectl --server=192.168.124.40:8080 get nodes
NAME              STATUS    AGE
192.168.124.166   Ready     20s
192.168.124.55    Ready     20s
192.168.124.62    Ready     19s

That looks good. Let's see if we can run a container on this cluster.

$ kubectl --server=192.168.124.40:8080 run nginx --image=nginx
replicationcontroller "nginx" created

Check the status:

$ kubectl --server=192.168.124.40:8080 get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-ri1dq   0/1       Pending   0          55s
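If the pod shows as Pending, kubectl describe lists the pod's recent events and can tell you what the scheduler is doing (pod name taken from the output above):

$ kubectl --server=192.168.124.40:8080 describe pod nginx-ri1dq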

Usually, a pending pod just needs a few moments. If this is the first time the nginx container image is run, it needs to be downloaded first, which can take some time. Once your pod is running you can try to enter the container.

kubectl --server=192.168.124.40:8080 exec -ti nginx-ri1dq -- bash
root@nginx-ri1dq:/#

This is a rather basic setup (no HA masters, no auth, etc.). The idea is to improve these Ansible roles and add more advanced configuration.

If you are interested and want to try it out yourself you can find the source here:

https://gitlab.com/vincentvdk/ansible-k8s-atomic.git

Fedora @ EuroPython 2016 - event report

DSC_1342.JPG

I am just sitting here, unsure how to even start describing probably the best conference I have ever attended! I was given the privilege to attend the biggest Python conference in Europe, to meet the Python community and to see what is new in the Python world, and the best part was that I was representing the Fedora Project.

Five days, hundreds of smiling faces asking about Fedora and Red Hat, great food, the ocean, the beautiful city, everything was there!

Here follows my personal event report, and I will try to keep it short, but no promises! :)

 The booth

DSC_1233.JPG

DSC_1243.JPG

Cntc1fPXEAEea13.jpg

Fedora (represented by Michal from Red Hat Brno and myself) was sharing a booth with Red Hat, which was great, because when someone is interested in Fedora you can also talk to them about Red Hat, and vice versa.

Jiri and Marta, recruiters from Red Hat Brno, were amazing. You can clearly see that they are really good at this sort of thing. They came with four (I think) big boxes of booth materials, swag and all sorts of goodies, which really made our booth stand out. In my opinion (which is clearly biased), we had the prettiest booth at EuroPython. Jiri and Marta were also very well informed, even about Fedora, and they answered tons of different questions. They also had a form you could fill in to be informed about job openings at Red Hat, which was extremely helpful.

The first day of the conference was a bit crazy. We had tons and tons of people coming and asking all sorts of questions about jobs at Red Hat and, of course, about Fedora. I was surprised by the many questions about running Fedora in a container (with Project Atomic), and especially by my favorite Fedora question of the conference - what does the hot dog symbolize in the development builds. :)

The other days were just a bit quieter, but still many people were interested in what we have to offer. I don't think there were many EuroPython attendees who didn't visit the Red Hat/Fedora booth at least once. :)

We had many "Fedora <3's Python" t-shirts, so Michal came up with the idea to make a Fedora Badge: whoever showed it to us got a t-shirt. The Fedora Badges server was a bit slow, but in the end I think we handed out about 100 t-shirts. :)

Also, the Python Developer Guide to Fedora flyer was really, really well designed and beautiful. We handed out a lot of those. They were also very informative; I was amazed how much information fit on that one piece of paper. Really good job by the designers!

I met quite a few Fedora users, and I'm proud to say that I think I convinced a number of non-Fedora users to give it a try. The questions varied from the usual ones ("Why Fedora over something else?", "What is the relationship between Fedora and Red Hat?") to advanced ones like "Does Fedora ship security patches separately?" and "What parts of Fedora are written in Python?".

Overall, it was awesome being at the booth with Jiri, Marta and Michal, although tiring at times. I loved talking with so many people about Fedora and how it can work for them. Our next-door booth neighbors were awesome folks from TravelPerk (a business travel webapp) and JetBrains, whom I annoyed with many questions and who gave me many awesome stickers!

 The conference

EuroPython 2016 was held at a beautiful venue called Euskalduna Conference Center. You could immediately tell that it is really made for big conferences. It was really well thought out.

I attended a few talks when the booth was quieter and Michal was covering it. I really loved the Friday talks about deep learning and neural networks and found them very informative. I also attended some really interesting talks, such as how Disney uses Python, how you can use Python to recognize musical notes, and how LIGO (the gravitational-wave research lab) uses Python. The quality of the talks was really high, and the speakers were always there afterwards to answer all the questions.

The conference was really well organized, with many side events like a visit to a cider house, a pintxos night, a Pokemon Go contest, micro:bit giveaways etc. Every day there were short announcements followed by a keynote and then the rest of the talks, workshops and trainings. There were also lightning talks at the end of each day, which I sadly only attended once. Also, everyone got a goodie bag with a beach towel, a conference guide, a micro:bit (wow!) and much more, which was very nice!

The Python community is really amazing. Everyone is friendly, helpful and willing to talk about a great variety of subjects. It's really easy to just go and talk to someone, and I loved that!

DSC_1351.JPG

On Saturday and Sunday, there were also sprints at a different venue - pictured above. The concept was new to me, but I really liked the idea of many developers just getting together and working on stuff. Sadly, I had to leave early to pack for my return trip.

 Bilbao

DSC_1275.JPG

Bilbao is awesome, no other word for it. It's probably one of my favorite cities ever. Architecturally, it is a mix of modern and old, similar to Budapest, Vienna and other big European cities, but the feel is completely different. The city's small streets with many local shops, its many bridges (some very modern, some ancient-looking) and the bits of modern art everywhere really set it apart. And the people are very different (in a good way). I have never experienced a big city as calm and relaxing as Bilbao. Honestly, you have to visit to see what I'm on about! No one is in a rush, or shouting, or angry. People drive slowly, stop in the middle of the road to say hi, block the entire street, and still no one, and I repeat absolutely no one, is angry or rushing them.

You always hear horror stories about tourist cities where everyone is trying to "get you"; with Bilbao it's the complete opposite. It could be that it is more a business city than a tourist city (though by the number of museums, sights and cafes it definitely isn't), or maybe it's just the calm and relaxed people who make it that way. You get a sense that someone, a long time ago, said: let's make a city and do everything right.

It's not directly on the ocean, but the metro will take you to the beach of your choice in less than 30 minutes. The buildings are mostly low, with the exception of a few skyscrapers. The city is astonishingly well connected, with metro, buses, trams and taxis. The airport is behind the hill (with a tunnel underneath), so the noise levels are minimal. You get the picture, I hope. :)

DSC_1372.JPG

I also visited the Guggenheim museum (pictured above), with a ticket kindly provided by the EuroPython organizers at a discount. Although I don't know anything about modern art, there were a few paintings I really liked, and the building is amazing. It's rude to say, I know, but the building was more artful to me than anything in it. It's probably just me not understanding anything inside, though (including how the elevator works)…

DSC_1318.JPG

After the conference day ended on Friday, we visited the beach. You basically take the metro and just get off when you can see the sea. Very easy. The beach pictured above (just a few hundred meters from the Bidezabal metro stop) was breathtaking! I have never been to such a long, clean and beautiful beach. It isn't even the largest one, but someone on the EuroPython Telegram group mentioned it (thanks!), so we decided to visit, and it was awesome.

DSC_1253.JPG

Obviously, if you visit Bilbao, you have to try pintxos! They are everywhere, in almost every bar! A pintxo is like a sandwich evolved: big and extremely tasty. You can have them with anything - seafood, ham, cheese, you name it! I think I can safely say that any of us would recommend a small bar called La Sonema, where we had some of the best pintxos ever! And a beer with your pintxo is a must.

 The flight

I have to mention that this was my first time changing planes. It's really much less scary than everyone says - at least it was for me. You don't have to worry about your luggage (at least with Lufthansa), and if the airport where you change planes is well organized (like Munich), you can probably get from one end to the other in about 20 minutes. My first flight from Belgrade to Munich was delayed by about 25 minutes, and I still made my connection (even though I had to pass through two scanner gates - non-EU stuff) despite having only 45 minutes to change planes. So my advice is: follow the signs for your gate, don't be nervous (like me), and you will get there in time quite easily. On the return trip I even had to take the train underneath the Munich airport to get to the other terminal (which was scary), but I still made it, with time to spare.

 The room

I stayed in a very nice Airbnb apartment, which Michal found, about 10 minutes from the conference venue. We were staying with Slavek and Petr from Red Hat Brno (awesome guys!). I had some issues with my bed the first night, but the guys helped me get another one from a different room, and everything was great afterwards! We even had a couple of big supermarkets nearby, so everything was very easy.

 tl;dr

The Python community is awesome, and I had a blast at EuroPython 2016! I learned a lot of new stuff, including how to make these:

download.png

I loved representing Fedora and I hope I will do it again in the future. I’ve met some great people from Red Hat Brno and I hope I will meet them again soon. Till then! :)

Convert iPhone contacts to vCard

During a recent troubleshooting attempt, I lost all the contacts on my Android phone. It had also received a recent update which took away the option to import contacts from another phone via Bluetooth.
I still had some contacts on the old iPhone, but with mass transfer via Bluetooth gone, I was looking at manually sending each contact in vCard format to the Android phone. That meant I should probably find a less dreadful way to get the contacts back.

Here is one way to extract contacts en masse from an iPhone into the popular vCard format. The contact and address details in iOS are stored by the AddressBook application in a file named 'AddressBook.sqlitedb', which is an SQLite database. The idea is to open this database using sqlite, extract the details from a couple of tables and convert the entries into vCard format.

Disclaimer: the iPhone is an old 3GS running iOS 6, and it is jailbroken. If you attempt this, your mileage may vary. The required tools are usbmuxd (specifically libusbmuxd-utils) and sqlite, with the prerequisite that an OpenSSH server is running on the jailbroken iPhone.

  1. Connect the iPhone via USB cable to the Linux machine. Run iproxy 2222 22 to connect to the OpenSSH server running on the jailbroken phone. iproxy comes with the libusbmuxd-utils package.
  2. Copy the address book SQLite database from the phone: scp -P 2222 mobile@localhost:/var/mobile/Library/AddressBook/AddressBook.sqlitedb .

    Instead of steps 1 and 2 above, it might be possible to copy this file using the Nautilus (gvfs-afc) or Dolphin (kio_afc) file manager, although I'm not sure if the file is accessible.

  3. Extract the contact and address details from the sqlite db (based on this forum post): sqlite3 -cmd ".out contacts.txt" AddressBook.sqlitedb "select ABPerson.prefix, ABPerson.first, ABPerson.last, ABPerson.organization, c.value as MobilePhone, h.value as HomePhone, he.value as HomeEmail, w.value as WorkPhone, we.value as WorkEmail, ABPerson.note from ABPerson left outer join ABMultiValue c on c.record_id = ABPerson.ROWID and c.label = 1 and c.property = 3 left outer join ABMultiValue h on h.record_id = ABPerson.ROWID and h.label = 2 and h.property = 3 left outer join ABMultiValue he on he.record_id = ABPerson.ROWID and he.label = 2 and he.property = 4 left outer join ABMultiValue w on w.record_id = ABPerson.ROWID and w.label = 4 and w.property = 3 left outer join ABMultiValue we on we.record_id = ABPerson.ROWID and we.label = 4 and we.property = 4;"
  4. Convert the extracted contact details to vCard format: cat contacts.txt | awk -F\| '{print "BEGIN:VCARD\nVERSION:3.0\nN:"$3";"$2";;;\nFN:"$2" "$3"\nORG:"$4"\nEMAIL;type=INTERNET;type=WORK;type=pref:"$9"\nTEL;type=CELL;type=pref:"$5"\nTEL;TYPE=HOME:"$6"\nTEL;TYPE=WORK:"$8"\nNOTE:"$10"\nEND:VCARD\n"}' > Contacts.vcf
  5. Remove the empty content lines where contacts do not have all the different fields: sed -i '/.*:$/d' Contacts.vcf

Now simply transfer the Contacts.vcf file containing all the contact details to the Android phone's storage and import the contacts from there.
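For reference, each entry the awk script writes into Contacts.vcf has this shape (all details here are invented):

BEGIN:VCARD
VERSION:3.0
N:Doe;John;;;
FN:John Doe
ORG:Acme Corp
EMAIL;type=INTERNET;type=WORK;type=pref:john.doe@example.com
TEL;type=CELL;type=pref:+15550100
TEL;TYPE=HOME:+15550101
TEL;TYPE=WORK:+15550102
NOTE:Met at EuroPython
END:VCARD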


Tagged: hacking, linux, mac

July 29, 2016

LastPass 0Day — Why Using cleartext tokens in the URL is bad practice.

Source: lastpass password manager tell all

This is yet another reason why sanitizing OAuth and other token URLs down to the minimum needed to resolve (the hostname) is good practice.

So exactly what is the issue at hand?

LastPass, like most password managers that connect to a sync or cloud mechanism in some way, uses a cookie of sorts on all sites you set up with autofill (no typing needed, a great defense against keyloggers). The issue is that the parser that determines whether such a site is accessed/logged in leaves cleartext tokens in the URL, and accepts a malformed URL of the form username:password@foo.tld, e.g. johndoe/mypassword@facebook.com. This allows an attacker on a machine that is logged in (without 2FA - more on this later) to spill the beans about all passwords in two ways.

Method 1: Log in to, or access, a machine that is logged in and not locked (lock screens are useful, folks), open the password store without any further password/credential prompts, and click 'show password' - the jig is up. As alluded to earlier, if 2FA (two-factor auth) is enabled this is thwarted, as anything related to the account or password store requires the secondary challenge.

Method 2: Type in the username (stored in plaintext in the password store) and the target site, and the password becomes visible in plaintext in the URL.

The really scary part is that two security researchers have now exposed these attacks and it's still unpatched.

Original article courtesy of https://www.thehackernews.com


Filed under: 0Day /Vulns, Community, Current Events, PSAs, Security Tagged: 0day, Community, LastPass, PasswordManagers, Secure Passwords, Security
New badge: Let's have a party (Fedora 24)!
You organized a party for the release of Fedora 24
New badge: Design Ninja 2!
You participated in the Design Team FAD, 2016
GSoC 2016: Moving towards staging

This week wraps up July, and the last period of Google Summer of Code (GSoC 2016) is almost here. As the summer comes to a close, I'm working on the last steps of preparing my project for deployment into Fedora's Ansible infrastructure. Once it checks out in a staging instance, it can make the move to production.

Next steps for GSoC 2016

My last steps for the project are moving it closer to production. Earlier this summer, the best plan of action was to use my development cloud instance for quick, experimental testing. Once a point of stability was reached, it would be tested on a staging instance of the real Fedora Magazine or Community Blog. Once reviewed and tested, it would work its way to production for managing future installations and upgrades of any WordPress platform in Fedora.

When the time comes to move it to production, I will file a ticket in the Infrastructure Trac with my patch file to the Ansible repository.

One last correction

One sudden difficulty I've run into is using the synchronize module in my upgrade playbook. Originally I was copying and replacing the files with the copy module, but I found that synchronize, which uses rsync, offers a better solution. However, after switching, I ran into a small error that had me hung up.

When running the upgrade playbook, it triggered an issue where rsync requires a TTY session to work as a privileged user. I found a filed bug for this in the Ansible repository. Fixing it requires setting a specific flag in the server configuration when using rsync. To avoid doing that, I altered my upgrade playbook to avoid depending on the root user and to instead rely on user and group permissions for the wordpress user. I'm still smoothing out a few minor hiccups with the synchronize module, mostly the destination directory not being found when the module executes, even though it exists.
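For illustration, the shape of the task I ended up with is roughly this (paths and variable names are made up, not the actual playbook):

# Sketch only: push the files with rsync via the synchronize module,
# running the transfer as the unprivileged wordpress user instead of
# root, so no TTY-requiring sudo+rsync combination is involved.
- name: Synchronize WordPress files to the web root
  synchronize:
    src: "{{ wordpress_build_dir }}/"
    dest: /var/www/wordpress/
  become: true
  become_user: wordpress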

Flock 2016

On Sunday, I’ll be flying out to Poland for Flock 2016, Fedora’s annual contributor conference. During Flock, I’ll meet several other Fedora contributors in person, including my mentor. We plan to set up the staging instance either later tonight or during Flock, depending on how time ends up going.

I'll also be delivering a talk and hosting a workshop during the week! One of the workshops I'm hoping to attend is the Ansible best practices working session. I'll be seeing if there's anything I can glean there to build into the last week of the project.

The post GSoC 2016: Moving towards staging appeared first on Justin W. Flory's Blog.

Guided installation of Debian 8.0 #LPIC
One of the topics covered by the LPIC is the installation of the various distributions such as Debian, Fedora, RHEL (Red Hat Enterprise Linux), Ubuntu... and today it's time to talk a bit about Debian and how to install it in non-graphical mode.



According to the official Debian documentation, this is an organization started in 1993 by Ian Murdock, who sent out a call to software developers to create a distribution based on Linux, thus founding the Debian project.

The project is made up entirely of volunteers and enthusiasts who produce and develop free software, promoting the ideals of this magnificent and supportive community.

There are currently several development branches in Debian. Besides the well-known, reputable distribution based on the Linux kernel, there is also Debian/Hurd, which uses Hurd as its kernel.


Some time ago there was also Debian/kFreeBSD, which used a FreeBSD kernel adapted to work with the GNU libraries and the APT toolset, and Debian/NetBSD, which used a kernel adapted from the NetBSD operating system, also with the APT toolset. Both projects are now unmaintained.

As for hardware architectures, support is broad, including:

  • i386
  • amd64
  • armel
  • armhf
  • mips
  • mipsel
  • powerpc
  • ppc64el
  • s390x
Remember to select the one your CPU supports, or the installation or live medium will not boot.


Starting the installation

We begin with the installation. The following installation is based on the x86_64 architecture, which is so prevalent nowadays. Along the way we will also learn how to set Debian up with multilib support, as we will see later.

Main screen


When we boot from the USB stick or the CD/DVD, we will see the following screen:


We have the following options:
  • Install: Starts the installer in non-GUI, ncurses-style mode.
  • Graphical install: Starts the installer in graphical mode
  • Advanced options: Advanced boot and/or installation options
    • These include:
      • Go back
      • Expert install
      • Rescue mode
      • Automated install
      • Expert install with graphical interface
      • Rescue mode with graphical interface
      • Automated install in graphical mode
  • Help: Shows a help screen that we will look at below
  • Install with speech synthesis: An installation method that narrates the installation to you via speech synthesis.

This is the help menu, where we can see options such as the following:
  • [F2] Prerequisites for installing Debian:
    • We will need at least:
      • 105 MB of RAM
      • 680 MB of disk space to install the *base* system, plus additional space for other kinds of packages.
      • For more information, see the guide on the project's official website.
  • [F3] Boot methods:
    • install: Start the installation in non-GUI mode, the default method
    • installgui: Installation in graphical mode
    • expert: Expert installation in non-GUI mode for greater control
    • expertgui: The same as the previous option, but in graphical mode
    • Kernel parameters can be appended after the installation method
  • [F4] Rescue mode
    • rescue: Boot into non-graphical rescue mode
    • rescuegui: The same as the previous option, but in GUI mode
  • [F5] An overview of the boot parameters (covers the same as options F6, F7 and F8)
  • [F6] Parameters for special machines
    • Various options such as support for the IBM PS/1 or ValuePoint (IDE disk); several IBM ThinkPads; laptops with screen resolution problems; serial ports... etc.
  • [F7] Boot parameters for various types of disk controllers
    • An example list of all the controllers that can be used, such as Adaptec, BusLogic or certain DELL machines. The complete list is in the text file kernel-parameters.txt
  • [F8] Parameters understood by the installation system
    • Parameters to disable the framebuffer; skip PCMCIA; force static network configuration; set the keyboard map or model; select the type of desktop environment; and accessibility options such as high-contrast themes, a Braille tty or a speech synthesizer.
  • [F9] Getting help
    • Via the project's official website
  • [F10] Copyrights and warranties
    • Mentions the project's copyright, that it ships without warranties, and that each package usually has its license under /usr/share/doc/package_name/copyright
There is no way to return to the initial menu short of rebooting. If we press Enter, we enter the default installation mode, which is non-GUI, and with which we will carry out the installation step by step:


We proceed to select the language for our system. This entails installing documentation such as man-pages in Spanish, spell checkers, the encoding configuration... which will be fetched later on.


We select where we are; this helps determine our time zone.


We configure the keyboard map.


In this step we have to give our device a name that identifies it on the network. If we are at home, we can simply make one up.



This is not relevant either if we are at home and there is no domain controller. We can even leave it empty.


We assign a password for the superuser. To put it in perspective: imagine you go to your bank and use the ATM to operate on your accounts, but you cannot access any account other than your own.

The bank manager, however, can access your account and see your transactions and banking data. He can even impose financial penalties on you and, hypothetically, withdraw funds from your account.

Well, that is the superuser. For greater security, UNIX and UNIX-like systems add one user that has total access, and then add other users that do not have those privileges. We will see later how individual privileges are granted and how system users are managed.

It is preferable that the password you assign be hard to guess, mixing lowercase, uppercase, numbers and other kinds of characters, with a length of at least 8 characters, while still being easy to remember.


You will go through a simple check to verify that you did not mistype the password.


In this step we can enter our name, full name or whatever you like; it does not affect the performance and/or operation of the unprivileged user.


Here things change: we have to use a name, surname, full name and/or nickname, or whatever you like, as long as it is valid and complies with the rules for user names. In this case, Debian does not allow spaces, special characters and/or uppercase letters.
This is the user you will use for everyday work on the system.


We choose a password for our everyday user. It can be a little easier than the superuser's; but equally, my recommendation is 8 characters including uppercase, lowercase, numbers and some special character.


Next we select the time zone we are in. Since I live in the Canary Islands, that is what I select.


This is the main screen for partitioning the hard drive or storage medium in question and continuing with the Debian installation.
If there is no data on the disk, you can choose among these options:
  • Guided - use the largest contiguous free space
    • *Only available if there are partitions and free space.
  • Guided - use entire disk: Deletes all partitions and generates a scheme for installing only Debian
  • Guided - use entire disk and set up LVM: LVM (Logical Volume Manager) is a way of grouping storage devices or partitions (called physical volumes) into one big pool. This method is best used on servers, or if you plan to add more devices to your PC. It is one of the topics we will cover in later articles.
  • Guided - use entire disk and set up encrypted LVM: The same as the previous item, but adds strong encryption so the partitions cannot be mounted without decrypting them first.
  • Manual: Lets us configure the partitioning scheme our own way
We will choose manual.


When choosing manual, we have several options available:
  • Guided partitioning: Takes you back to the previous screen.
  • Configure iSCSI volumes: A type of network storage device that I don't have, so we skip it.
  • SCSI1....: The SATA hard drive where I am going to install. *We will choose this option.


If this is the first time you connect the storage device and it has neither a filesystem nor a partition table, Debian will ask you to create one before continuing.


We see that we can now start adding partitions. We will ignore the options at the top, since we will not need them in this installation mode.

To create a partition, we just place the cursor as shown in this image.


We are offered 3 options:
  • Create a new partition: The one we will use
  • Automatically partition the free space: Generates a partition layout automatically
    • It will ask whether you want everything in /; a separate /home; or separate /home, /var and /tmp.
  • Show Cylinder/Head/Sector information: Displays those three pieces of information about the disk.

We assign whatever size we want to the partition we are creating and continue.


The MS-DOS partition table is limited to 4 primary partitions. If you plan to have more than one operating system, you may prefer to create logical partitions instead of primary ones. Since we are going to use only Debian on our disk, we select primary.

This will be our scheme on an 80 GB disk:
  • /boot - ext4 - 2 GB
  • swap - swap - 2 GB
  • / - ext4 - rest of the space

In our scheme it does not really matter whether a partition goes at the beginning or at the end, so we can skip this step.


This is an example of what our system's root partition will look like. There are options, such as the reserved blocks percentage or the typical usage, that we will not discuss in this article.


Example of the /boot partition


Example of swap (the swap area): if the system runs out of RAM it will start using this partition; swap also plays a special role in suspend and hibernation.


This is what our scheme looks like; now we save the changes and accept formatting the partitions.


The base system will start installing onto our storage device.


The installer asks whether we want to load another CD/DVD to add it as a repository. Since we will do the installation over the network, we answer no.


We answer yes to start selecting the mirror we want.




It asks whether we have a proxy on our network. If we do not, we ignore the message by pressing Enter; otherwise it has to be configured.


It starts downloading the repository metadata.


If you want to contribute to this survey, answer yes; otherwise, no.


The following window appears. Select what you want on your system and continue the installation. In my case I kept the preselected options and added the MATE desktop environment.


Once the package installation finishes, the installer asks whether you want to install GRUB to the MBR or to a partition. In our case we will use the MBR, answering yes.




Select the disk and/or device where you want to install GRUB. In our case it is /dev/sda, the only one we have.


Once we get this message, we can breathe easy. The hardest part is over, and we finally have Debian installed on our PC. A few seconds after the computer reboots, Debian will start for the first time.




Since I selected MATE, it looks like this:



There we have our Debian installed, awaiting instructions to start working.

References

  • Debian.org
  • Eni - Preparación a la certificación LPIC-I
Shrinking the title bars of GTK3 applications
Please also note the remarks on the HowTos!

If the title bars of GTK3 applications are too fat for your taste, you can put them on a little diet fairly easily.

All you need to do is add the following code to the file ~/.config/gtk-3.0/gtk.css (which may have to be created first):

.header-bar.default-decoration {
 padding-top: 3px;
 padding-bottom: 3px;
 font-size: 0.8em;
}

.header-bar.default-decoration .button.titlebutton {
 padding: 0px;
}

window.ssd headerbar.titlebar {
    padding-top: 4px;
    padding-bottom: 4px;
    min-height: 0;
}

window.ssd headerbar.titlebar button.titlebutton {
    padding: 0px;
    min-height: 0;
    min-width: 0;
}

After restarting the GNOME Shell, or logging out and back in, the title bars of GTK3 applications should be noticeably slimmer and take up less space on the screen.

Source: Adam Šamalík's blog (here and here)

Getting involved with Fedora Quality Assurance

Fedora is a large project with several different teams and groups working on the distribution every day. Quality assurance, abbreviated to QA, is an important and fundamental part of making a release of Fedora successful. To make sure everything works and performance keeps improving, testing and quality checks are a must. In Fedora, the user experience is crucial, as thousands and thousands of people expect a stable, feature-enriched operating system that gets better with each release.

The Fedora Quality Assurance (QA) team is the group of Fedora contributors that helps cover testing of the software that makes up Fedora. Using various test cases and different hardware, the team works through important software in the distribution and helps make sure it behaves as expected. Despite the importance of the work, getting involved isn't too difficult.

Joining Fedora Quality Assurance

Fedora contributor and QA team member Sumantro Mukherjee recently wrote a three-part series on how to join the team. The first article mostly introduces the work and important tools for all team members. It also shows places to get involved and take part in conversations about testing and quality assurance.

Getting started with Fedora QA (Part 1)

https://communityblog.fedoraproject.org/getting-started-fedora-qa-part-1/

The second part goes into setting up your work environment with a virtual machine and how to test new versions of packages awaiting feedback. In this article, you will be working with Rawhide, the ever-changing development version of Fedora.

Getting started with Fedora QA (Part 2)

https://communityblog.fedoraproject.org/getting-started-fedora-qa-part-2/

The third and last part of the series reviews test cases and how to report your findings to the community. That article appears next Tuesday, so keep an eye out for the conclusion!

Get in touch with QA

Come say hello to the Fedora Quality Assurance team! You can find them on their mailing list and you can also interact with the global community on IRC.


Debug courtesy of Lemon Liu, Crash Test Dummy courtesy of James Keuning (from the Noun Project)

Espartanos from the Fedora Brazil team, featuring filiperosset

July 28, 2016

Community Testing Day on August 5th

The Pulp 2.10 GA is scheduled for Aug 16, 2016 (https://pulp.plan.io/projects/pulp/wiki/Release_Schedule). To deliver a robust release, the Pulp QE and Pulp teams have organized a Community Test Day where the Pulp community can come together to test Pulp. This post details the event and how to participate in it.

What is a Community Testing Day and why host it?

According to the Fedora blog, organizing a test day "is a great way to help expose your project and bring more testers to try a new update before it goes live". The main objective of hosting a Pulp Test Day is to identify as many issues as possible early on, with the help of the community. Once identified, important issues can be fixed before the GA release, leading to a more stable release.

What to test?

The focus of the test day will be the new features included in Pulp 2.10. The issues fixed in 2.10 can also be verified that day. A list of these features will be provided, and participants can sign up for the features they want to test. Attendees are welcome to test existing features as well. Members can also participate by adding tests to our automated test suite (Pulp Smash) or by submitting fixes for reported bugs. All issues and bugs will be filed and triaged to prepare for the 2.10 GA.

When?

The Pulp QE team has tentatively decided to host the day on Aug 05, 9am-5pm EDT.

Where?

The main communication channel will be IRC. Participants are encouraged to join the #pulp and #pulp-dev channels on irc.freenode.net. The Pulp mailing list, pulp-list@redhat.com, can also be used. To expand the audience of this Test Day, members of the open source community as well as downstream users will be notified via email.

Getting Ready

Here are some instructions for installing or updating Pulp.

  • Optionally, you can use Ansible to install Pulp onto Fedora 23 with the following commands:

    #!/usr/bin/env bash
    set -euo pipefail
    dnf update -y
    dnf install -y git vim ansible python-dnf libselinux-python
    mkdir code
    cd code
    git clone https://github.com/pulp/pulp_packaging
    cd pulp_packaging
    echo localhost > hosts
    ansible-playbook -i hosts --connection local -e pulp_version='2.9' ci/ansible/pulp_server.yaml

  • You can also update your current, non-production Pulp to the Pulp 2.10 beta. Here is a link describing how to upgrade to 2.8; you can follow the same instructions to upgrade to 2.10: http://docs.pulpproject.org/user-guide/release-notes/2.8.x.html#upgrade-instructions. It is important not to use a production environment. Also note that Pulp does not support upgrading from a beta or RC to a newer beta, RC, or GA version; in other words, you'll end up deleting your Pulp testing environment after you are done.
  • If you don't have your own instance of Pulp on hand and you are an internal Red Hat associate, we can provide you with a system; just check in with preethi or chongrui on IRC.
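Whichever route you take, once Pulp is up you can do a quick sanity check against the status API (assuming a default installation answering on localhost; -k skips certificate verification):

  curl -k https://localhost/pulp/api/v2/status/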

How to test

Now that you have a running instance of Pulp 2.10, you can start testing the new features and looking into the existing issues that can be verified.

Sign up to test a new feature

Please enter your name, the features you are targeting and your OS/platform information in the spreadsheet. After your tests, you can record the results in the spreadsheet as well: https://docs.google.com/spreadsheets/d/1XgD43QI7ELYWFI_eeZdWqHwtCZDywMX-uGTrSZJvPX0/edit#gid=12432565

Automate tests

The Pulp QE team uses a Python-based testing framework to automate feature and regression testing. This framework is called Pulp Smash, and if you're interested in contributing your own automated tests for some of the new features, here's how to get started:

Install Pulp Smash

Pulp Smash can be easily installed onto your system. Simply follow the official documentation found here: http://pulp-smash.readthedocs.io/en/latest/index.html. If you want a different take on how to install it, you can also read this post from our intern.

Run the tests

Once you have Pulp Smash installed and configured, we highly recommend that you read through http://pulp-smash.readthedocs.io/en/latest/about.html#contributing for a quick introduction to the process of contributing automated tests. Then you can:
  • Manually test Pulp 2.10 beta API/CLI features.
  • Develop Pulp Smash tests to help automate testing.
  • Pick one of the existing test cases that need to be automated here: https://github.com/PulpQE/pulp-smash/issues.

Reporting bugs and issues

While playing with the Pulp 2.10 beta, you may encounter an issue or something that does not quite match your expectations or the existing documentation. For these cases we kindly ask that you help us by filing an issue in the Pulp tracker at https://pulp.plan.io (or, for test-suite problems, at https://github.com/PulpQE/pulp-smash/issues).

Test Day Report

After the completion of Test Day, Pulp QE will deliver a summary via email containing:
  • What new features were tested
  • What existing features were tested
  • The number of Pulp issues opened
  • The number of Pulp Smash issues created
  • The number of PRs/automated tests submitted
  • The number of Pulp issues closed/reopened

Also, there will be a follow-up blog post on pulpproject.org with this summary.

Who’s available

The following Pulp QE members will be available on IRC (freenode) to provide help and guidance with any questions you may have, as well as to help with your tests, share known workarounds, or just chat about the awesome new features:
  • Chongrui Zhang (chongrui)
  • Preethi Thomas (preethi)
The following Pulp QE members can help out with Pulp Smash automation as well:
  • Chongrui Zhang (chongrui)
  • Elyezer Rezende (elyezer)
The Pulp Development team will also help out by answering questions on IRC.
We expect Pulp 2.10 to be an amazing new release, and we look forward to seeing you join us on Test Day!
Pulp QE Team


Fedora PSA: Why is my account listed as spamcheck_* (Updated)
In my previous post I talked about the fact that Fedora infrastructure has been hit by a large amount of spam, which caused us to put in extra circuit-breaker programs to try and slow down account creation and web spam. After 3 months of constant attrition, we eventually had to change the wiki policy so that people with new accounts cannot create pages without being sponsored by an existing group. That caused an immediate drop in web spam and, 12 hours later, a drop in account creation - but not a cessation of either, which has made it impossible to remove the circuit breakers. My guess is that some number of accounts are being stockpiled for when the wiki permissions are opened again, so that the wiki spam can resume.


Above is a graph of account creation for 2016. Normally we see spikes around conventions and similar events, but in March we saw a large number of accounts created, many of which were not used until June. At that point the accounts were flagged as spam when they created wiki pages for various "Quickbooks", "Norton Antivirus", "Cheap Electricity" or "Emergency Printer Support" scams, and one of the circuit breakers tagged the accounts for lockdown. Other accounts created from the same IP addresses were never used, so my guess is that those accounts were forgotten or were to be used for some other "campaign" in the future.

If you do find your account listed as spamcheck_denied or spamcheck_manual, you can contact admin@fedoraproject.org; please include the following information:
  1. The email address you used for creating the account. [We have a lot of people who use one account name for account creation and another to report the problem. This makes tracking down the account info easier for us.]
  2. The account name you requested.
  3. If possible, the IP address you used when creating the account. The website icanhazip is extremely useful here.
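For example, from a terminal:

  $ curl icanhazip.com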
We may request other information that you entered, to confirm that it is really you requesting the spamcheck cleanup.

Sprint Demo 5 — July 28, 2016

See some of the great features going into Pulp 2.10. This is a sprint demo of Sprint 5, which is the second sprint of 2.10 work.

Welcome (0:00)
State of Pulp (1:07)
Resource Manager Failover (4:28)
Sync from Private Docker Repository using Basic Auth (14:39)
Docker Rsync Demos (17:39)
Custom Checksum Type in updateinfo.xml (26:34)
RPM Sync --force-full flag (30:50)

https://player.vimeo.com/video/176636191

Pulp Sprint Demo 5 — 7_28_2016 from Brian Bouterse on Vimeo.

OpenSCAP XSLT performance improvements for faster SSG builds

As I contributed more and more patches to SCAP Security Guide, I got increasingly frustrated with the build speeds. A full SSG build with make -j 4 took 2m21.061s, and that's without any XML validation taking place. I explored a couple of options for cutting this time significantly. I started by profiling the Makefile and found that a massive amount of time was spent on two things.

Generating HTML guides

xslt_optimization_html_guide_chart

We generate a lot of HTML guides as part of SSG builds, and we do that over and over for each profile of each product. That's a lot of HTML guides in total. Generating one HTML guide (namely the RHEL7 PCI-DSS profile from the datastream) took over 3 seconds on my machine. While not a huge number, this adds up with all the guides we generate. Optimizing HTML guides was the first thing I focused on.

I found that we were often selecting huge nodesets over and over instead of reusing them. Fixing this brought the times down roughly 30%. I found a couple of other inefficiencies and was able to save an additional 5-10% there. Overall I optimized it roughly 35-40% in common cases.
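The fix follows one generic pattern: evaluate the expensive XPath once into a variable and reuse it. A simplified sketch, not the literal SSG stylesheet code:

<!-- before: the same document-wide selection re-evaluated at every use -->
<!-- <xsl:for-each select="//cdf:Rule[@selected = 'true']"> ... -->

<!-- after: select once, reuse the cached nodeset -->
<xsl:variable name="selected_rules" select="//cdf:Rule[@selected = 'true']"/>
<xsl:for-each select="$selected_rules">
    ...
</xsl:for-each>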

During the optimization I accidentally fixed a pretty jarring bug regarding refine-value and value selectors. We used to select a big nodeset of all cdf:Value elements in the entire document, then select all the cdf:value elements inside them and choose the last one based on the selector. This is clearly wrong: we need to select the right cdf:Value by its ID and only then look at its selectors. Fixing that made the transformation faster as well, because the right cdf:Value was already pre-selected.

Old XSLTs:

$ time ../../../shared/utils/build-all-guides.py -j 1 --input ssg-rhel7-ds.xml
real 0m16.736s
user 0m16.349s
sys  0m0.397s

New XSLTs:

$ time ../../../shared/utils/build-all-guides.py -j 1 --input ssg-rhel7-ds.xml
real 0m11.203s
user 0m10.836s
sys  0m0.379s

Transforming XCCDF 1.1 to 1.2

xslt_optimization_xccdf11_12_chart

It took 30 seconds on my machine to transform the RHEL6 XCCDF 1.1 content to 1.2. That is way too much for such a simple operation; clearly something was wrong with the XSLT transformation. As soon as I profiled it using xsltproc --profile, I found that we were scanning the entire DOM over and over for every @idref in the tree. That is just silly. I fixed it by using xsl:key, so the very same @idref-to-element mapping is used for all lookups. This saved a lot of time.
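In XSLT terms the change looks roughly like this (names are illustrative, not the actual stylesheet):

<!-- build an index of every element by its @id, once -->
<xsl:key name="by_id" match="*[@id]" use="@id"/>

<!-- before: a full-document scan for every single @idref -->
<!-- <xsl:apply-templates select="//*[@id = current()/@idref]"/> -->

<!-- after: a cheap lookup through the key index -->
<xsl:apply-templates select="key('by_id', @idref)"/>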

Doing the RHEL6 XCCDF 1.1 to 1.2 transformation with old XSLTs

real 0m34.635s
user 0m34.585s
sys  0m0.047s

Doing the RHEL6 XCCDF 1.1 to 1.2 transformation with new XSLTs

real 0m0.619s
user 0m0.573s
sys  0m0.045s

The numbers were similar for the RHEL7 XCCDF 1.1 to 1.2 transformation.

Final results for the SSG build

I started with 2m21.061s, and my goal was to bring that down to 50%. The final time on my machine after the optimizations, with make -j 4, is 1m4.217s: savings of roughly 55%. Most of those savings are in the XCCDF 1.1 to 1.2 transformation that we do for every product.

The savings are great on my beefy work laptop (i7-5600U), but we should benefit even more on our Jenkins slaves, which aren't as powerful. I have yet to test how much the changes help there, but I estimate they will save about 10 minutes per build.

Correctness

When I suggested deploying these improvements on our Jenkins slaves, Jan Lieskovsky brought up an important point about correctness. We decided to diff old and new guides, and old and new XCCDF 1.2 files, to be sure we weren't changing behavior. Please see the attached ZIP file for the test case I created to verify this. While creating the test case I discovered that I had accidentally fixed the bug mentioned above 🙂 To silence the diffs, I reintroduced just this bug into the new XSLTs used for the comparison. This made the performance slightly worse, so keep that in mind when looking at the numbers.

./test_xccdf11_to_12.sh 
Doing the RHEL6 XCCDF 1.1 to 1.2 transformation with old XSLTs

real 0m34.635s
user 0m34.585s
sys  0m0.047s

Doing the RHEL6 XCCDF 1.1 to 1.2 transformation with new XSLTs

real 0m0.619s
user 0m0.573s
sys  0m0.045s

Diffing old_xslt_output/ssg-rhel6-xccdf-1.2.xml and new_xslt_output/ssg-rhel6-xccdf-1.2.xml
The files are the same.


Doing the RHEL7 XCCDF 1.1 to 1.2 transformation with old XSLTs

real 0m33.146s
user 0m33.089s
sys  0m0.050s

Doing the RHEL7 XCCDF 1.1 to 1.2 transformation with new XSLTs

real 0m0.749s
user 0m0.702s
sys  0m0.047s

Diffing old_xslt_output/ssg-rhel7-xccdf-1.2.xml and new_xslt_output/ssg-rhel7-xccdf-1.2.xml
The files are the same.
./test_html_guides.sh 
Doing the RHEL6 and 7 SDS HTML guide transformations with old XSLTs

real 0m39.104s
user 0m38.605s
sys  0m0.491s

Doing the RHEL6 and 7 SDS HTML guide transformations with new XSLTs

real 0m28.974s
user 0m28.531s
sys  0m0.433s

Diffing old_xslt_output/guides_for_diff and new_xslt_output/guides_for_diff
No differences.
ControllerExtraConfig and Tripleo Quickstart

Once I have the undercloud deployed, I want to be able to quickly deploy and redeploy overclouds. However, my last attempt to effect a change on the overcloud did not modify the Keystone config file the way I intended. Once again, Steve Hardy helped me understand what I was doing wrong.

Summary

/tmp/deploy_env.yml already defined ControllerExtraConfig:, and my redefinition was ignored.

The Details

I've been using Quickstart for development. To deploy the overcloud, I run the script /home/stack/overcloud-deploy.sh, which in turn runs the command:

openstack overcloud deploy --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server pool.ntp.org \
${DEPLOY_ENV_YAML:+-e $DEPLOY_ENV_YAML}  "$@"|| deploy_status=1

I want to set two parameters in the Keystone config file, so I created a file named keystone_extra_config.yml:

parameter_defaults:
  ControllerExtraConfig:
    keystone::using_domain_config: true
    keystone::domain_config_directory: /path/to/config

And edited /home/stack/overcloud-deploy.sh to add -e /home/stack/keystone_extra_config.yml like this:

openstack overcloud deploy --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server pool.ntp.org \
    ${DEPLOY_ENV_YAML:+-e $DEPLOY_ENV_YAML}    -e /home/stack/keystone_extra_config.yml   "$@"|| deploy_status=1

I have run this both on an already deployed overcloud and from an undercloud with no stacks deployed, but in neither case have I seen the values in the config file.

Steve Hardy walked me through this from the CLI:

openstack stack resource list -n5 overcloud | grep "OS::TripleO::Controller "

| 1 | b4a558a2-297d-46c6-b658-46f9dc0fcd51 | OS::TripleO::Controller | CREATE_COMPLETE | 2016-07-28T01:49:02 | overcloud-Controller-y2lmuipmynnt |
| 0 | 5b93eee2-97f6-4b8e-b9a0-b5edde6b4795 | OS::TripleO::Controller | CREATE_COMPLETE | 2016-07-28T01:49:02 | overcloud-Controller-y2lmuipmynnt |
| 2 | 1fdfdfa9-759b-483c-a943-94f4c7b04d3b | OS::TripleO::Controller | CREATE_COMPLETE | 2016-07-28T01:49:02 | overcloud-Controller-y2lmuipmynnt |

Looking into each of these stacks for the string "ontrollerExtraConfig" (the leading letter dropped to match either capitalization) showed that it was defined, but was not showing my values. Thus, my customization was not even making it as far as the Heat database.
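
From the CLI, that inspection can be sketched like this (the UUIDs are the nested controller stacks from the resource list above):

# Dump each nested controller stack and grep for the extra-config key:
for id in b4a558a2-297d-46c6-b658-46f9dc0fcd51 \
          5b93eee2-97f6-4b8e-b9a0-b5edde6b4795 \
          1fdfdfa9-759b-483c-a943-94f4c7b04d3b; do
    openstack stack show "$id" | grep ontrollerExtraConfig
done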

I went back to the Quickstart command, grepped through the files included with the -e flags, and found that the deploy_env.yml file had already defined this field. Once I merged my changes into /tmp/deploy_env.yml, I saw the values specified in the Hiera data.

Of course, due to a different mistake I made, the deploy failed. When specifying domain-specific backends in a config directory, puppet validates the path, so you can’t pass in garbage like I was doing just for debugging.

Once I got things clean, tore down the old overcloud and redeployed, everything worked.  Here was the final /home/stack/deploy_env.yaml environment file I used:

parameter_defaults:
  controllerExtraConfig:
    keystone::using_domain_config: true
    keystone::config::keystone_config:
      identity/domain_configurations_from_database:
        value: true

    # In releases before Mitaka, HeatWorkers doesn't modify
    # num_engine_workers, so handle via heat::config 
    heat::config::heat_config:
      DEFAULT/num_engine_workers:
        value: 1
    heat::api_cloudwatch::enabled: false
    heat::api_cfn::enabled: false
  HeatWorkers: 1
  CeilometerWorkers: 1
  CinderWorkers: 1
  GlanceWorkers: 1
  KeystoneWorkers: 1
  NeutronWorkers: 1
  NovaWorkers: 1
  SwiftWorkers: 1

And the modified version of overcloud-deploy now executes this command:

# Deploy the overcloud!
openstack overcloud deploy --debug --templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e $HOME/network-environment.yaml --control-scale 3 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server pool.ntp.org -e /home/stack/deploy_env.yaml   "$@"|| deploy_status=1

Looking in the controller nodes’ /etc/keystone/keystone.conf file I see:

#domain_specific_drivers_enabled = false
domain_specific_drivers_enabled = True

# Extract the domain specific configuration options from the resource backend
# where they have been stored with the domain data. This feature is disabled by
# default (in which case the domain specific options will be loaded from files
# in the domain configuration directory); set to true to enable. (boolean
# value)
#domain_configurations_from_database = false
domain_configurations_from_database = True

# Path for Keystone to locate the domain specific identity configuration files
# if domain_specific_drivers_enabled is set to true. (string value)
#domain_config_dir = /etc/keystone/domains
domain_config_dir = /etc/keystone/domains
Flocking to Kraków

In less than five days, the fourth annual Flock conference will take place in Kraków, Poland. This is Fedora’s premier contributor event each year, alternately taking place in North America and Europe. Attendance is completely free for anyone at all, so if you happen to be in the area (maybe hanging around after World Youth Day going on right now), you should certainly stop in!

This year’s conference is shaping up to be a truly excellent one, with a massive amount of exciting content to see. The full schedule has been available for a while, and I’ve got to say: there are no lulls in the action. In fact, I’ve put together my schedule of sessions I want to see, and there are no gaps in it at all. That said, here are a few of the sessions that I suspect are going to be the most exciting:

Aug. 2 @11:00 – Towards an Atomic Workstation

For a couple of years now, Fedora has been at the forefront of developing container technologies, particularly Docker and Project Atomic. Now, the Workstation SIG is looking to take some of those Project Atomic technologies and adopt them for the end-user workstation.

Aug. 2 @17:30 – University Outreach

I’ve long held that one of Fedora’s primary goals should always be to enlighten the next generation of the open source community. Over the last year, the Fedora Project began an Initiative to expand our presence in educational programs throughout the world. I’m extremely interested to see where that has taken us (and where it is going next).

Aug. 3 @11:00 – Modularity

This past year, there has been an enormous research-and-development effort poured into the concept of building a “modular” Fedora. What does this mean? Well it means solving the age-old Too Fast/Too Slow problem (sometimes described as “I want everything on my system to stay exactly the same for a long time. Except these three things over here that I always want to be running at the latest version.”). With modularity, the hope is that people will be able to put together their ideal operating system from parts bigger than just traditional packages.

Aug. 3 @16:30 – Diversity: Women in Open Source

This is a topic that is very dear to my heart, having a daughter who is already finding her way towards an engineering future. Fedora and many other projects (and companies) talk about “meritocracy” a lot: the concept that the best idea should always win. However, the technology industry in general has a severe diversity problem. When we talk about “meritocracy”, the implicit contract is that we have many ideas to choose from. However, if we don’t have a community that represents many different viewpoints and cultures, then we are by definition only choosing the best idea from a very limited pool. I’m very interested to hear how Fedora is working towards attracting people with new ideas.

Flock: conference preview for Fedora contributors

This year, the Polish city of Kraków hosts the fourth annual Flock conference. Flock is a working conference for contributors to Fedora. Each year, Fedora contributors come together to discuss new ideas for Fedora, and work to make those ideas a reality. It’s also a chance for folks who usually work together via email, IRC, or other remote channels to sit together and build stronger relationships.

Flock originated from smaller and more impromptu events held several times a year in different locations. Those were called FUDCons – Fedora Users and Developers Conferences. As the number of attendees and topics grew, Fedora needed a more organized and professional event. As a result, the first Flock took place in 2013 in Charleston, South Carolina, USA. Flock is now the major Fedora event of the year, and has an essential impact on the state of Fedora.

Content at Flock

The State of Fedora is also the title of Flock’s now traditional opening keynote. The keynote is usually presented by the Fedora Project Leader, and that is no exception this year. Attendees can look forward to Matthew Miller’s personal take on the project’s status and its anticipated future. There is also a keynote from Radosław Krowiak, the co-owner of Polish educational company Akademia Programowania. It will highlight coding not only as a way to create programs, but also as an exercise in creativity and a way of teaching problem-solving that is a fresh alternative to traditional education based on schemes and “the right answers.”

The keynote may be the most interesting session for many. Others may find interest in talks revolving around Project Atomic, ostree, or the upcoming Atomic-based release of Fedora Workstation. These technologies are likely to take over the hype that Docker has maintained for the last couple of years. Still, container-related content will be amply represented.

Non-technical issues will be widely (and wildly!) discussed as well. This year’s featured non-technical theme is outreach efforts. Those efforts range from minority and gender outreach to university and high-school programs.

The Schedule divides the sessions into categories. Most talks fall under the categories Building a Better Distro (38), Making Life Better for Contributors (19), and Hackfest Workshops (14). Workshops, in fact, comprise two out of the four Flock days. Since the goal of this event is getting to action, two days are reserved for theory and two for practice.

Are you hooked but your plans don’t allow you to attend? Don’t worry, the Community blog and the project’s Twitter page are preparing to share daily content from the conference in the form of blogs, pictures, and short videos. The sessions will be recorded, too. The availability of these recordings will be announced here in the Fedora Magazine as soon as they’re ready.

Prominent Flock speakers

Community gurus who will present at the conference include:

  • Joe Brockmeier – Long-time open source advocate, member of the Apache Software Foundation, member of the Fedora Cloud working group since inception, and active Fedora propagator. Joe will run a workshop about Fedora Budget at Flock 2016.
  • Paul W. Frields – Former Fedora Project Leader, manager of the Fedora Engineering team, founding member of the Fedora Project Board, Fedora package maintainer, musician, and more. Paul will present on the Fedora Magazine project.
  • Dan Walsh – Computer security expert with 30+ years of experience, aka “Mr. SELinux,” who currently focuses on containers and Docker. Dan is an open-source enthusiast who will present on his current projects. If he can’t enthuse you with open source, no one can!
  • Matthew Miller – Co-organizer of the first FUDCons at Boston University, current Fedora Project Leader, regular contributor to Fedora Magazine (Five Things in Fedora This Week), Cloud SIG participant, and 2016 Flock keynote speaker.
  • Thomas Cameron – Senior Principal Cloud Evangelist at Red Hat, Thomas is influential in the field of containers, cloud solutions, JBoss middleware, and more, in the industry since 1993. He specializes in cloud security and integration, has authored training for many Red Hat products, and is one of the most experienced and sought after presenters in the field.

Flock is not only about meeting your colleagues who work on the other side of the globe. It’s just as much about defining the future of the project you care about.

Fedora is an open source community, and always open to new members. Even if you’re not a hacker, you’re welcome to check out the vibe, in person or via coverage and recordings. Fair warning, though: open source and Fedora can be habit-forming!

Getting started with Fedora QA (Part 2)

This article is a part of a series introducing what the Fedora Quality Assurance (QA) team is, what they do, and how you can get involved. If you’ve wanted to get involved with contributing to Fedora and testing is interesting to you, this series explains what it is and how you can get started.

Next steps towards Fedora Quality Assurance (QA)

This is a continuation of the previous post in this series of how to get involved with the Fedora Quality Assurance (QA) team. Make sure you have the Bugzilla, FAS account, and email alias set up before following these steps. You can find more information about how to get those accounts in the earlier post.

There are several different tools and services available to help us test Fedora. This helps us ensure the quality of software and stay on target for a stable release. One of the easiest ways to get involved is to run Rawhide, the ever-changing development version of Fedora, or a pre-release version like the Alpha or Beta. While you are able to upgrade your system directly to these versions, sometimes you may want to use a virtual machine (VM) to work in. We’ll cover some of the Fedora-specific tools for quality assurance testing as well as setting up your environment for testing.

Setting up test environment

The easiest way to get started with testing is to run the testing versions, as mentioned before. However, that’s not possible on all hardware and may not be something you wish to do. In that case, you can set up a virtual machine (VM) to do your testing in.

To begin with, you will need an ISO file of the version of Fedora you want to test. This may be a Rawhide image or the Alpha or Beta of the next Fedora release. You can find any development version of Fedora you wish to use here (at the time of publishing, there is only Rawhide, because Fedora 25 is not yet branched – it will appear there soon as well). After the download is complete, you can begin setting up the VM. There are different tools available that you can use for testing.

Virtual Machine Manager

The Virtual Machine Manager, or virt-manager, is one popular and effective tool you can use for creating virtual machines. You can install it on your Fedora system by typing the following command in a terminal.

$ sudo dnf install virt-manager

Once you have it installed, you can start preparing your virtual machine. Open up the application and create a new virtual machine. It will look something like this screenshot.

Getting started with Fedora Quality Assurance (QA): Creating a new VM with Virtual Machine Manager

You can follow these steps to finish setting it up (a command-line alternative follows the list).

  1. Create a new virtual machine and choose the “Local install media” option.
  2. When prompted, find the ISO file for the version of Fedora you wish to test. You can leave the checkbox on for automatically detecting the operating system based on the install media.
  3. You’ll be prompted to set the number of CPU cores and amount of memory to give the machine. You’ll also see how much is now available on your system. Select however much you prefer.
  4. Next, you will be prompted to enable storage for the virtual machine. You will want to allocate a fair amount if you will be using this VM for regular testing; 20GB is a good default.
  5. Finally, you’ll be prompted for a name and networking selections. Give the VM a name that will remind you what you use it for. For the network selection, make sure you choose NAT.
  6. Hit Finish.
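
If you prefer the command line, the same setup can be sketched with virt-install (the name, sizes, and ISO path below are example values only):

# Create a 2-CPU, 2GB VM with a 20GB disk on the default NAT network,
# booting from the downloaded development ISO:
sudo virt-install \
    --name fedora-test \
    --vcpus 2 \
    --memory 2048 \
    --disk size=20 \
    --network network=default \
    --cdrom ~/Downloads/Fedora-Rawhide-netinst.iso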

After finishing these steps, Virtual Machine Manager will save your settings and immediately try to start your new VM. Once the machine starts up, you will be presented with the normal Fedora welcome screen, giving you the option to “Install on hard drive” or “Try it”. Click the install option.

Follow through the normal Fedora installation process for best results. Make sure when setting up your system, you tell it to take up the full available space (the 20GB partition or other size we set aside earlier). Make sure to set your user and root passwords in the installation as well. Once finished, you will be prompted to reboot like normal.

Now that we have a running Fedora test environment, we will need to make sure it is current.

Updating virtual machine with updates-testing

Depending on when you downloaded the ISO and how far into a release it is, you will have new updates available as soon as you start the machine. It’s a good habit to immediately update your virtual machine so that you are running the latest versions of all packages.

One key difference from updating as normal is that we will permanently enable the updates-testing repository. Using this repository allows you to use “test” versions of software that are awaiting feedback in the Fedora package testing system, Bodhi. Running the following commands will enable this repository for you.

$ sudo dnf config-manager --set-enabled updates-testing
$ sudo dnf update --enablerepo=updates-testing

The last command may take some time to execute, depending on the speed of your network and the capability of your hardware. After it finishes, your test environment is configured and ready!

Bodhi

Bodhi is a web-based tool used for pushing package updates to Fedora. First, updates are pushed to the testing repository, called updates-testing. After staying in testing and getting feedback, they will move to the stable repository, called updates. To help prevent bad updates from being shipped, Bodhi is equipped with a fairly extensive test suite. This is one of the easiest ways to start contributing to Fedora and to the Quality Assurance team.

Getting started with Fedora Quality Assurance (QA): Bodhi interface

When you log into Bodhi and look at a package update, you will see critical path tests and general functionality notes on the right side of the window. Towards the bottom of the page, you can leave feedback after testing the package. The idea is to run the packages and test cases, and to give “karma” for the update: either a +1 or a -1. A plus indicates the package is stable and working as expected, with no regressions noted. A negative is given if the update does not function as expected or if you noticed major performance issues. Generally, you should only give a -1 if you are confident the issue exists and is reproducible.

Try finding a package you use and see if there are any pending updates available for it. Give it a run in your virtual machine and test its functionality. If it works as expected, you can leave karma of your own on the package. This helps the packager receive feedback about the update and push stable updates to the stable repository.
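
For example, to pull a single pending update into the virtual machine rather than upgrading everything at once, something like this works (nano is just an example package):

# Upgrade one package from updates-testing, then check exactly which
# build you got so you can find its update in Bodhi:
sudo dnf --enablerepo=updates-testing upgrade nano
rpm -q nano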

Getting started with Fedora Quality Assurance (QA): Leaving feedback on a bug in Bodhi

Earning testing badges

What’s in it for you, if you are a tester? Over time, you earn and accumulate badges to show off and prove you have devoted your time and energy to testing. You can log into the Fedora Badges system at badges.fedoraproject.org with the same FAS credentials you have used elsewhere. You will be able to open your profile and see all the badges you have earned.

As you test more and more packages, there are different levels and tiers of badges you can receive over time. Each badge acts as a small milestone for your testing contributions.

Getting started with Fedora Quality Assurance (QA): Earn badges for testing updates

Get in touch with QA

Come say hello to us! You can find the Quality Assurance team on their mailing list and you can also interact with the global community on IRC.


Debug courtesy of Lemon Liu, Crash Test Dummy courtesy of James Keuning (from the Noun Project)

The post Getting started with Fedora QA (Part 2) appeared first on Fedora Community Blog.

On the killing of intltool

If you have a project that uses intltool, you should be trying to get rid of it in favor of using AM_GNU_GETTEXT instead. Matthias wrote a nice post about this recently. Fortunately, it’s very easy to do. I decided to port gnome-chess during some downtime today, and ran into only one tough problem:

make[1]: Entering directory '/home/mcatanzaro/.cache/jhbuild/build/gnome-chess/po'
Makefile:253: *** target pattern contains no '%'. Stop.
make[1]: Leaving directory '/home/mcatanzaro/.cache/jhbuild/build/gnome-chess/po'

This was pretty inscrutable, but I eventually discovered the cause: I had forgotten to remove [encoding: UTF-8] from POTFILES.in. This line is an intltool thing, and you have to remove it when porting, just as you need to remove the type hints from the file, or it will break the Makefile that gets generated. This is just a heads-up, since it seems like an easy thing to forget and the error message provided by make is fairly useless.
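
If you want to script the POTFILES.in cleanup, it can be sketched roughly like this (the type-hint pattern is an assumption; check your file before running it):

# Drop the "[encoding: UTF-8]" line and strip any "[type: ...]" prefixes
# from po/POTFILES.in when porting away from intltool:
sed -i -e '/^\[encoding: UTF-8\]$/d' \
       -e 's/^\[type: [^]]*\] *//' po/POTFILES.in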

A couple unrelated notes:

  • If your project uses git.mk, as any Autotools project really should, you’ll have to modify that too.
  • Don’t forget to remove any workarounds added to POTFILES.skip to account for intltool’s incompatibility with modern Automake distcheck.
  • For some reason, msgfmt merges translations into XML files in reverse alphabetical order, the opposite of intltool, which seems strange and might be a bug, but is harmless.

Say thanks to Daiki Ueno for his work maintaining gettext and enhancing it to make this change practical, and to Javier Jardon for pushing this within GNOME and working to remove intltool from important GNOME modules.

All systems go
Service 'FedoraHosted.org Services' now has status: good: Everything seems to be working.
There are scheduled downtimes in progress
New status good: Everything seems to be working. for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Tagger, Darkserver, Package Database, Fedora pastebin service, Blockerbugs, Badges, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar

July 27, 2016

Fedora EPEL for ARM.

Hey guys, if you're attending Flock, remember to talk about Fedora EPEL for ARM.

Thank you.

FINAL REMINDER! systemd.conf 2016 CfP Ends on Monday!

Please note that the systemd.conf 2016 Call for Participation ends on Monday, Aug. 1st! Please send in your talk proposal by then! We’ve already got a good number of excellent submissions, but we are very interested in yours, too!

We are looking for talks on all facets of systemd: deployment, maintenance, administration, development. Regardless of whether you use it in the cloud, on embedded, on IoT, on the desktop, on mobile, in a container or on the server: we are interested in your submissions!

In addition to proposals for talks for the main conference, we are looking for proposals for workshop sessions held during our Workshop Day (the first day of the conference). The workshop format consists of a day of 2-3h training sessions that may cover any systemd-related topic you'd like. We are interested in submissions both from the developer community and from organizations making use of systemd! Introductory workshop sessions are particularly welcome, as the Workshop Day is intended to open up our conference to newcomers and people who aren't systemd gurus yet, but would like to become more fluent.

For further details on the submissions we are looking for and the CfP process, please consult the CfP page and submit your proposal using the provided form!

ALSO: Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

AND OF COURSE: We are also looking for more sponsors for systemd.conf! If you are working on systemd-related projects, or make use of it in your company, please consider becoming a sponsor of systemd.conf 2016! Without our sponsors we couldn't organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

There are scheduled downtimes in progress
New status scheduled: Scheduled reboots in progress for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
Time de Espartanos do Brazil.

Post Number 2


There is no ACTIVE mentor/coordinator for the pt-br language, so there is no way to add new translators for pt-br.


+Ralph Bean
Tempus fugit

Rawhide builds now have fc26 for a disttag.  Where did *that* time go?

#feelingold


We made a group on Facebook called "Operação Pega Leão"; we sue our own government for not obeying its own laws: if you sell or ship something worth less than $100 USD to Brazil, the government can't charge import duties.
Fedora: “Powered by” icons for your website.
Do you remember the default web page from servers like Apache or Nginx when you test your HTTP server?
I wanted a similarly nice Fedora way to mark my work with Fedora.
First I needed a Fedora icon and the text “powered by”.

I used Inkscape. In case you don't know about Inkscape: Inkscape is professional-quality vector graphics software which runs on Windows, Mac OS X and GNU/Linux, and uses the W3C open standard SVG (Scalable Vector Graphics) as its native format.
First I took a look at the common guidelines.
These steps help you understand the importance of the guidelines if you want to make new designs.
The resulting .svg files can be found here.




monitoring HTTP requests on the fly

Install httpry:

sudo yum install httpry

or

$ sudo yum install gcc make git libpcap-devel
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install

then run:

sudo httpry -i eth0

The output will look like this:

httpry version 0.1.8 -- HTTP logging and information retrieval tool
Copyright (c) 2005-2014 Jason Bittel <jason.bittel@gmail.com>
Starting capture on eth0 interface
2016-07-27 14:20:59.598    172.31.43.18    169.254.169.254    >    GET    169.254.169.254    /latest/dynamic/instance-identity/document    HTTP/1.1    -    -
2016-07-27 14:20:59.599    169.254.169.254    172.31.43.18    <    -    -    -    HTTP/1.0    200    OK
2016-07-27 14:22:02.034    172.31.43.18    169.254.169.254    >    GET    169.254.169.254    /latest/dynamic/instance-identity/document    HTTP/1.1    -    -
2016-07-27 14:22:02.034    169.254.169.254    172.31.43.18    <    -    -    -    HTTP/1.0    200    OK
2016-07-27 14:23:04.640    172.31.43.18    169.254.169.254    >    GET    169.254.169.254    /latest/dynamic/instance-identity/document    HTTP/1.1    -    -
2016-07-27 14:23:04.640    169.254.169.254    172.31.43.18    <    -    -    -    HTTP/1.0    200    OK
2016-07-27 14:24:07.122    172.31.43.18    169.254.169.254    >    GET    169.254.169.254    /latest/dynamic/instance-identity/document    HTTP/1.1    -    -
2016-07-27 14:24:07.123    169.254.169.254    172.31.43.18    <    -    -    -    HTTP/1.0    200    OK
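
To keep the capture for later analysis instead of just watching it scroll by, httpry can also daemonize and log to a file:

# Run in the background and write the parsed requests to a log file
# (-d daemonizes, -o sets the output file):
sudo httpry -i eth0 -d -o /var/log/httpry.log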


RISC-V on an FPGA, pt. 8

Some thoughts on SiFive Freedom U500 on the low-end Arty Board:

  • 256MB (249MB available), vs 128MB for the Nexys 4. However memory is used for the filesystem initramfs / tmpfs (see next point) so there is a trade-off between RAM and storage. Recall the Nexys 4 has a microSD card for storage.
  • The root filesystem is the initramfs, presumably loaded off the 16MB of SPI flash included with the board. So it’s more like an embedded target than something you could do any development on.
  • Slow – noticeably slower than lowRISC on the Nexys 4. However, it’s also under half the price, and since this is an FPGA design, the performance of the real hardware will be completely different.
  • I was kind of expecting that ethernet would work, but in any case it doesn’t appear to work for me.
  • You can replace their Linux kernel and root image with your own — see section 5 in the documentation. However that step, as well as programming the board, requires the proprietary Vivado software. AFAICT there is no free software alternative.

RISC-V on an FPGA, pt. 7

Today I’m going to try SiFive’s Freedom U500 64 bit RISC-V design on the very low-end $148 Arty Board. If you want more background into what SiFive are up to then I recommend watching this 15 minute video, but in brief they seem to be positioning themselves as a distributor and integrator of RISC-V.

The good thing is they compile everything you need including the Xilinx bitstream. The bad thing is that these are behind a registration wall with a poisonous license agreement (in particular, non-commercial use only). I hope this is only because of the bits of 3rd party proprietary IP they are using for ethernet, flash, UART, etc.

If you want to do this yourself, read the getting started guide here. Assuming you have Xilinx Vivado installed, following the instructions is completely straightforward except for one point: You need to do “Boot from Configuration Memory device” after programming. Anyway you will have a booting Linux/RISC-V in a few minutes.

screenshot-u500

After the cut, the boot output.

           SiFive RISC-V Coreplex
[    0.000000] Linux version 4.6.2-ga9474b8 (aou@i0.internal.sifive.com) (gcc version 6.1.0 (GCC) ) #3 Sun Jul 10 17:43:16 PDT 2016
[    0.000000] bootconsole [early0] enabled
[    0.000000] Available physical memory: 250MB
[    0.000000] Initial ramdisk at: 0xffffffff800140c0 (2182877 bytes)
[    0.000000] Zone ranges:
[    0.000000]   Normal   [mem 0x0000000080600000-0x000000008fffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000080600000-0x000000008fffffff]
[    0.000000] Initmem setup node 0 [mem 0x0000000080600000-0x000000008fffffff]
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 63125
[    0.000000] Kernel command line: earlyprintk 
[    0.000000] PID hash table entries: 1024 (order: 1, 8192 bytes)
[    0.000000] Dentry cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.000000] Inode-cache hash table entries: 16384 (order: 5, 131072 bytes)
[    0.000000] Sorting __ex_table...
[    0.000000] Memory: 246856K/256000K available (2187K kernel code, 113K rwdata, 416K rodata, 2216K init, 224K bss, 9144K reserved, 0K cma-reserved)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[    0.000000] NR_IRQS:0 nr_irqs:0 0
[    0.000000] clocksource: riscv_clocksource: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275 ns
[    0.000000] Calibrating delay loop (skipped), value calculated using timer frequency.. 2.00 BogoMIPS (lpj=10000)
[    0.000000] pid_max: default: 32768 minimum: 301
[    0.010000] Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
[    0.010000] Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)
[    0.050000] devtmpfs: initialized
[    0.070000] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
[    0.090000] NET: Registered protocol family 16
[    0.180000] clocksource: Switched to clocksource riscv_clocksource
[    0.220000] NET: Registered protocol family 2
[    0.230000] TCP established hash table entries: 2048 (order: 2, 16384 bytes)
[    0.240000] TCP bind hash table entries: 2048 (order: 2, 16384 bytes)
[    0.240000] TCP: Hash tables configured (established 2048 bind 2048)
[    0.250000] UDP hash table entries: 256 (order: 1, 8192 bytes)
[    0.250000] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[    0.260000] NET: Registered protocol family 1
[    3.960000] Unpacking initramfs...
[    7.820000] console [sbi_console0] enabled
[    7.820000] console [sbi_console0] enabled
[    7.820000] bootconsole [early0] disabled
[    7.820000] bootconsole [early0] disabled
[    7.840000] futex hash table entries: 256 (order: 0, 6144 bytes)
[    7.850000] workingset: timestamp_bits=61 max_order=16 bucket_order=0
[    8.250000] io scheduler noop registered
[    8.260000] io scheduler cfq registered (default)
[    9.780000] gpio leds: loaded 4 GPIOs
[    9.870000] Freeing unused kernel memory: 2216K (ffffffff80000000 - ffffffff8022a000)
[    9.880000] This architecture does not have kernel memory protection.
Starting logging: OK
Starting network...

Welcome to Buildroot
sifive login: root
# cat /proc/cpuinfo 
hart    : 0
isa     : RV64G

# uname -a
Linux sifive 4.6.2-ga9474b8 #3 Sun Jul 10 17:43:16 PDT 2016 riscv GNU/Linux

New Taskotron tasks

For a while now, Fedora Quality Assurance (QA) has been busy building Taskotron core features and didn’t have the resources to add to the tasks that Taskotron runs. That changed a few weeks back, when we started running the task-dockerautotest, task-abicheck and task-rpmgrill tasks in our development environment. We have been happy with the results of those tasks since then, and we deployed them to the production instance last week. Please note that the results of those tasks are informative only. Let’s introduce the tasks briefly.

task-dockerautotest

task-dockerautotest is the first package-specific task that Taskotron runs. It triggers on each completed Docker build in Koji. Please note that we wrote only the Taskotron wrapper for the dockerautotest test suite. The test suite’s git repository is found here and the task’s git repository here.

task-abicheck

task-abicheck looks for ABI incompatibilities between two versions of a package. The task runs only on a subset of packages in Fedora: currently, packages from critpath, excluding firefox, thunderbird, kernel, kdelibs, kdepim and qt. For now, we limit the packages we run the task on because of the memory the task needs for some packages. Each time a build of a critpath package completes in Koji, task-abicheck runs and compares the ABIs of the new build and its earlier version. Kudos to Dodji Seketeli for the work on libabigail and Sinny Kumari for writing the task! The task’s git repository is found on pagure.

task-rpmgrill

task-rpmgrill runs each time a build completes in Koji. It performs a set of analysis tests on all RPMs built from the build’s source RPM. The task expands the capabilities of task-rpmlint with build log analysis and ELF checks, to name a few. Kudos to Ralph Bean for writing the Taskotron wrapper for the task! The task’s git repository is found here.

Looking at the task results

Interested in looking at the results of the new tasks? You can do so via the ResultsDB frontend.

Taskotron’s glorious future

Recently, Miro Hroncok sent a review request for “Python Version Check”, which we are planning to review and deploy in the not so distant future. We are also looking for “guinea pigs” to write package-specific tasks so we can test our upcoming feature, dist-git style tasks. If you have a task (or an idea for one), either generic or package-specific, and want to run it in Taskotron, please do reach out to us.

Receiving notifications about results

If you wish to receive notifications about the new tasks’ results for your packages, you can set that up in FMN. If you are not sure how to set up notifications, my earlier post on that topic should help with that.

Found an issue?

As usual, if you hit any issues and/or have any questions or suggestions, don’t hesitate to contact us either in #fedora-qa on Freenode IRC or on the qa-devel mailing list. You can also file bugs or send RFEs in our Phabricator instance.

The post New Taskotron tasks appeared first on Fedora Community Blog.

RISC-V on an FPGA, pt. 6



The board on the left is the Digilent Nexys 4 DDR which I was using yesterday to boot Linux. It costs £261 (about $341) including tax and next day delivery. The board on the right is the cheaper Digilent Arty Board, which cost me $148 (two day delivery from the US).

There are clear differences in the number of connectors, LEDs and buttons. The Nexys has VGA, ethernet, USB, an 8-digit LED display, many lights, a temperature sensor, and lots of buttons. The Arty has just the bare minimum, as you’d expect given the price difference. Both boards use the same Xilinx Artix-7 family of FPGA, but they are not exactly the same. The expensive board on the left uses the XC7A100T, which has 100K+ logic cells; the cheap board on the right uses the XC7A35T with 33K cells. What this means practically is that you are more limited in the size of designs which can be programmed on the smaller chip. Does this matter for RISC-V? Yes and no. The untether-v0.2 design I was using yesterday takes about 50K cells, so it won’t fit on the smaller board. However SiFive apparently (I have not checked) have a reduced design that will fit. (Note that none of this affects the operating system or available RAM — those are separate issues. However a smaller design will have to cut a few corners, leave out parts of the chip that implement optimizations, etc., and so will generally run slower.)

Oddly the Arty Board (on the right) has more DDR3 RAM — 256MB vs 128MB. That is an improvement, since doing any real Linux work in 128MB of RAM is tough, but still not massive. The other significant difference is the Arty Board does not have a microSD-card socket. It has (just) 16MB of on-board flash instead. I’m not clear how you get Linux on there, but that’s what I’ll be exploring.

Finally it’s worth saying that both boards are incomplete, although in very minor ways. The Nexys 4 comes with an OTG USB cable, which is all you need to power the board, program it and use the serial port. However it omits a microSD-card which you will need to store Linux / other RISC-V software that you want to run. The Arty comes without any cables and thus requires that you supply an OTG USB cable. As mentioned above there seems to be no microSD-card option at all.


Fedora Women Day 2016


Fedora Women Day is celebrated to raise awareness and to bring Fedora women contributors together. It is a great time to network with other women in Fedora and talk about their contributions and work in the Fedora Project.

The event was held at Netaji Subhash Engineering College (NSEC) in Kolkata, India on 15th July, 2016.

Fedora Women Day was also celebrated in Pune, India and Tirana, Albania. https://fedoraproject.org/wiki/Fedora_Women_Day_2016#Local_Events

The event started at 10:30 AM. Women started coming in, and it was a pretty nice crowd.

The event started with my talk. It was my first talk so I was really excited.

I talked about Fedora Women Day and its purpose. Then I started talking about the work I do in the Fedora Project. Most of my talk covered Fedora Infrastructure and Fedora Cloud.

Since most of my contributions are in Bodhi (Fedora Infrastructure) and Tunirtests (Fedora Cloud), I gave some specific insight into these projects. I explained the architecture of Bodhi and Tunirtests and how one can start contributing to those projects.

I also shared my story of how I started contributing to the Fedora Project.

Here are the slides from my talk: trishnaguha.github.io/trishnagslides-what-i-do-in-fedora-how-can-you-get-involved.html

A few hours after my talk, I had to leave early for some urgent work. You will find the full event report here: Event Report.

I received Fedora stickers, an F24 Workstation DVD and a Fedora T-shirt, but I’m not sure I can put the T-shirt on; it seems so large 😦.



Time de Espartanos do Brazil.

Post Number 1



July 26, 2016

Separation of F25 and Rawhide

This Tuesday, July 26, marked a new step in the development of the next version of Fedora, namely Fedora 25.

This is the moment when the project's entire internal infrastructure and its contributors gear up to host a new development branch for Fedora 25. It also means that, at this point, all packages are duplicated: on one side for Rawhide, on the other for Fedora 25.

From now on, updates for the two systems will differ, and so will package versions. Fedora 25 will continue down its stabilization road, following the usual process of Alpha, Beta and Release Candidates, along with the Test Days and other events meant to improve its quality.

For those on Rawhide, this means choosing between continuing to test the perpetually developing branch, or helping to stabilize Fedora 25. The latter amounts to disabling the repository dedicated to Rawhide, enabling the Fedora ones, and running a synchronization. The longer a user waits to switch tracks, the greater the risk of trouble during the operation.
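
For illustration, the switch boils down to something like this (a hypothetical sketch; the repository IDs are those of a stock Fedora install):

# Disable the Rawhide repository, enable the branched Fedora ones,
# and synchronize the installed packages to the Fedora 25 versions:
sudo dnf config-manager --set-disabled rawhide
sudo dnf config-manager --set-enabled fedora updates
sudo dnf distro-sync --releasever=25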

This day is also the occasion to sort the features accepted for Fedora 25. A few weeks ago, developers submitted their lists of the work to be done for this release. Some items were accepted; others were rejected already at that stage.

As of today, the features that were previously accepted must be testable, meaning that the bulk of each new feature is already in place. The goal is for the months remaining before the official release of Fedora 25 to be spent stabilizing and validating these changes. Doing that properly takes time, hence the importance of having these features already operational, or nearly so.

We can never say it enough: the Fedora Project is a community distribution where everyone can contribute their stone to the edifice. Don't hesitate to install Rawhide or the future F25 (directly or via an upgrade) and report any anomalies. The earlier that's done, the greater the chances of fixes landing before the release.

Fedora 24: systemd-analyze (solved)

Odd Results

I read an article about systemd-analyze and tried it on my Dell XPS 13 9343.

Startup finished in 6.363s (firmware) + 6.029s (loader) + 1.020s (kernel) + 2.756s (initrd) + 1min 30.405s (userspace) = 1min 46.575s

Running this on my Lenovo T530 still running Fedora 23 resulted in the following:

Startup finished in 4.625s (firmware) + 6.467s (loader) + 1.898s (kernel) + 2.153s (initrd) + 7.929s (userspace) = 23.073s

So, to verify that this was an issue with Fedora 24 and not with my Dell, I booted the T530 into Fedora 24, re-ran systemd-analyze on it, and got the following results:

Startup finished in 6.127s (firmware) + 15.969s (loader) + 4.068s (kernel) + 3.066s (initrd) + 1min 31.377s (userspace) = 2min 610ms

When the boot first completed and I opened a terminal window on the desktop to run the systemd-analyze command, I got a message that boot was not finished.

Bootup is not yet finished. Please try again later.

This result was duplicated on the Dell XPS 13 as well. I then tried troubleshooting the issue with systemd-analyze blame and got the following:

3.850s plymouth-quit-wait.service
1.872s dnf-makecache.service
1.613s plymouth-start.service
674ms systemd-udev-settle.service
598ms firewalld.service
591ms dev-mapper-fedora\x2droot.device
430ms lvm2-monitor.service
273ms lvm2-pvscan@8:3.service
231ms libvirtd.service
216ms accounts-daemon.service
205ms cups.service
177ms udisks2.service
147ms systemd-logind.service
116ms ModemManager.service
112ms user@42.service
111ms upower.service
98ms user@1000.service
97ms proc-fs-nfsd.mount
88ms abrtd.service
87ms systemd-udev-trigger.service
87ms packagekit.service
86ms polkit.service
82ms systemd-journald.service
76ms iio-sensor-proxy.service
68ms systemd-udevd.service
65ms systemd-journal-flush.service
64ms gdm.service
63ms systemd-vconsole-setup.service
61ms unbound-anchor.service
56ms NetworkManager.service
48ms abrt-ccpp.service
48ms bluetooth.service
48ms systemd-fsck@dev-disk-by\x2duuid-63a65c9c\x2dff2d\x2d4e27\x2dac05\x2dae7b80c0fb3b.service
47ms systemd-fsck@dev-disk-by\x2duuid-880D\x2dE9AE.service
47ms systemd-tmpfiles-setup-dev.service
46ms colord.service
43ms avahi-daemon.service
41ms fedora-readonly.service
40ms gssproxy.service
37ms rtkit-daemon.service
33ms systemd-fsck@dev-mapper-fedora\x2dhome.service
30ms chronyd.service
28ms systemd-fsck-root.service
24ms systemd-tmpfiles-setup.service
24ms systemd-rfkill.service
21ms dmraid-activation.service
21ms systemd-remount-fs.service
20ms livesys-late.service
20ms kmod-static-nodes.service
19ms home.mount
19ms livesys.service
19ms fedora-import-state.service
18ms dev-mapper-fedora\x2dswap.swap
18ms dev-hugepages.mount
17ms wpa_supplicant.service
16ms systemd-sysctl.service
16ms auditd.service
15ms systemd-tmpfiles-clean.service
14ms boot-efi.mount
13ms plymouth-read-write.service
12ms rpc-statd-notify.service
10ms systemd-user-sessions.service
10ms dev-mqueue.mount
9ms blk-availability.service
8ms sys-kernel-debug.mount
6ms systemd-random-seed.service
6ms sys-fs-fuse-connections.mount
6ms boot.mount
6ms systemd-backlight@leds:dell::kbd_backlight.service
5ms nfs-config.service
5ms tmp.mount
4ms systemd-update-utmp-runlevel.service
4ms systemd-update-utmp.service
4ms var-lib-nfs-rpc_pipefs.mount
4ms dracut-shutdown.service
3ms systemd-backlight@backlight:intel_backlight.service
2ms sys-kernel-config.mount

I am still searching for an explanation, but Google searches are not turning up much that is useful. In the end it is curiosity more than something that actually impacts me: I am able to start working long before systemd-analyze is capable of giving me results, and it is certainly not taking 1 minute and 30 seconds for the computer to boot. In fact, when I timed the boot, it took only 18.5 seconds for me to get to the desktop.

Update:

Thanks to the comment from Michal Schmidt, I ran systemctl list-jobs after confirming that systemd-analyze thought the system was still booting up, and the results are:

[cprofitt@tardis-xps ~]$ systemd-analyze
Bootup is not yet finished. Please try again later.
[cprofitt@tardis-xps ~]$ systemctl list-jobs
JOB UNIT                                              TYPE  STATE
243 hypervkvpd.service                                start waiting
246 sys-devices-virtual-misc-vmbus\x21hv_vss.device   start running
247 systemd-update-utmp-runlevel.service              start waiting
111 multi-user.target                                 start waiting
234 sys-devices-virtual-misc-vmbus\x21hv_fcopy.device start running
110 graphical.target                                  start waiting
244 sys-devices-virtual-misc-vmbus\x21hv_kvp.device   start running
245 hypervvssd.service                                start waiting
233 hypervfcopyd.service                              start waiting

9 jobs listed.

I have more items to go on now and will start doing more research.

Update 2:

I found a Fedora bug regarding these services, and further research turned up a blog post recommending disabling the following services:

systemctl disable hypervkvpd.service
systemctl disable hypervfcopyd.service
systemctl disable hypervvssd.service

With these services disabled I rebooted the laptop and was immediately able to get results from systemd-analyze:

Startup finished in 6.104s (firmware) + 5.973s (loader) + 1.025s (kernel) + 2.728s (initrd) + 5.969s (userspace) = 21.801s

A huge reduction in the time. While the services were not causing any ‘real’ impact, they also appear to provide no functionality unless running on a Hyper-V system.
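
To double-check after a reboot, two commands confirm that no boot jobs are left hanging and that the timing data is available right away:

# Should print "No jobs running." once boot has fully finished:
systemctl list-jobs

# Should now report the startup times instead of asking you to try again later:
systemd-analyze time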


Fedora 25 development enters the hot phase

According to the schedule for Fedora 25, the development branch for Fedora 25 was split off from the main branch (“master”) in the Fedora Git today, which means the development of Fedora 25 is entering the hot phase.

In case anyone is wondering why Fedora 25 is being branched so soon after the release of Fedora 24: since Fedora 24’s release date unusually lagged a few months behind GNOME’s, a shortened release cycle was decided on for Fedora 25, for example by skipping a complete mass rebuild of the packages for Fedora 25. Features and new functionality that would require such a mass rebuild were consequently postponed to Fedora 26.

Provided no major problems come up during the development of Fedora 25, Fedora 25 should be released at the beginning of November.

Chromium is now available in the official Fedora repositories

Briefly noted: as of yesterday, Chromium packages are available in the updates-testing repository of Fedora 24. Whether there will also be packages for Fedora 23 is not yet known.

Anyone who doesn’t want to wait until Chromium lands in the regular updates repository can simply install it with:

su -c'dnf install chromium --enablerepo=updates-testing'

However, Chromium may fail to display some pages if Google Chrome is installed in parallel and Chromium is configured to use Google Chrome’s Flash plugin.

Update: I just noticed that Chromium has not actually landed in the updates-testing repository yet; it is only sitting in the queue for the next push of updates to the repositories. It shouldn’t take much longer for Chromium to reach updates-testing, though.

Elections Retrospective, July 2016

The results are in! The Fedora Elections for the Fedora 24 release cycle of FESCo and the Council concluded on Tuesday, July 26th. The results are posted on the Fedora Voting Application and announced on the mailing lists. You can find the full list of winning candidates below, along with some interesting statistics in this July 2016 Elections Retrospective.

2016 Elections Retrospective Report

In short, voter turnout was approximately at its average level (well, slightly below).

Fedora Engineering Steering Committee (FESCo)

We had four vacant seats and five nominations for the F24 cycle, with 196 voters casting their votes.

FESCo Winning Candidates Votes
Stephen Gallagher (sgallagh) [info] 655
Josh Boyer (jwb/jwboyer) [info] 619
Dennis Gilmore (dgilmore/ausil) [info] 557
Dominik Mierzejewski (rathann) [info] 474

Compared to the historical data, with 196 voters we are slightly below the average of 213 voters, which seems understandable given that we are in the middle of the vacation season.

Fedora Project Elections Retrospective, July 2016

The statistics showing how many people voted each day during the voting period are also quite interesting.

Fedora Project Elections Retrospective, July 2016

For this Election, we sent reminders on the first, fourth, and last day of the voting period, which is reflected in the charts as an increase in voters compared to the day before each reminder.

Out of the four elected nominees, three (sgallagh, jwboyer and dgilmore) have been elected for a repeat term. One elected nominee (rathann) has been elected for the first time (AFAIK).

Fedora Council

We had one vacant seat and two nominations for the Fedora 24 cycle, with 189 voters casting their votes.

Council Winning Candidate Votes
Langdon White (langdon) [info] 240

The Fedora Council came into existence in November 2014, and hence we do not have much previous data. Historically, before there was a Council, there was a Board. On the chart below you can see the comparison between voter turnout for the Fedora Board elections and the Council elections. The average voter turnout for Council & Board elections is 221, and for Council-only elections the average is 206.

Fedora Project Elections Retrospective, July 2016

The profile for number of voters per day was similar to the one we saw for FESCo.

Fedora Project Elections Retrospective, July 2016

Langdon has been elected for a repeat term and will stay on the Council for at least the next two release cycles.

You can find some more Council Election related metrics here.

Special Thanks

Congratulations to the winning candidates, and thank you to all the candidates who ran this election! Community governance is core to the Fedora Project, and we couldn’t do it without your involvement and support.

A special thanks to jflory7 and the members of the CommOps Team for helping organize another successful round of Elections!

And last but not least, thank YOU to all the Fedora community members who participated and voted this election cycle. Stay tuned for retrospective articles on future Elections!

The post Elections Retrospective, July 2016 appeared first on Fedora Community Blog.

RISC-V on an FPGA, pt. 5

I’ve learned a few things about this process. These are just my random thoughts in no particular order:

  • You need the heavily proprietary Xilinx Vivado to compile the Verilog source code to a bitstream. However you can use free tools to write the bitstream to the FPGA (eg. xc3sprog). Edit: There is another option: You can write the bitstream onto the SD-card and switch the jumper JP1 to SD-card. On boot the FPGA will load the bitstream from the SD-card. This is actually much nicer because it means you don’t need to manually reprogram the FPGA every time you power it up.
  • The core lowRISC CPU is free, but that’s not everything that goes into the bitstream. Also in the bitstream are some peripherals, such as the UART, and those are Xilinx proprietary IP. So the bitstream isn’t fully free and more importantly isn’t redistributable. (Note, according to this talk there is a plan to fix this).
  • This is a nice summary of the v0.2 architecture, esp. page 4

RISC-V on an FPGA, pt. 4

It boots!

lowRISC boot program
=====================================
Load boot into memory
Load 11660 bytes to memory.
Read boot and load elf to DDR memory
Boot the loaded program...
[    0.000000] Linux version 3.14.41-g9a25e8d (rjones@moo.home.annexia.org) (gcc version 5.2.0 (GCC) ) #1 Mon Jul 25 19:07:50 BST 2016
[    0.000000] Available physical memory: 126MB
[    0.000000] Zone ranges:
[    0.000000]   Normal   [mem 0x00200000-0x07ffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00200000-0x07ffffff]
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 31815
[    0.000000] Kernel command line: root=/dev/htifblk0
[    0.000000] PID hash table entries: 512 (order: 0, 4096 bytes)
[    0.000000] Dentry cache hash table entries: 16384 (order: 5, 131072 bytes)
[    0.000000] Inode-cache hash table entries: 8192 (order: 4, 65536 bytes)
[    0.000000] Sorting __ex_table...
[    0.000000] Memory: 124488K/129024K available (1725K kernel code, 120K rwdata, 356K rodata, 68K init, 211K bss, 4536K reserved)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[    0.000000] NR_IRQS:2
[    0.150000] Calibrating delay using timer specific routine.. 20.01 BogoMIPS (lpj=100097)
[    0.150000] pid_max: default: 32768 minimum: 301
[    0.150000] Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
[    0.150000] Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)
[    0.150000] devtmpfs: initialized
[    0.150000] NET: Registered protocol family 16
[    0.150000] bio: create slab  at 0
[    0.150000] Switched to clocksource riscv_clocksource
[    0.150000] NET: Registered protocol family 2
[    0.150000] TCP established hash table entries: 1024 (order: 1, 8192 bytes)
[    0.150000] TCP bind hash table entries: 1024 (order: 1, 8192 bytes)
[    0.150000] TCP: Hash tables configured (established 1024 bind 1024)
[    0.150000] TCP: reno registered
[    0.150000] UDP hash table entries: 256 (order: 1, 8192 bytes)
[    0.150000] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[    0.150000] NET: Registered protocol family 1
[    0.150000] futex hash table entries: 256 (order: 0, 6144 bytes)
[    0.150000] io scheduler noop registered
[    0.150000] io scheduler cfq registered (default)
[    0.180000] htifcon htif0: detected console
[    0.190000] console [htifcon0] enabled
[    0.190000] htifblk htif1: detected disk
[    0.190000] htifblk htif1: added htifblk0
[    0.190000] TCP: cubic registered
[    0.190000] VFS: Mounted root (ext2 filesystem) readonly on device 254:0.
[    0.190000] devtmpfs: mounted
[    0.190000] Freeing unused kernel memory: 68K (ffffffff80000000 - ffffffff80011000)
# uname -a
Linux ucbvax 3.14.41-g9a25e8d #1 Mon Jul 25 19:07:50 BST 2016 riscv GNU/Linux

Dale Raby: How do you Fedora?

We recently interviewed Dale Raby on how he uses Fedora. This is part of a series on the Fedora Magazine where we profile Fedora users and how they use Fedora to get things done. If you are interested in being interviewed for a further installment of this series, contact us on the feedback form.

Who is Dale Raby?

Dale started using Linux around 1999 when he became disenchanted with his Windows 95 computer and a young clerk in an office supply store told him about Linux. “I started reading some of the magazines, most notably Maximum Linux, and eventually got to know their senior editor, Woody Hughes, and Show Me the Code columnist Mae Ling Mak,” said Raby. His first distribution was Mandrake 6.5, which came in a box with a boot floppy.

Raby manages a small gun shop in Green Bay, Wisconsin. He is also an author with three published books: The Post-Apocalyptic Blacksmith; 777 Bon Mots for Gunslingers and Other Real Men; and The Wives of Jacob Book I: In the Beginning.

Dale’s hobbies include blacksmithing, hunting, shooting, printing with a vintage platen press, photography, writing, woodworking, and repairing computers. His childhood heroes were Jesus, Spiderman, Montgomery Scott and Mr. Spock.

Fedora Community

When asked about his first impression of the Fedora community, Raby said, “For the most part, they are a pretty awesome bunch.” When he has issues, he is able to find answers by searching Google. In the early days, when he needed hand-holding, he always found people to help.

Dale shares a philosophy with Doctor Leonard McCoy about change. “I know, change is good, but changes just for the sake of making changes benefits nobody.” Raby continued, “Part of this may be that I am a bearded old goat who hates changes to my well-ordered world, but why would anybody have to move my (for example) mailcap file or the way GnuPG is implemented?” His penchant for keeping things the same is not limited to Fedora or computers. Raby said, “Don’t feel bad though, I also want to get rid of the throttle-body fuel injection in my Ford Ranger and install a carburetor.”

What hardware?

Raby has a Dell Latitude E6400 with an Intel Core 2 Duo T9600 and 4GB of memory. He also has a CyberWorks desktop with an Intel Celeron 430 on an ASUSTeK P5GC-MX/1333 motherboard.


What software?

Raby uses LibreOffice, Mutt, Thunderbird, both GNOME and LXDE desktops, and GnuPG. He has tried to get LemonPOS up and running, but had little success.

Raby has two stores and must transfer inventory between them. “As there are two stores and merchandise frequently travels from one store to the other, I use Dropbox to transfer files. Guns have to be tracked carefully,” said Dale. “We always generate an electronic document that gets stored in a shared file on Dropbox as well as a hard copy that accompanies the gun when it is transferred. Even if the paperwork gets lost, the receiving clerk can look at the electronic file and check the serial number and description against the actual gun itself.”

The paperwork for transfers is generated using LibreOffice. The business also auctions guns and needs pictures of them. Raby uses gThumb to manage the images, and GIMP to edit them when necessary.


When asked about the books he writes, Raby said, “I use FocusWriter to write my books, then store them in a shared Dropbox directory where Diane, my proofreader, can access them and find all of my mistakes before I submit the manuscript to Book County for distribution. With retirement looming in about ten years, it would be nice to have some future royalty checks coming in to supplement my Social Security.”

New Taskotron Tasks

For a while now, we in Fedora QA have been busy building Taskotron’s core features and didn’t have many resources for adding to the tasks that Taskotron runs. That changed a few weeks back when we started running the task-dockerautotest, task-abicheck and task-rpmgrill tasks in our dev environment. Since we have been happy with the results of those tasks, we deployed them to the production instance last week as well. Please note that the results of these tasks are informative only. Let’s introduce the tasks briefly:

Task-dockerautotest

Task-dockerautotest is the first package-specific task that Taskotron runs. It is triggered on each completed docker build in Koji. Please note that we wrote only the Taskotron wrapper for the dockerautotest test suite. The test suite’s git repo can be found here and the task’s git repo here.

Task-abicheck

Task-abicheck looks for ABI incompatibilities between two versions of a package. It is run only on a subset of packages in Fedora: currently the critical path (critpath) packages, excluding firefox, thunderbird, kernel, kdelibs, kdepim and qt. For now we limit the set of packages because of the amount of memory the task needs for some of them. Each time a build of a critpath package is completed in Koji, task-abicheck runs and compares the ABI of the new build with that of its previous version. Kudos to Dodji Seketeli for the work on libabigail and Sinny Kumari for writing the task! The task’s git repo can be found on pagure.
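
To give a feel for what such a task does, here is a minimal, hypothetical Python sketch built around libabigail’s abipkgdiff tool; the file names are placeholders and the real task (see its pagure repo) is considerably more involved:

import subprocess

# Packages excluded because the comparison needs too much memory for them.
EXCLUDED = {'firefox', 'thunderbird', 'kernel', 'kdelibs', 'kdepim', 'qt'}

def check_abi(old_rpm, new_rpm):
    # abipkgdiff exits with a non-zero status when it detects ABI
    # differences; a real run would typically also pass the matching
    # -debuginfo packages so that symbols can be resolved.
    result = subprocess.run(['abipkgdiff', old_rpm, new_rpm],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stdout

ok, report = check_abi('foo-1.0-1.fc24.x86_64.rpm',
                       'foo-1.1-1.fc24.x86_64.rpm')
print('ABI compatible' if ok else report)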

Task-rpmgrill

Task-rpmgrill is run each time a build is completed in Koji and performs a set of analysis tests on all RPMs built from the build’s source RPM. The task expands the capabilities of task-rpmlint with build log analysis and ELF checks, to name a few. Kudos to Ralph Bean for writing the Taskotron wrapper for the task! The task’s git repo can be found here.

Looking at the task results

If you are interested in looking at the results of the new tasks, you can do so via the ResultsDB frontend.

Glorious future

Recently, Miro Hroncok sent a review request for “Python Version Check”, which we are planning to review and deploy in the not so distant future. We are also looking for “guinea pigs” to write package-specific tasks so we can test our upcoming feature, dist-git style tasks. If you have a task (or an idea for one), either generic or package-specific, and want to run it in Taskotron, please do reach out to us.

Receiving notifications about results

If you wish to receive notifications about the new tasks’ results for your packages, you can set that up in FMN. If you are not sure how to set up notifications, my previous post on that topic should help.

Found an issue?

As usual, if you hit any issues and/or have any questions or suggestions, don’t hesitate to contact us either in #fedora-qa on Freenode IRC or on the qa-devel mailing list. You can also file bugs or send RFEs in our Phabricator instance.


RISC-V on an FPGA, pt. 3

Compiling the lowRISC software (full GCC, Linux and BusyBox) was straightforward. You have to follow the documentation here very carefully. The only small problem I had was that their environment variable script uses $TOP, which is also used by something else (bash? – not sure; in any case you just have to modify their script so it doesn’t use that name).

The above gets you enough to boot Linux on Spike, the RISC-V functional emulator. That’s not interesting to me, let’s see it running on my FPGA instead.

lowRISC again provides detailed instructions for building the FPGA bitstream, which you have to follow carefully. I had to remove the -version flags in a script, but otherwise it went fine.

It wasn’t clear to me what you’re supposed to do with the final bitstream (./lowrisc-chip-imp/lowrisc-chip-imp.runs/impl_1/chip_top.new.bit) file, but in fact you use it in Vivado to program the device:

[Screenshot: programming the device in Vivado]

The simple hello world example was successful (output shown below is from /dev/ttyUSB1 connected to the dev board):

[Screenshot: hello world output on /dev/ttyUSB1]


Blivet-gui 2.0

Blivet-gui reached another milestone a few days ago -- I've released version 2.0. This happened mostly because of the new Blivet 2.0 and its new API, but there are some new features in blivet-gui as well.

FreeIPA Lightweight CA internals

In the preceding post, I explained the use cases for the FreeIPA lightweight sub-CAs feature, how to manage CAs and use them to issue certificates, and current limitations. In this post I detail some of the internals of how the feature works, including how signing keys are distributed to replicas, and how sub-CA certificate renewal works. I conclude with a brief retrospective on delivering the feature.

Full details of the design of the feature can be found on the design page. This post does not cover everything from the design page, but we will look at the aspects that are covered from the perspective of the system administrator, i.e. "what is happening on my systems?"

Dogtag lightweight CA creation

The PKI system used by FreeIPA is called Dogtag. It is a separate project with its own interfaces; most FreeIPA certificate management features are simply reflecting a subset of the corresponding Dogtag interface, often integrating some additional access controls or identity management concepts. This is certainly the case for FreeIPA sub-CAs. The Dogtag lightweight CAs feature was implemented initially to support the FreeIPA use case, yet not all aspects of the Dogtag feature are used in FreeIPA as of v4.4, and other consumers of the Dogtag feature are likely to emerge (in particular: OpenStack).

The Dogtag lightweight CAs feature has its own design page which documents the feature in detail, but it is worth mentioning some important aspects of the Dogtag feature and their impact on how FreeIPA uses the feature.

  • Dogtag lightweight CAs are managed via a REST API. The FreeIPA framework uses this API to create and manage lightweight CAs, using the privileged RA Agent certificate to authenticate (a sketch of such a call follows this list). In a future release we hope to remove the RA Agent and authenticate as the FreeIPA user using GSS-API proxy credentials.
  • Each CA in a Dogtag instance, including the "main" CA, has an LDAP entry with object class authority. The schema includes fields such as subject and issuer DN, certificate serial number, and a UUID primary key, which is randomly generated for each CA. When FreeIPA creates a CA, it stores this UUID so that it can map the FreeIPA CA’s common name (CN) to the Dogtag authority ID in certificate requests or other management operations (e.g. CA deletion).
  • The "nickname" of the lightweight CA signing key and certificate in Dogtag’s NSSDB is the nickname of the "main" CA signing key, with the lightweight CA’s UUID appended. In general operation FreeIPA does not need to know this, but the ipa-certupdate program has been enhanced to set up Certmonger tracking requests for FreeIPA-managed lightweight CAs and therefore it needs to know the nicknames.
  • Dogtag lightweight CAs may be nested, but FreeIPA as of v4.4 does not make use of this capability.
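
As an illustration of what such a REST call can look like (this is a sketch, not the FreeIPA framework’s actual code; the /ca/rest/authorities path is from the Dogtag design page, and the host, port and file paths are placeholders):

import requests

# List the configured lightweight CAs, authenticating with the RA Agent
# client certificate. Host, port and key/cert paths are placeholders.
resp = requests.get('https://f24b-0.ipa.local:8443/ca/rest/authorities',
                    cert=('/path/to/ra-agent.pem', '/path/to/ra-agent.key'),
                    verify='/etc/ipa/ca.crt',
                    headers={'Accept': 'application/json'})
resp.raise_for_status()
print(resp.text)  # JSON describing each authority, including its UUID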

So, let’s see what actually happens on a FreeIPA server when we add a lightweight CA. We will use the sc example from the previous post. The command executed to add the CA, with its output, was:

% ipa ca-add sc --subject "CN=Smart Card CA, O=IPA.LOCAL" \
    --desc "Smart Card CA"
---------------
Created CA "sc"
---------------
  Name: sc
  Description: Smart Card CA
  Authority ID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
  Subject DN: CN=Smart Card CA,O=IPA.LOCAL
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330

The LDAP entry added to the Dogtag database was:

dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authoritySerial: 63
objectClass: authority
objectClass: top
cn: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
authorityID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
authorityKeyNickname: caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d87
 4c84fd
authorityKeyHost: f24b-0.ipa.local:443
authorityEnabled: TRUE
authorityDN: CN=Smart Card CA,O=IPA.LOCAL
authorityParentDN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
authorityParentID: d3e62e89-df27-4a89-bce4-e721042be730

We see the authority UUID in the authorityID attribute, as well as in the cn and the DN. authorityKeyNickname records the nickname of the signing key in Dogtag’s NSSDB. authorityKeyHost records which hosts possess the signing key – currently just the host on which the CA was created. authoritySerial records the serial number of the certificate (more on that later). The meaning of the rest of the fields should be clear.

If we have a peek into Dogtag’s NSSDB, we can see the new CA’s certificate:

# certutil -d /etc/pki/pki-tomcat/alias -L

Certificate Nickname              Trust Attributes
                                  SSL,S/MIME,JAR/XPI

caSigningCert cert-pki-ca         CTu,Cu,Cu
auditSigningCert cert-pki-ca      u,u,Pu
Server-Cert cert-pki-ca           u,u,u
caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd u,u,u
ocspSigningCert cert-pki-ca       u,u,u
subsystemCert cert-pki-ca         u,u,u

There it is, alongside the main CA signing certificate and other certificates used by Dogtag. The trust flags u,u,u indicate that the private key is also present in the NSSDB. If we pretty print the certificate we will see a few interesting things:

# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 63 (0x3f)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201606201330"
        Validity:
            Not Before: Fri Jul 15 05:46:00 2016
            Not After : Tue Jul 15 05:46:00 2036
        Subject: "CN=Smart Card CA,O=IPA.LOCAL"
        ...
        Signed Extensions:
            ...
            Name: Certificate Basic Constraints
            Critical: True
            Data: Is a CA with no maximum path length.
            ...

Observe that:

  • The certificate is indeed a CA.
  • The serial number (63) agrees with the CA’s LDAP entry.
  • The validity period is 20 years, the default for CAs in Dogtag. This cannot be overridden on a per-CA basis right now, but addressing this is a priority.

Finally, let’s look at the raw entry for the CA in the FreeIPA database:

dn: cn=sc,cn=cas,cn=ca,dc=ipa,dc=local
cn: sc
ipaCaIssuerDN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
objectClass: ipaca
objectClass: top
ipaCaSubjectDN: CN=Smart Card CA,O=IPA.LOCAL
ipaCaId: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
description: Smart Card CA

We can see that this entry also contains the subject and issuer DNs, and the ipaCaId attribute holds the Dogtag authority ID, which allows the FreeIPA framework to dereference the local ID (sc) to the Dogtag ID as needed. We also see that the description attribute is local to FreeIPA; Dogtag also has a description attribute for lightweight CAs but FreeIPA uses its own.
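
To illustrate the dereference (again a sketch, not FreeIPA’s actual code), reading ipaCaId takes a single LDAP search; the bind credentials below are placeholders:

from ldap3 import Server, Connection

# Look up the Dogtag authority ID for the FreeIPA CA named "sc".
conn = Connection(Server('ldaps://f24b-0.ipa.local'),
                  user='cn=Directory Manager', password='secret',
                  auto_bind=True)
conn.search('cn=sc,cn=cas,cn=ca,dc=ipa,dc=local', '(objectClass=ipaca)',
            attributes=['ipaCaId'])
authority_id = conn.entries[0].ipaCaId.value
print(authority_id)  # 660ad30b-7be4-4909-aa2c-2c7d874c84fd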

Lightweight CA replication

FreeIPA servers replicate objects in the FreeIPA directory among themselves, as do Dogtag replicas (note: in Dogtag, the term clone is often used). All Dogtag instances in a replicated environment need to observe changes to lightweight CAs (creation, modification, deletion) that were performed on another replica and update their own view so that they can respond to requests consistently. This is accomplished via an LDAP persistent search which is run in a monitor thread. Care was needed to avoid race conditions. Fortunately, the solution for LDAP-based profile storage provided a fine starting point for the authority monitor; although lightweight CAs are more complex, many of the same race conditions can occur and these were already addressed in the LDAP profile monitor implementation.

But unlike LDAP-based profiles, a lightweight CA consists of more than just an LDAP object; there is also the signing key. The signing key lives in Dogtag’s NSSDB and for security reasons cannot be transported through LDAP. This means that when a Dogtag clone observes the addition of a lightweight CA, an out-of-band mechanism to transport the signing key must also be triggered.

This mechanism is covered in the design pages, but the summarised process is as follows (a sketch of the retrieval step appears after the list):

  1. A Dogtag clone observes the creation of a CA on another server and starts a KeyRetriever thread. The KeyRetriever is implemented as part of Dogtag, but it is configured to run the /usr/libexec/ipa/ipa-pki-retrieve-key program, which is part of FreeIPA. The program is invoked with arguments of the server to request the key from (this was stored in the authorityKeyHost attribute mentioned earlier), and the nickname of the key to request.
  2. ipa-pki-retrieve-key requests the key from the Custodia daemon on the source server. It authenticates as the dogtag/<requestor-hostname>@REALM service principal. If authenticated and authorised, the Custodia daemon exports the signing key from Dogtag’s NSSDB wrapped by the main CA’s private key, and delivers it to the requesting server. ipa-pki-retrieve-key outputs the wrapped key then exits.
  3. The KeyRetriever reads the wrapped key and imports (unwraps) it into the Dogtag clone’s NSSDB. It then initialises the Dogtag CA’s Signing Unit allowing the CA to service signing requests on that clone, and adds its own hostname to the CA’s authorityKeyHost attribute.
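
To make step 1 concrete, here is a hypothetical Python sketch of the retrieval loop; the helper invocation matches the debug log excerpt further below, but the surrounding code is illustrative only:

import subprocess

def retrieve_key(nickname, hosts):
    # Try each host recorded in authorityKeyHosts until one delivers the
    # signing key (wrapped by the main CA's private key).
    for host in hosts:
        proc = subprocess.run(
            ['/usr/libexec/ipa/ipa-pki-retrieve-key', nickname, host],
            capture_output=True)
        if proc.returncode == 0:
            return proc.stdout  # the wrapped key material
    return None  # the caller sleeps and retries, backing off exponentially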

Some excerpts from the CA debug log on the clone (not the server on which the sub-CA was first created) show this process in action. The CA debug log is found at /var/log/pki/pki-tomcat/ca/debug. Some irrelevant messages have been omitted.

[25/Jul/2016:15:45:56][authorityMonitor]: authorityMonitor: Processed change controls.
[25/Jul/2016:15:45:56][authorityMonitor]: authorityMonitor: ADD
[25/Jul/2016:15:45:56][authorityMonitor]: readAuthority: new entryUSN = 109
[25/Jul/2016:15:45:56][authorityMonitor]: CertificateAuthority init 
[25/Jul/2016:15:45:56][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[25/Jul/2016:15:45:56][authorityMonitor]: SigningUnit init: debug Certificate object not found
[25/Jul/2016:15:45:56][authorityMonitor]: CA signing key and cert not (yet) present in NSSDB
[25/Jul/2016:15:45:56][authorityMonitor]: Starting KeyRetrieverRunner thread

Above we see the authorityMonitor thread observe the addition of a CA. It adds the CA to its internal map and attempts to initialise it, which fails because the key and certificate are not available, so it starts a KeyRetrieverRunner in a new thread.

[25/Jul/2016:15:45:56][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Running ExternalProcessKeyRetriever
[25/Jul/2016:15:45:56][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: About to execute command: [/usr/libexec/ipa/ipa-pki-retrieve-key, caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd, f24b-0.ipa.local]

The KeyRetrieverRunner thread invokes ipa-pki-retrieve-key with the nickname of the key it wants, and a host from which it can retrieve it. If a CA has multiple sources, the KeyRetrieverRunner will try these in order with multiple invocations of the helper, until one succeeds. If none succeed, the thread goes to sleep and retries when it wakes up, initially after 10 seconds and backing off exponentially thereafter.
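
That retry behaviour is standard exponential backoff. A minimal sketch (the initial 10-second delay comes from the description above; the doubling factor and the cap are my assumptions):

import time

def retry_with_backoff(attempt, initial_delay=10, factor=2, max_delay=3600):
    # Call attempt() until it succeeds, sleeping 10s, 20s, 40s, ...
    # between failures, capped at max_delay.
    delay = initial_delay
    while not attempt():
        time.sleep(delay)
        delay = min(delay * factor, max_delay)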

[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Importing key and cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Reinitialising SigningUnit
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got token Internal Key Storage Token by name
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 63
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got private key from cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got public key from cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL

The key retriever successfully returned the key data and import succeeded. The signing unit then gets initialised.

[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Adding self to authorityKeyHosts attribute
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: In LdapBoundConnFactory::getConn()
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: postCommit: new entryUSN = 361
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: postCommit: nsUniqueId = 4dd42782-4a4f11e6-b003b01c-c8916432
[25/Jul/2016:15:47:14][authorityMonitor]: authorityMonitor: Processed change controls.
[25/Jul/2016:15:47:14][authorityMonitor]: authorityMonitor: MODIFY
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: new entryUSN = 361
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: known entryUSN = 361
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: data is current

Finally, the Dogtag clone adds itself to the CA’s authorityKeyHosts attribute. The authorityMonitor observes this change but ignores it because its view is current.

Certificate renewal

CA signing certificates will eventually expire, and therefore require renewal. Because the FreeIPA framework operates with low privileges, it cannot add Certmonger tracking requests for sub-CAs when it creates them. Furthermore, although the renewal (i.e. the actual signing of a new certificate for the CA) should only happen on one server, the certificate must be updated in the NSSDB of all Dogtag clones.

As mentioned earlier, the ipa-certupdate command has been enhanced to add Certmonger tracking requests for FreeIPA-managed lightweight CAs. The actual renewal will only be performed on whichever server is the renewal master when Certmonger decides it is time to renew the certificate (assuming that the tracking request has been added on that server).

Let’s run ipa-certupdate on the renewal master to add the tracking request for the new CA. First observe that the tracking request does not exist yet:

# getcert list -d /etc/pki/pki-tomcat/alias |grep subject
        subject: CN=CA Audit,O=IPA.LOCAL 201606201330
        subject: CN=OCSP Subsystem,O=IPA.LOCAL 201606201330
        subject: CN=CA Subsystem,O=IPA.LOCAL 201606201330
        subject: CN=Certificate Authority,O=IPA.LOCAL 201606201330
        subject: CN=f24b-0.ipa.local,O=IPA.LOCAL 201606201330

As expected, we do not see our sub-CA certificate above. After running ipa-certupdate the following tracking request appears:

Request ID '20160725222909':
        status: MONITORING
        stuck: no
        key pair storage: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd',token='NSS Certificate DB',pin set
        certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd',token='NSS Certificate DB'
        CA: dogtag-ipa-ca-renew-agent
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201606201330
        subject: CN=Smart Card CA,O=IPA.LOCAL
        expires: 2036-07-15 05:46:00 UTC
        key usage: digitalSignature,nonRepudiation,keyCertSign,cRLSign
        pre-save command: /usr/libexec/ipa/certmonger/stop_pkicad
        post-save command: /usr/libexec/ipa/certmonger/renew_ca_cert "caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd"
        track: yes
        auto-renew: yes

As for updating the certificate in each clone’s NSSDB, Dogtag itself takes care of that. All that is required is for the renewal master to update the CA’s authoritySerial attribute in the Dogtag database. The renew_ca_cert Certmonger post-renewal hook script performs this step. Each Dogtag clone observes the update (in the monitor thread), looks up the certificate with the indicated serial number in its certificate repository (a new entry that will also have been recently replicated to the clone), and adds that certificate to its NSSDB. Again, let’s observe this process by forcing a certificate renewal:

# getcert resubmit -i 20160725222909
Resubmitting "20160725222909" to "dogtag-ipa-ca-renew-agent".

After about 30 seconds the renewal process is complete. When we examine the certificate in the NSSDB we see, as expected, a new serial number:

# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n "caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd" \
    | grep -i serial
        Serial Number: 74 (0x4a)

We also see that the renew_ca_cert script has updated the serial in Dogtag’s database:

# ldapsearch -D cn="Directory Manager" -w4me2Test -b o=ipaca \
    '(cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd)' authoritySerial
dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authoritySerial: 74
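
At its core, the hook’s essential change is just that one LDAP modify. A sketch of the equivalent operation with ldap3 (the real renew_ca_cert is a FreeIPA helper script; the bind credentials mirror the ldapsearch example above):

from ldap3 import Server, Connection, MODIFY_REPLACE

conn = Connection(Server('ldaps://f24b-0.ipa.local'),
                  user='cn=Directory Manager', password='4me2Test',
                  auto_bind=True)
dn = ('cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,'
      'ou=authorities,ou=ca,o=ipaca')
# Every Dogtag clone's monitor thread notices this change and imports
# the certificate with the new serial from its certificate repository.
conn.modify(dn, {'authoritySerial': [(MODIFY_REPLACE, ['74'])]})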

Finally, if we look at the CA debug log on the clone, we’ll see that the authority monitor observes the serial number change and updates the certificate in its own NSSDB (again, some irrelevant or low-information messages have been omitted):

[26/Jul/2016:10:43:28][authorityMonitor]: authorityMonitor: Processed change controls.
[26/Jul/2016:10:43:28][authorityMonitor]: authorityMonitor: MODIFY
[26/Jul/2016:10:43:28][authorityMonitor]: readAuthority: new entryUSN = 1832
[26/Jul/2016:10:43:28][authorityMonitor]: readAuthority: known entryUSN = 361
[26/Jul/2016:10:43:28][authorityMonitor]: CertificateAuthority init 
[26/Jul/2016:10:43:28][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[26/Jul/2016:10:43:28][authorityMonitor]: Got token Internal Key Storage Token by name
[26/Jul/2016:10:43:28][authorityMonitor]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 63
[26/Jul/2016:10:43:28][authorityMonitor]: Got private key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: Got public key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: CA signing unit inited
[26/Jul/2016:10:43:28][authorityMonitor]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL
[26/Jul/2016:10:43:28][authorityMonitor]: Updating certificate in NSSDB; new serial number: 74

When the authority monitor processes the change, it reinitialises the CA including its signing unit. Then it observes that the serial number of the certificate in its NSSDB differs from the serial number from LDAP. It pulls the certificate with the new serial number from its certificate repository, imports it into NSSDB, then reinitialises the signing unit once more and sees the correct serial number:

[26/Jul/2016:10:43:28][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[26/Jul/2016:10:43:28][authorityMonitor]: Got token Internal Key Storage Token by name
[26/Jul/2016:10:43:28][authorityMonitor]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 74
[26/Jul/2016:10:43:28][authorityMonitor]: Got private key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: Got public key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: CA signing unit inited
[26/Jul/2016:10:43:28][authorityMonitor]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL

Currently this update mechanism is only used for lightweight CAs, but it would work just as well for the main CA too, and we plan to switch at some stage so that the process is consistent for all CAs.

Wrapping up

I hope you have enjoyed this tour of some of the lightweight CA internals, and in particular seeing how the design actually plays out on your systems in the real world.

FreeIPA lightweight CAs has been the most complex and challenging project I have ever undertaken. It took the best part of a year from early design and proof of concept, to implementing the Dogtag lightweight CAs feature, then FreeIPA integration, and numerous bug fixes, refinements or outright redesigns along the way. Although there are still some rough edges, some important missing features and, I expect, many an RFE to come, I am pleased with what has been delivered and the overall design.

Thanks are due to all of my colleagues who contributed to the design and review of the feature; each bit of input from all of you has been valuable. I especially thank Ade Lee and Endi Dewata from the Dogtag team for their help with API design and many code reviews over a long period of time, and from the FreeIPA team Jan Cholasta and Martin Babinsky for their invaluable input into the design, and much code review and testing. I could not have delivered this feature without your help; thank you for your collaboration!