July 27, 2016

Fedora Women Day 2016


Fedora Women Day is celebrated to raise awareness of, and bring together, women contributors across the Fedora Project. It is a great opportunity to network with other women in Fedora and talk about their contributions and work in the project.

The event was held at Netaji Subhash Engineering College (NSEC) in Kolkata, India, on 15 July 2016.

Fedora Women Day was also celebrated in Pune, India, and Tirana, Albania: https://fedoraproject.org/wiki/Fedora_Women_Day_2016#Local_Events

The event started at 10:30 AM. Women started coming in and it was a pretty nice crowd.

The event started with my talk. It was my first talk so I was really excited.

I talked about Fedora Women Day and its purpose. Then I talked about the work I do in the Fedora Project. Most of my talk covered Fedora Infrastructure and Fedora Cloud.

Since most of my contributions are to Bodhi (Fedora Infrastructure) and Tunirtests (Fedora Cloud), I gave some insight into these projects specifically. I explained the architecture of Bodhi and Tunirtests and how one can start contributing to them.

I also shared my story on how I started contributing to Fedora Project.

Here are the slides from my talk: trishnaguha.github.io/trishnagslides-what-i-do-in-fedora-how-can-you-get-involved.html

A few hours after my talk I had to leave early for some urgent work. You will find the full event report here: Event Report.

I received Fedora stickers, an F24 Workstation DVD and a Fedora T-shirt, but I am not sure I can wear the T-shirt; it seems so large 😦.



Spartans Team of Brazil

Post Number 1



July 26, 2016

Separation of F25 and Rawhide

This Tuesday, July 26, marked a new milestone in the development of the next version of Fedora, namely Fedora 25.

This is when the project's entire internal infrastructure and its contributors gear up to host a new development branch for Fedora 25. It also means that, at this point, all packages are duplicated: one copy for Rawhide, the other for Fedora 25.

From now on, updates for the two systems will differ, and so will package versions. Fedora 25 will continue down the road of stabilization, following the usual Alpha, Beta and Release Candidate process, along with test days and other events aimed at improving its quality.

For those running Rawhide, this means choosing between continuing to test the perpetually-developing branch and helping to stabilize Fedora 25. The latter amounts to disabling the Rawhide repository, enabling the Fedora ones, and running a synchronization. The longer a user waits to switch tracks, the greater the risk of problems during the operation.

This day is also the occasion to sort the features accepted for Fedora 25. A few weeks ago, developers submitted their lists of planned work for this release. Some proposals were accepted; others were rejected at that stage.

As of today, the proposals that were previously accepted must be testable, meaning the bulk of each new feature is already in place. The goal is for the months remaining before the official release of Fedora 25 to be spent stabilizing and validating these changes. Doing that properly takes time, hence the importance of having these features already operational, or nearly so.

We cannot say it often enough: the Fedora Project is a community distribution where everyone can contribute their share. Don't hesitate to install Rawhide or the future F25 (or upgrade to it) and report any anomaly you find. The earlier that happens, the better the chances of a fix before release.
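For reference, the repository switch described above can be sketched as a single dnf invocation. This is only a sketch: the repo ids (rawhide, fedora, updates, updates-testing) are Fedora's standard ones, and you should check the release announcements before running anything like this on a real system.

```shell
# Sketch of moving a Rawhide install onto the Fedora 25 branch.
# Wrapped in a function; run it as root once you are ready.
switch_to_f25() {
    dnf --releasever=25 \
        --disablerepo=rawhide \
        --enablerepo=fedora --enablerepo=updates --enablerepo=updates-testing \
        distro-sync "$@"
}
```

Running `switch_to_f25` with no arguments synchronizes every installed package to the version available in the F25 repositories, downgrading where Rawhide was ahead.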

Fedora 24: systemd-analyze (solved)

Odd Results

I read an article about systemd-analyze and tried it on my Dell XPS 13 9343.

Startup finished in 6.363s (firmware) + 6.029s (loader) + 1.020s (kernel) + 2.756s (initrd) + 1min 30.405s (userspace) = 1min 46.575s

Running this on my Lenovo T530 still running Fedora 23 resulted in the following:

Startup finished in 4.625s (firmware) + 6.467s (loader) + 1.898s (kernel) + 2.153s (initrd) + 7.929s (userspace) = 23.073s

So, to determine whether this is an issue with Fedora 24 and not my Dell, I booted the T530 into Fedora 24 and re-ran systemd-analyze on it, getting the following results:

Startup finished in 6.127s (firmware) + 15.969s (loader) + 4.068s (kernel) + 3.066s (initrd) + 1min 31.377s (userspace) = 2min 610ms

When the boot first completed and I was able to open a terminal window on the desktop to run the systemd-analyze command, I got a message that boot was not finished.

Bootup is not yet finished. Please try again later.

This result was duplicated on the Dell XPS 13 as well. I then tried troubleshooting the issue with systemd-analyze blame and got the following:

3.850s plymouth-quit-wait.service
1.872s dnf-makecache.service
1.613s plymouth-start.service
674ms systemd-udev-settle.service
598ms firewalld.service
591ms dev-mapper-fedora\x2droot.device
430ms lvm2-monitor.service
273ms lvm2-pvscan@8:3.service
231ms libvirtd.service
216ms accounts-daemon.service
205ms cups.service
177ms udisks2.service
147ms systemd-logind.service
116ms ModemManager.service
112ms user@42.service
111ms upower.service
98ms user@1000.service
97ms proc-fs-nfsd.mount
88ms abrtd.service
87ms systemd-udev-trigger.service
87ms packagekit.service
86ms polkit.service
82ms systemd-journald.service
76ms iio-sensor-proxy.service
68ms systemd-udevd.service
65ms systemd-journal-flush.service
64ms gdm.service
63ms systemd-vconsole-setup.service
61ms unbound-anchor.service
56ms NetworkManager.service
48ms abrt-ccpp.service
48ms bluetooth.service
48ms systemd-fsck@dev-disk-by\x2duuid-63a65c9c\x2dff2d\x2d4e27\x2dac05\x2dae7b80c0fb3b.service
47ms systemd-fsck@dev-disk-by\x2duuid-880D\x2dE9AE.service
47ms systemd-tmpfiles-setup-dev.service
46ms colord.service
43ms avahi-daemon.service
41ms fedora-readonly.service
40ms gssproxy.service
37ms rtkit-daemon.service
33ms systemd-fsck@dev-mapper-fedora\x2dhome.service
30ms chronyd.service
28ms systemd-fsck-root.service
24ms systemd-tmpfiles-setup.service
24ms systemd-rfkill.service
21ms dmraid-activation.service
21ms systemd-remount-fs.service
20ms livesys-late.service
20ms kmod-static-nodes.service
19ms home.mount
19ms livesys.service
19ms fedora-import-state.service
18ms dev-mapper-fedora\x2dswap.swap
18ms dev-hugepages.mount
17ms wpa_supplicant.service
16ms systemd-sysctl.service
16ms auditd.service
15ms systemd-tmpfiles-clean.service
14ms boot-efi.mount
13ms plymouth-read-write.service
12ms rpc-statd-notify.service
10ms systemd-user-sessions.service
10ms dev-mqueue.mount
9ms blk-availability.service
8ms sys-kernel-debug.mount
6ms systemd-random-seed.service
6ms sys-fs-fuse-connections.mount
6ms boot.mount
6ms systemd-backlight@leds:dell::kbd_backlight.service
5ms nfs-config.service
5ms tmp.mount
4ms systemd-update-utmp-runlevel.service
4ms systemd-update-utmp.service
4ms var-lib-nfs-rpc_pipefs.mount
4ms dracut-shutdown.service
3ms systemd-backlight@backlight:intel_backlight.service
2ms sys-kernel-config.mount

I am still searching for an explanation, but Google searches are not turning up much that is useful. In the end this is curiosity more than something that actually impacts me: I am able to start working long before systemd-analyze is capable of giving me results, and the computer is certainly not taking 1 minute and 30 seconds to boot. In fact, when I timed the boot, it took only 18.5 seconds to get to the desktop.
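One thing worth noticing in the blame output above: the per-unit times are tiny, and even their sum comes nowhere near the 1 min 30 s userspace figure (blame times also overlap, since units start in parallel), which hints that something was waiting rather than working. A quick sketch of that sanity check, fed a few sample lines from the output above:

```shell
# Sum the per-unit times reported by `systemd-analyze blame`,
# converting ms entries to seconds.
sum_blame() {
    awk '{ t = $1
           if (t ~ /ms$/)     { sub(/ms$/, "", t); s += t / 1000 }
           else if (t ~ /s$/) { sub(/s$/,  "", t); s += t }
         } END { printf "%.3f\n", s }'
}

sum_blame <<'EOF'
3.850s plymouth-quit-wait.service
1.872s dnf-makecache.service
674ms systemd-udev-settle.service
EOF
# prints 6.396
```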

Update:

Thanks to the comment from Michal Schmidt, I ran systemctl list-jobs after confirming that systemd-analyze thought the system was still booting up. The results are:

[cprofitt@tardis-xps ~]$ systemd-analyze
Bootup is not yet finished. Please try again later.
[cprofitt@tardis-xps ~]$ systemctl list-jobs
JOB UNIT                                              TYPE  STATE
243 hypervkvpd.service                                start waiting
246 sys-devices-virtual-misc-vmbus\x21hv_vss.device   start running
247 systemd-update-utmp-runlevel.service              start waiting
111 multi-user.target                                 start waiting
234 sys-devices-virtual-misc-vmbus\x21hv_fcopy.device start running
110 graphical.target                                  start waiting
244 sys-devices-virtual-misc-vmbus\x21hv_kvp.device   start running
245 hypervvssd.service                                start waiting
233 hypervfcopyd.service                              start waiting

9 jobs listed.

I have more items to go on now and will start doing more research.

Update 2:

I found a Fedora bug regarding these services, and further research turned up a blog post recommending disabling the following services:

systemctl disable hypervkvpd.service
systemctl disable hypervfcopyd.service
systemctl disable hypervvssd.service

With these services disabled I rebooted the laptop and was immediately able to get results from systemd-analyze:

Startup finished in 6.104s (firmware) + 5.973s (loader) + 1.025s (kernel) + 2.728s (initrd) + 5.969s (userspace) = 21.801s

A huge reduction in time. While the services were not causing any ‘real’ impact, they also appear to provide no functionality unless running on a Hyper-V system.
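Rather than disabling the units outright, another option (a sketch, not something tested here; the drop-in path is hypothetical) is to make each unit start only when the machine is actually a Hyper-V guest, which systemd's ConditionVirtualization=microsoft expresses directly:

```
# /etc/systemd/system/hypervkvpd.service.d/hyperv-only.conf
# (hypothetical drop-in; repeat for hypervvssd and hypervfcopyd)
[Unit]
ConditionVirtualization=microsoft
```

After adding the drop-ins and running `systemctl daemon-reload`, the units are skipped on non-Hyper-V hardware instead of being started and left waiting.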


Fedora 25 development enters the hot phase

In accordance with the schedule for Fedora 25, the Fedora 25 development branch was split off from the main branch (“master”) in Fedora Git today, which means that Fedora 25 development is entering its hot phase.

In case anyone is wondering why Fedora 25 is being branched so soon after the Fedora 24 release: because the Fedora 24 release date unusually lagged GNOME's by several months, a shortened release cycle was decided for Fedora 25, for example by forgoing a complete rebuild of the packages for Fedora 25. Features and new functionality that would require such a mass rebuild were consequently postponed to Fedora 26.

Provided no major problems arise during the development of Fedora 25, Fedora 25 is due to be released at the beginning of November.

The published news is compiled to the best of our knowledge and belief. No guarantee is given for its completeness and/or correctness.

Chromium is now available in the official Fedora repositories

Briefly noted: as of yesterday, Chromium packages are available in the updates-testing repository of Fedora 24. Whether there will also be packages for Fedora 23 is not yet known.

Anyone who does not want to wait for Chromium to land in the regular updates repository can install it simply by running

su -c'dnf install chromium --enablerepo=updates-testing'

However, it can happen that Chromium fails to display some pages if Google Chrome is installed alongside it and Chromium is configured to use Google Chrome's Flash plugin.

Update: I just noticed that Chromium has not actually landed in the updates-testing repository yet, but is only sitting in the queue for the next push of updates to the repositories. Still, it should not take much longer for Chromium to reach updates-testing.

Elections Retrospective, July 2016

The results are in! The Fedora Elections for the Fedora 24 release cycle of FESCo and the Council concluded on Tuesday, July 26th. The results are posted on the Fedora Voting Application and announced on the mailing lists. You can also find the full list of winning candidates below. I would also like to share some interesting statistics in this July 2016 Elections Retrospective.

2016 Elections Retrospective Report

In short, voter turnout is approximately at its average level (well, slightly below).

Fedora Engineering Steering Committee (FESCo)

We had four vacant seats and five nominations for the F24 cycle, with 196 voters casting their votes.

FESCo Winning Candidates Votes
Stephen Gallagher (sgallagh) [info] 655
Josh Boyer (jwb/jwboyer) [info] 619
Dennis Gilmore (dgilmore/ausil) [info] 557
Dominik Mierzejewski (rathann) [info] 474

Compared to the historical data, with 196 voters we are slightly below the average of 213 voters, which seems understandable given that we are in the middle of the vacation season.


The statistics showing how many people voted each day during the voting period are also quite interesting.


For this Election, we sent reminders on the first, fourth, and last day of the voting period, which is reflected in the charts as an increase in voters compared to the day before each reminder.

Out of the four elected nominees, three (sgallagh, jwboyer and dgilmore) were elected for a repeat term. One elected nominee (rathann) was elected for the first time (AFAIK).

Fedora Council

We had one vacant seat and two nominations for the Fedora 24 cycle, with 189 voters casting their votes.

Council Winning Candidate Votes
Langdon White (langdon) [info] 240

The Fedora Council came into existence in November 2014, and hence we do not have much previous data. Historically, before there was a Council, there was a Board. On the chart below you can see the comparison between voter turnout for the Fedora Board elections and the Council elections. The average voter turnout for Council and Board elections combined is 221; for Council elections alone the average is 206.


The profile of the number of voters per day was similar to the one we saw for FESCo.


Langdon has been elected for a repeat term and will stay on the Council for at least the next two release cycles.

You can find some more Council Election related metrics here.

Special Thanks

Congratulations to the winning candidates, and thank you to all the candidates who ran this election! Community governance is core to the Fedora Project, and we couldn’t do it without your involvement and support.

A special thanks to jflory7 and the members of the CommOps Team for helping organize another successful round of Elections!

And last but not least, thank YOU to all the Fedora community members who participated and voted this election cycle. Stay tuned for future Elections Retrospective articles!

The post Elections Retrospective, July 2016 appeared first on Fedora Community Blog.

RISC-V on an FPGA, pt. 5

I’ve learned a few things about this process. These are just my random thoughts in no particular order:

  • You need the heavily proprietary Xilinx Vivado to compile the Verilog source code to a bitstream. However, you can use free tools to write the bitstream to the FPGA (e.g. xc3sprog).
  • The core lowRISC CPU is free, but that’s not everything that goes into the bitstream. Also in the bitstream are some peripherals, such as the UART, and those are Xilinx proprietary IP. So the bitstream isn’t fully free and more importantly isn’t redistributable. (Note, according to this talk there is a plan to fix this).
  • This is a nice summary of the v0.2 architecture, esp. page 4.

RISC-V on an FPGA, pt. 4

It boots!

lowRISC boot program
=====================================
Load boot into memory
Load 11660 bytes to memory.
Read boot and load elf to DDR memory
Boot the loaded program...
[    0.000000] Linux version 3.14.41-g9a25e8d (rjones@moo.home.annexia.org) (gcc version 5.2.0 (GCC) ) #1 Mon Jul 25 19:07:50 BST 2016
[    0.000000] Available physical memory: 126MB
[    0.000000] Zone ranges:
[    0.000000]   Normal   [mem 0x00200000-0x07ffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00200000-0x07ffffff]
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 31815
[    0.000000] Kernel command line: root=/dev/htifblk0
[    0.000000] PID hash table entries: 512 (order: 0, 4096 bytes)
[    0.000000] Dentry cache hash table entries: 16384 (order: 5, 131072 bytes)
[    0.000000] Inode-cache hash table entries: 8192 (order: 4, 65536 bytes)
[    0.000000] Sorting __ex_table...
[    0.000000] Memory: 124488K/129024K available (1725K kernel code, 120K rwdata, 356K rodata, 68K init, 211K bss, 4536K reserved)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[    0.000000] NR_IRQS:2
[    0.150000] Calibrating delay using timer specific routine.. 20.01 BogoMIPS (lpj=100097)
[    0.150000] pid_max: default: 32768 minimum: 301
[    0.150000] Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
[    0.150000] Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)
[    0.150000] devtmpfs: initialized
[    0.150000] NET: Registered protocol family 16
[    0.150000] bio: create slab  at 0
[    0.150000] Switched to clocksource riscv_clocksource
[    0.150000] NET: Registered protocol family 2
[    0.150000] TCP established hash table entries: 1024 (order: 1, 8192 bytes)
[    0.150000] TCP bind hash table entries: 1024 (order: 1, 8192 bytes)
[    0.150000] TCP: Hash tables configured (established 1024 bind 1024)
[    0.150000] TCP: reno registered
[    0.150000] UDP hash table entries: 256 (order: 1, 8192 bytes)
[    0.150000] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[    0.150000] NET: Registered protocol family 1
[    0.150000] futex hash table entries: 256 (order: 0, 6144 bytes)
[    0.150000] io scheduler noop registered
[    0.150000] io scheduler cfq registered (default)
[    0.180000] htifcon htif0: detected console
[    0.190000] console [htifcon0] enabled
[    0.190000] htifblk htif1: detected disk
[    0.190000] htifblk htif1: added htifblk0
[    0.190000] TCP: cubic registered
[    0.190000] VFS: Mounted root (ext2 filesystem) readonly on device 254:0.
[    0.190000] devtmpfs: mounted
[    0.190000] Freeing unused kernel memory: 68K (ffffffff80000000 - ffffffff80011000)
# uname -a
Linux ucbvax 3.14.41-g9a25e8d #1 Mon Jul 25 19:07:50 BST 2016 riscv GNU/Linux

Dale Raby: How do you Fedora?

We recently interviewed Dale Raby on how he uses Fedora. This is part of a series on the Fedora Magazine where we profile Fedora users and how they use Fedora to get things done. If you are interested in being interviewed for a further installment of this series, contact us on the feedback form.

Who is Dale Raby?

Dale started using Linux around 1999 when he became disconcerted with his Windows 95 computer and a young clerk in an office supply store told him about Linux. “I started reading some of the magazines, most notably Maximum Linux and eventually got to know their senior editor, Woody Hughes and Show Me the Code columnist Mae Ling Mak,” said Raby. His first distribution was Mandrake 6.5 which came in a box with a boot floppy.

Raby manages a small gun shop in Green Bay, Wisconsin. He is also an author with four published books: The Post-Apocalyptic Blacksmith, 777 Bon Mots for Gunslingers and Other Real Men, The Wives of Jacob I, and In the Beginning.

Dale’s hobbies include blacksmithing, hunting, shooting, printing with a vintage platen press, photography, writing, woodworking, and repairing computers. His childhood heroes were Jesus, Spiderman, Montgomery Scott and Mr. Spock.

Fedora Community

When asked about his first impression of the Fedora community, Raby said, “For the most part, they are a pretty awesome bunch.” When he has issues, he is able to find answers by searching Google. In the early days, when he needed hand-holding, he always found people to help.

Dale shares a philosophy with Doctor Leonard McCoy about change. “I know, change is good, but changes just for the sake of making changes benefits nobody.” Raby continued, “Part of this may be that I am a bearded old goat who hates changes to my well-ordered world, but why would anybody have to move my (for example) mailcap file or the way GnuPG is implemented?” His penchant for keeping things the same is not limited to Fedora or computers. Raby said, “Don’t feel bad though, I also want to get rid of the throttle-body fuel injection in my Ford Ranger and install a carburetor.”

What hardware?

Raby has a Dell Latitude E6400 with an Intel Core 2 Duo T9600 and 4GB of memory. He also has a CyberWorks desktop with an Intel Celeron 430 and an ASUSTeK P5GC-MX/1333 motherboard.

Dale Raby: How do you Fedora?

What software?

Raby uses LibreOffice, Mutt, Thunderbird, both GNOME and LXDE desktops, and GnuPG. He has tried to get LemonPOS up and running, but had little success.

Raby has two stores and must transfer inventory between them. “As there are two stores and merchandise frequently travels from one store to the other, I use Dropbox to transfer files. Guns have to be tracked carefully,” said Dale. “We always generate an electronic document that gets stored in a shared file on Dropbox as well as a hard copy that accompanies the gun when it is transferred. Even if the paperwork gets lost, the receiving clerk can look at the electronic file and check the serial number and description against the actual gun itself.”

The paperwork for transfers is generated using LibreOffice. The business also auctions guns and needs pictures of them. Raby uses gThumb to manage the images, and GIMP to edit them when necessary.

Dale Raby: How do you Fedora?

When asked about the books he writes, Raby said, “I use Focuswriter to write my books, then store them in a shared Dropbox directory where Diane, my proofreader, can access them and find all of my mistakes before I submit the manuscript to Book County for distribution. With retirement looming up in about ten years it would be nice to have some future royalty checks coming in to supplement my social security.”

New Taskotron Tasks

For a while now, we in Fedora QA have been busy building Taskotron core features and didn’t have many resources for adding to the tasks that Taskotron runs. That changed a few weeks back when we started running the task-dockerautotest, task-abicheck and task-rpmgrill tasks in our dev environment. Since we have been happy with the results, we deployed the tasks to the production instance as well last week. Please note that the results of those tasks are informative only. Let’s introduce the tasks briefly:

Task-dockerautotest

Task-dockerautotest is the first package-specific task that Taskotron runs. It is triggered on each completed docker build in Koji. Please note that we wrote only the taskotron wrapper for the dockerautotest test suite. The test suite’s git repo can be found here and the task’s git repo here.

Task-abicheck

Task-abicheck looks for incompatibilities between the ABIs of two versions of a package. Currently we run it only on a subset of critpath packages (we exclude firefox, thunderbird, kernel, kdelibs, kdepim and qt); the reason for limiting the set, for now, is the amount of memory the task needs for some packages. Each time a build of a critpath package is completed in Koji, task-abicheck runs and compares the ABI of the build against that of its previous version. Kudos to Dodji Seketeli for the work on libabigail and to Sinny Kumari for writing the task! The task’s git repo can be found on pagure.
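As a toy illustration only (not what the task actually runs): the crudest possible ABI check is a diff of exported-symbol lists between two versions of a library. task-abicheck relies on libabigail's abidiff instead, which compares types, function signatures and variable layouts, not just symbol names.

```shell
# Toy sketch: compare two made-up exported-symbol lists.
# A real check would generate these from the binaries themselves.
printf '%s\n' bar foo | sort > old.syms
printf '%s\n' baz foo | sort > new.syms
comm -3 old.syms new.syms    # symbols present in only one version
rm -f old.syms new.syms
```

Here `comm -3` reports `bar` (removed) and `baz` (added); a symbol disappearing from a library is exactly the kind of break an ABI checker must flag.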

Task-rpmgrill

Task-rpmgrill is run each time a build is completed in Koji and performs a set of analysis tests on all RPMs built from the build’s source RPM. The task expands the capabilities of task-rpmlint with build log analysis and ELF checks, to name a few. Kudos to Ralph Bean for writing the Taskotron wrapper for the task! The task’s git repo can be found here.

Looking at the task results

If you are interested in looking at the new tasks’ results, you can do so via the ResultsDB frontend.

Glorious future

Recently, Miro Hroncok sent a review request for “Python Version Check”, which we are planning to review and deploy in the not-so-distant future. We are also looking for “guinea pigs” to write package-specific tasks so we can test our upcoming feature, dist-git style tasks. If you have a task (or an idea for one), either generic or package-specific, and want to run it in Taskotron, please do reach out to us.

Receiving notifications about results

If you wish to receive notifications about the results of any of the new tasks for your packages, you can set that up in FMN. If you are not sure how to set up notifications, my previous post on that topic should help.

Found an issue?

As usual, if you hit any issues and/or have any questions or suggestions, don’t hesitate to contact us either in #fedora-qa on Freenode IRC or on the qa-devel mailing list. You can also file bugs or send RFEs in our Phabricator instance.


RISC-V on an FPGA, pt. 3

Compiling the lowRISC software (full GCC, Linux and busybox) was straightforward. You have to follow the documentation here very carefully. The only small problem I had was that their environment variable script uses $TOP, which is also used by something else (bash? not sure; anyway, you just have to modify their script so it doesn’t use that name).

The above gets you enough to boot Linux on Spike, the RISC-V functional emulator. That’s not interesting to me, let’s see it running on my FPGA instead.

LowRISC again provides detailed instructions for compiling the FPGA bitstream, which you have to follow carefully. I had to remove the -version flags in a script, but otherwise it went fine.

It wasn’t clear to me what you’re supposed to do with the final bitstream (./lowrisc-chip-imp/lowrisc-chip-imp.runs/impl_1/chip_top.new.bit) file, but in fact you use it in Vivado to program the device:

[Screenshot: programming the device in Vivado]

The simple hello world example was successful (output shown below is from /dev/ttyUSB1 connected to the dev board):

[Screenshot: hello world output on /dev/ttyUSB1]


Blivet-gui 2.0

Blivet-gui reached another milestone a few days ago -- I've released version 2.0. This happened mostly because of the new Blivet 2.0 and its new API, but there are some new features in blivet-gui as well.

FreeIPA Lightweight CA internals

In the preceding post, I explained the use cases for the FreeIPA lightweight sub-CAs feature, how to manage CAs and use them to issue certificates, and current limitations. In this post I detail some of the internals of how the feature works, including how signing keys are distributed to replicas, and how sub-CA certificate renewal works. I conclude with a brief retrospective on delivering the feature.

Full details of the design of the feature can be found on the design page. This post does not cover everything from the design page, but we will look at the aspects that are covered from the perspective of the system administrator, i.e. "what is happening on my systems?"

Dogtag lightweight CA creation

The PKI system used by FreeIPA is called Dogtag. It is a separate project with its own interfaces; most FreeIPA certificate management features are simply reflecting a subset of the corresponding Dogtag interface, often integrating some additional access controls or identity management concepts. This is certainly the case for FreeIPA sub-CAs. The Dogtag lightweight CAs feature was implemented initially to support the FreeIPA use case, yet not all aspects of the Dogtag feature are used in FreeIPA as of v4.4, and other consumers of the Dogtag feature are likely to emerge (in particular: OpenStack).

The Dogtag lightweight CAs feature has its own design page which documents the feature in detail, but it is worth mentioning some important aspects of the Dogtag feature and their impact on how FreeIPA uses the feature.

  • Dogtag lightweight CAs are managed via a REST API. The FreeIPA framework uses this API to create and manage lightweight CAs, using the privileged RA Agent certificate to authenticate. In a future release we hope to remove the RA Agent and authenticate as the FreeIPA user using GSS-API proxy credentials.
  • Each CA in a Dogtag instance, including the "main" CA, has an LDAP entry with object class authority. The schema includes fields such as subject and issuer DN, certificate serial number, and a UUID primary key, which is randomly generated for each CA. When FreeIPA creates a CA, it stores this UUID so that it can map the FreeIPA CA’s common name (CN) to the Dogtag authority ID in certificate requests or other management operations (e.g. CA deletion).
  • The "nickname" of the lightweight CA signing key and certificate in Dogtag’s NSSDB is the nickname of the "main" CA signing key, with the lightweight CA’s UUID appended. In general operation FreeIPA does not need to know this, but the ipa-certupdate program has been enhanced to set up Certmonger tracking requests for FreeIPA-managed lightweight CAs and therefore it needs to know the nicknames.
  • Dogtag lightweight CAs may be nested, but FreeIPA as of v4.4 does not make use of this capability.

So, let’s see what actually happens on a FreeIPA server when we add a lightweight CA. We will use the sc example from the previous post. The command executed to add the CA, with its output, was:

% ipa ca-add sc --subject "CN=Smart Card CA, O=IPA.LOCAL" \
    --desc "Smart Card CA"
---------------
Created CA "sc"
---------------
  Name: sc
  Description: Smart Card CA
  Authority ID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
  Subject DN: CN=Smart Card CA,O=IPA.LOCAL
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330

The LDAP entry added to the Dogtag database was:

dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authoritySerial: 63
objectClass: authority
objectClass: top
cn: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
authorityID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
authorityKeyNickname: caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d87
 4c84fd
authorityKeyHost: f24b-0.ipa.local:443
authorityEnabled: TRUE
authorityDN: CN=Smart Card CA,O=IPA.LOCAL
authorityParentDN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
authorityParentID: d3e62e89-df27-4a89-bce4-e721042be730

We see the authority UUID in the authorityID attribute as well as in cn and the DN. authorityKeyNickname records the nickname of the signing key in Dogtag’s NSSDB. authorityKeyHost records which hosts possess the signing key – currently just the host on which the CA was created. authoritySerial records the serial number of the certificate (more on that later). The meaning of the rest of the fields should be clear.
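A small aside on reading such entries: LDIF folds long attribute values (like the authorityKeyNickname above, which wraps onto a second line) onto continuation lines beginning with a space, so values must be unfolded before you can grep for them. A sketch using only sed and awk, fed the entry shown above:

```shell
# Unfold LDIF continuation lines (newline + leading space),
# then pull out a single attribute's value.
unfold_ldif() { sed -e ':a' -e 'N' -e '$!ba' -e 's/\n //g'; }

unfold_ldif <<'EOF' | awk -F': ' '$1 == "authorityKeyNickname" { print $2 }'
dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authorityKeyNickname: caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d87
 4c84fd
EOF
# prints: caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
```

Note how the unfolded nickname is exactly the main CA signing key nickname with the lightweight CA's UUID appended.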

If we have a peek into Dogtag’s NSSDB, we can see the new CA’s certificate:

# certutil -d /etc/pki/pki-tomcat/alias -L

Certificate Nickname              Trust Attributes
                                  SSL,S/MIME,JAR/XPI

caSigningCert cert-pki-ca         CTu,Cu,Cu
auditSigningCert cert-pki-ca      u,u,Pu
Server-Cert cert-pki-ca           u,u,u
caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd u,u,u
ocspSigningCert cert-pki-ca       u,u,u
subsystemCert cert-pki-ca         u,u,u

There it is, alongside the main CA signing certificate and other certificates used by Dogtag. The trust flags u,u,u indicate that the private key is also present in the NSSDB. If we pretty print the certificate we will see a few interesting things:

# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd'
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 63 (0x3f)
        Signature Algorithm: PKCS #1 SHA-256 With RSA Encryption
        Issuer: "CN=Certificate Authority,O=IPA.LOCAL 201606201330"
        Validity:
            Not Before: Fri Jul 15 05:46:00 2016
            Not After : Tue Jul 15 05:46:00 2036
        Subject: "CN=Smart Card CA,O=IPA.LOCAL"
        ...
        Signed Extensions:
            ...
            Name: Certificate Basic Constraints
            Critical: True
            Data: Is a CA with no maximum path length.
            ...

Observe that:

  • The certificate is indeed a CA.
  • The serial number (63) agrees with the CA’s LDAP entry.
  • The validity period is 20 years, the default for CAs in Dogtag. This cannot be overridden on a per-CA basis right now, but addressing this is a priority.

Finally, let’s look at the raw entry for the CA in the FreeIPA database:

dn: cn=sc,cn=cas,cn=ca,dc=ipa,dc=local
cn: sc
ipaCaIssuerDN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
objectClass: ipaca
objectClass: top
ipaCaSubjectDN: CN=Smart Card CA,O=IPA.LOCAL
ipaCaId: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
description: Smart Card CA

We can see that this entry also contains the subject and issuer DNs, and the ipaCaId attribute holds the Dogtag authority ID, which allows the FreeIPA framework to dereference the local ID (sc) to the Dogtag ID as needed. We also see that the description attribute is local to FreeIPA; Dogtag also has a description attribute for lightweight CAs but FreeIPA uses its own.

Lightweight CA replication

FreeIPA servers replicate objects in the FreeIPA directory among themselves, as do Dogtag replicas (note: in Dogtag, the term clone is often used). All Dogtag instances in a replicated environment need to observe changes to lightweight CAs (creation, modification, deletion) that were performed on another replica and update their own view so that they can respond to requests consistently. This is accomplished via an LDAP persistent search which is run in a monitor thread. Care was needed to avoid race conditions. Fortunately, the solution for LDAP-based profile storage provided a fine starting point for the authority monitor; although lightweight CAs are more complex, many of the same race conditions can occur and these were already addressed in the LDAP profile monitor implementation.

But unlike LDAP-based profiles, a lightweight CA consists of more than just an LDAP object; there is also the signing key. The signing key lives in Dogtag’s NSSDB and for security reasons cannot be transported through LDAP. This means that when a Dogtag clone observes the addition of a lightweight CA, an out-of-band mechanism to transport the signing key must also be triggered.

This mechanism is covered in the design pages but the summarised process is:

  1. A Dogtag clone observes the creation of a CA on another server and starts a KeyRetriever thread. The KeyRetriever is implemented as part of Dogtag, but it is configured to run the /usr/libexec/ipa/ipa-pki-retrieve-key program, which is part of FreeIPA. The program is invoked with arguments of the server to request the key from (this was stored in the authorityKeyHost attribute mentioned earlier), and the nickname of the key to request.
  2. ipa-pki-retrieve-key requests the key from the Custodia daemon on the source server. It authenticates as the dogtag/<requestor-hostname>@REALM service principal. If authenticated and authorised, the Custodia daemon exports the signing key from Dogtag’s NSSDB wrapped by the main CA’s private key, and delivers it to the requesting server. ipa-pki-retrieve-key outputs the wrapped key then exits.
  3. The KeyRetriever reads the wrapped key and imports (unwraps) it into the Dogtag clone’s NSSDB. It then initialises the Dogtag CA’s Signing Unit allowing the CA to service signing requests on that clone, and adds its own hostname to the CA’s authorityKeyHost attribute.

Some excerpts of the CA debug log on the clone (not the server on which the sub-CA was first created) show this process in action. The CA debug log is found at /var/log/pki/pki-tomcat/ca/debug. Some irrelevant messages have been omitted.

[25/Jul/2016:15:45:56][authorityMonitor]: authorityMonitor: Processed change controls.
[25/Jul/2016:15:45:56][authorityMonitor]: authorityMonitor: ADD
[25/Jul/2016:15:45:56][authorityMonitor]: readAuthority: new entryUSN = 109
[25/Jul/2016:15:45:56][authorityMonitor]: CertificateAuthority init 
[25/Jul/2016:15:45:56][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[25/Jul/2016:15:45:56][authorityMonitor]: SigningUnit init: debug Certificate object not found
[25/Jul/2016:15:45:56][authorityMonitor]: CA signing key and cert not (yet) present in NSSDB
[25/Jul/2016:15:45:56][authorityMonitor]: Starting KeyRetrieverRunner thread

Above we see the authorityMonitor thread observe the addition of a CA. It adds the CA to its internal map and attempts to initialise it, which fails because the key and certificate are not available, so it starts a KeyRetrieverRunner in a new thread.

[25/Jul/2016:15:45:56][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Running ExternalProcessKeyRetriever
[25/Jul/2016:15:45:56][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: About to execute command: [/usr/libexec/ipa/ipa-pki-retrieve-key, caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd, f24b-0.ipa.local]

The KeyRetrieverRunner thread invokes ipa-pki-retrieve-key with the nickname of the key it wants, and a host from which it can retrieve it. If a CA has multiple sources, the KeyRetrieverRunner will try these in order with multiple invocations of the helper, until one succeeds. If none succeed, the thread goes to sleep and retries when it wakes up, initially after 10 seconds, backing off exponentially on subsequent failures.
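The retry-with-backoff behaviour described above can be sketched as a plain shell loop. This is only an illustration of the pattern, not Dogtag's actual code; the real initial delay is 10 seconds, scaled down here, and the echo stands in for the actual key-retrieval attempt:

```shell
# Sketch of retry with exponential backoff (illustrative, not Dogtag code).
delay=1
for attempt in 1 2 3; do
  echo "attempt $attempt failed, retrying in ${delay}s"
  delay=$((delay * 2))   # double the delay after each failure
done
```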

[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Importing key and cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Reinitialising SigningUnit
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got token Internal Key Storage Token by name
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 63
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got private key from cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Got public key from cert
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL

The key retriever successfully returned the key data and import succeeded. The signing unit then gets initialised.

[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: Adding self to authorityKeyHosts attribute
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: In LdapBoundConnFactory::getConn()
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: postCommit: new entryUSN = 361
[25/Jul/2016:15:47:13][KeyRetrieverRunner-660ad30b-7be4-4909-aa2c-2c7d874c84fd]: postCommit: nsUniqueId = 4dd42782-4a4f11e6-b003b01c-c8916432
[25/Jul/2016:15:47:14][authorityMonitor]: authorityMonitor: Processed change controls.
[25/Jul/2016:15:47:14][authorityMonitor]: authorityMonitor: MODIFY
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: new entryUSN = 361
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: known entryUSN = 361
[25/Jul/2016:15:47:14][authorityMonitor]: readAuthority: data is current

Finally, the Dogtag clone adds itself to the CA’s authorityKeyHosts attribute. The authorityMonitor observes this change but ignores it because its view is current.

Certificate renewal

CA signing certificates will eventually expire, and therefore require renewal. Because the FreeIPA framework operates with low privileges, it cannot add a Certmonger tracking request for sub-CAs when it creates them. Furthermore, although the renewal (i.e. the actual signing of a new certificate for the CA) should only happen on one server, the certificate must be updated in the NSSDB of all Dogtag clones.

As mentioned earlier, the ipa-certupdate command has been enhanced to add Certmonger tracking requests for FreeIPA-managed lightweight CAs. The actual renewal will only be performed on whichever server is the renewal master when Certmonger decides it is time to renew the certificate (assuming that the tracking request has been added on that server).

Let’s run ipa-certupdate on the renewal master to add the tracking request for the new CA. First observe that the tracking request does not exist yet:

# getcert list -d /etc/pki/pki-tomcat/alias |grep subject
        subject: CN=CA Audit,O=IPA.LOCAL 201606201330
        subject: CN=OCSP Subsystem,O=IPA.LOCAL 201606201330
        subject: CN=CA Subsystem,O=IPA.LOCAL 201606201330
        subject: CN=Certificate Authority,O=IPA.LOCAL 201606201330
        subject: CN=f24b-0.ipa.local,O=IPA.LOCAL 201606201330

As expected, we do not see our sub-CA certificate above. After running ipa-certupdate the following tracking request appears:

Request ID '20160725222909':
        status: MONITORING
        stuck: no
        key pair storage: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd',token='NSS Certificate DB',pin set
        certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd',token='NSS Certificate DB'
        CA: dogtag-ipa-ca-renew-agent
        issuer: CN=Certificate Authority,O=IPA.LOCAL 201606201330
        subject: CN=Smart Card CA,O=IPA.LOCAL
        expires: 2036-07-15 05:46:00 UTC
        key usage: digitalSignature,nonRepudiation,keyCertSign,cRLSign
        pre-save command: /usr/libexec/ipa/certmonger/stop_pkicad
        post-save command: /usr/libexec/ipa/certmonger/renew_ca_cert "caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd"
        track: yes
        auto-renew: yes

As for updating the certificate in each clone’s NSSDB, Dogtag itself takes care of that. All that is required is for the renewal master to update the CA’s authoritySerial attribute in the Dogtag database. The renew_ca_cert Certmonger post-renewal hook script performs this step. Each Dogtag clone observes the update (in the monitor thread), looks up the certificate with the indicated serial number in its certificate repository (a new entry that will also have been recently replicated to the clone), and adds that certificate to its NSSDB. Again, let’s observe this process by forcing a certificate renewal:

# getcert resubmit -i 20160725222909
Resubmitting "20160725222909" to "dogtag-ipa-ca-renew-agent".

After about 30 seconds the renewal process is complete. When we examine the certificate in the NSSDB we see, as expected, a new serial number:

# certutil -d /etc/pki/pki-tomcat/alias -L \
    -n "caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd" \
    | grep -i serial
        Serial Number: 74 (0x4a)

We also see that the renew_ca_cert script has updated the serial in Dogtag’s database:

# ldapsearch -D cn="Directory Manager" -w4me2Test -b o=ipaca \
    '(cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd)' authoritySerial
dn: cn=660ad30b-7be4-4909-aa2c-2c7d874c84fd,ou=authorities,ou=ca,o=ipaca
authoritySerial: 74

Finally, if we look at the CA debug log on the clone, we’ll see that the authority monitor observes the serial number change and updates the certificate in its own NSSDB (again, some irrelevant or low-information messages have been omitted):

[26/Jul/2016:10:43:28][authorityMonitor]: authorityMonitor: Processed change controls.
[26/Jul/2016:10:43:28][authorityMonitor]: authorityMonitor: MODIFY
[26/Jul/2016:10:43:28][authorityMonitor]: readAuthority: new entryUSN = 1832
[26/Jul/2016:10:43:28][authorityMonitor]: readAuthority: known entryUSN = 361
[26/Jul/2016:10:43:28][authorityMonitor]: CertificateAuthority init 
[26/Jul/2016:10:43:28][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[26/Jul/2016:10:43:28][authorityMonitor]: Got token Internal Key Storage Token by name
[26/Jul/2016:10:43:28][authorityMonitor]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 63
[26/Jul/2016:10:43:28][authorityMonitor]: Got private key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: Got public key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: CA signing unit inited
[26/Jul/2016:10:43:28][authorityMonitor]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL
[26/Jul/2016:10:43:28][authorityMonitor]: Updating certificate in NSSDB; new serial number: 74

When the authority monitor processes the change, it reinitialises the CA including its signing unit. Then it observes that the serial number of the certificate in its NSSDB differs from the serial number from LDAP. It pulls the certificate with the new serial number from its certificate repository, imports it into NSSDB, then reinitialises the signing unit once more and sees the correct serial number:

[26/Jul/2016:10:43:28][authorityMonitor]: ca.signing Signing Unit nickname caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd
[26/Jul/2016:10:43:28][authorityMonitor]: Got token Internal Key Storage Token by name
[26/Jul/2016:10:43:28][authorityMonitor]: Found cert by nickname: 'caSigningCert cert-pki-ca 660ad30b-7be4-4909-aa2c-2c7d874c84fd' with serial number: 74
[26/Jul/2016:10:43:28][authorityMonitor]: Got private key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: Got public key from cert
[26/Jul/2016:10:43:28][authorityMonitor]: CA signing unit inited
[26/Jul/2016:10:43:28][authorityMonitor]: in init - got CA name CN=Smart Card CA,O=IPA.LOCAL

Currently this update mechanism is only used for lightweight CAs, but it would work just as well for the main CA too, and we plan to switch at some stage so that the process is consistent for all CAs.

Wrapping up

I hope you have enjoyed this tour of some of the lightweight CA internals, and in particular seeing how the design actually plays out on your systems in the real world.

The FreeIPA lightweight CAs feature has been the most complex and challenging project I have ever undertaken. It took the best part of a year from early design and proof of concept, through implementing the Dogtag lightweight CAs feature, then FreeIPA integration, and numerous bug fixes, refinements or outright redesigns along the way. Although there are still some rough edges, some important missing features and, I expect, many an RFE to come, I am pleased with what has been delivered and the overall design.

Thanks are due to all of my colleagues who contributed to the design and review of the feature; each bit of input from all of you has been valuable. I especially thank Ade Lee and Endi Dewata from the Dogtag team for their help with API design and many code reviews over a long period of time, and from the FreeIPA team Jan Cholasta and Martin Babinsky for their invaluable input into the design, and much code review and testing. I could not have delivered this feature without your help; thank you for your collaboration!

All systems go
New status good: Everything seems to be working, for services: The Koji Buildsystem, Package maintainers git repositories, Package Updates Manager

July 25, 2016

There are scheduled downtimes in progress
New status scheduled: Scheduled outages in progress, for services: The Koji Buildsystem, Package maintainers git repositories, Package Updates Manager
[Howto] Rebase feature branches in Git/Github

Updating a feature branch to the actual state of the upstream main branch can be troublesome. Here is a workflow that works – at least for me.

Developing with Git is amazing, thanks to the possibilities of working with feature branches, remote repositories and so on. However, at some point, after some hours of development, the base of a feature branch will be outdated and it makes sense to update it before a pull request is sent upstream. This is best done via rebasing. Here is a short workflow for a typical feature branch rebase I often need when developing, for example, Ansible modules.

  1. First, checkout the main branch, here devel.
  2. Update the main branch from the upstream repository.
  3. Rebase the local copy of the main branch.
  4. Push it to the remote origin, most likely your personal fork of the Git repo.
  5. Check out the feature branch.
  6. Rebase the feature branch to the main branch.
  7. Force push the new history to the remote feature branch, most likely again your personal fork of the Git repo.

In terms of code this means:

$ git checkout devel          # 1. check out the main branch
$ git fetch upstream devel    # 2. fetch its latest state from upstream
$ git rebase upstream/devel   # 3. rebase the local copy onto it
$ git push                    # 4. push to origin (your fork)
$ git checkout feature_branch # 5. check out the feature branch
$ git rebase origin/devel     # 6. rebase it onto the updated main branch
$ git push -f                 # 7. force push the rewritten history

This looks rather clean and easy – but I have to admit it took me quite a few errors and some Git cherry-picking before I figured out what is needed and what actually works.
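To see what the rebase step actually does before touching a real project, the flow can be reproduced in a throwaway repository. Everything below is a self-contained demo with made-up branch and file names; it simulates the feature commit being replayed on top of new commits on the base branch:

```shell
# Self-contained demo: a feature commit gets replayed on top of new base commits.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q repo && cd repo
git config user.email demo@example.com && git config user.name demo
base=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on Git version
echo base > file.txt && git add file.txt && git commit -qm "base"
git checkout -qb feature_branch
echo feature > feature.txt && git add feature.txt && git commit -qm "feature work"
git checkout -q "$base"
echo progress > other.txt && git add other.txt && git commit -qm "upstream progress"
git checkout -q feature_branch
git rebase -q "$base"                   # replay "feature work" on top of the new base
git log --format=%s                     # newest first
```

After the rebase, the log shows "feature work" sitting on top of "upstream progress", which is exactly what the workflow above achieves against a real upstream.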


Filed under: Ansible, Debian & Ubuntu, Fedora & RHEL, Linux, Shell, Short Tip, Technology
RISC-V on an FPGA, pt. 2

The first step is to install the enormous, proprietary Xilinx Vivado software. (Yes, all FPGA stuff is proprietary and strange.) You can follow the general instructions here. The install consumed a total of 41GB of disk space (no, that is not a mistake) and took a few hours, but is otherwise straightforward.

The difficult bit was getting the Vivado software to actually see the hardware. It turns out that Xilinx use a proprietary USB driver, and, well, long story short you have to run the install_drivers script buried deep in the Vivado directory tree. All it does is put a couple of files under /etc/udev/rules.d, but it didn’t do udevadm control --reload-rules so you have to do that as well, replug the cable, and it should be detectable in Vivado:

[Screenshot: the board detected in Vivado]


Looking forward to flock 2016

Just over one week until flock ( https://flocktofedora.org ), Fedora’s main yearly conference. This time it’s in Kraków, Poland. This of course means a long time traveling for myself and other North American Fedorans, but it’s always well worth it.

In addition to seeing old and new friends, I’m looking forward to quite a lot of the talks (see https://flock2016.sched.org/ for the full schedule) along with workshops and hallway discussions.

I’m going to be giving a talk on using Fedora Rawhide as your daily driver OS. Hopefully lots of tips and workflows that people will find useful. Also, I am co-presenting with Pierre-Yves Chibon (pingou) on the current state of Fedora Infrastructure along with lots of upcoming plans.

If you aren’t able to make it to Kraków in person, do follow along on #fedora-flock ( on freenode ) and in blogs and other project communications.

Getting help in Linux #LPIC
When we use Linux, a question always comes up sooner or later: how to run a certain thing, how the system itself works, which parameters will make our script more effective, and so on.

Consequently, Linux is one of the best-documented operating systems, backed by a large community of users and companies that keeps growing day after day. There are plenty of books devoted to specific topics, such as tuning the different filesystems (EXT4, XFS, ReiserFS, BTRFS...) or optimising the TCP/IP stack.

There are also support platforms ranging from a simple blog to huge forums with incredibly high traffic, as well as sites dedicated solely to bug reports, such as the Bugzillas of Red Hat, Gentoo or Arch Linux, or to security, such as CVEdetails.

If you want help with a particular command or program, you have the following alternatives:

  • Using the --help, -h or --h parameters/options that the program in question supports
  • Using the system's man and info commands
  • Using the README files that are usually shipped in a directory on your system
  • Visiting the official pages of the project (or independent ones) for the command in question, etc.

First of all, commands usually include a small, summarised help text that explains what each parameter does. It contains no examples of how the command works, but it does clear up quick doubts about the program or application.

Shortened output of the tar command:
netsys@keys0 ~ $ tar
tar: You must specify one of the '-Acdtrux', '--delete' or '--test-label' options
Try 'tar --help' or 'tar --usage' for more information.
Here it tells us to use the --help option for more information.
netsys@keys0 ~ $ tar --help
Usage: tar [OPTION...] [FILE]...
GNU 'tar' saves many files together into a single tape or disk archive, and can
restore individual files from the archive.
Examples:
  tar -cf archive.tar foo bar  # Create archive.tar from files foo and bar.
  tar -tvf archive.tar         # List all files in archive.tar verbosely.
  tar -xf archive.tar          # Extract all files from archive.tar.
 Main operation mode:
  -A, --catenate, --concatenate   append tar files to an archive
  -c, --create               create a new archive
  -d, --diff, --compare      find differences between archive and file system
      --delete               delete from the archive (not on mag tapes!)
  -r, --append               append files to the end of an archive
[...] 
As we can see, this time we get a much more extensive output (I had to trim it) that specifies what each parameter is for, and they have even been generous enough to give us some of the most common usage examples.
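The examples from the help text can be tried directly; here is a self-contained run in a temporary directory (the file names are invented for the demo):

```shell
# Create, list and extract an archive, mirroring the examples from tar --help.
cd "$(mktemp -d)"
echo hello > foo && echo world > bar
tar -cf archive.tar foo bar         # create archive.tar from files foo and bar
tar -tf archive.tar                 # list the archive's contents: foo, bar
mkdir extracted && tar -xf archive.tar -C extracted
cat extracted/foo                   # the extracted copy matches the original
```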

If we want more extensive and much more explanatory help, we should use the man command.

The man command is used to browse the manuals of any program, application or script whose documentation is stored on the system. If you happen to be on another computer, an online version of man is available so you can access it from anywhere.

Since different commands and applications can share the same name, man can access the manuals by section, which avoids opening the wrong page for a command or program, and also serves as an additional classification criterion.

  1. Executable programs or shell commands
  2. System calls (functions provided by the kernel)
  3. Library calls (functions within program libraries)
  4. Special files (usually found in /dev)
  5. File formats and conventions, for example /etc/passwd
  6. Games
  7. Miscellaneous (including macro packages and conventions), examples: man(7), groff(7)
  8. System administration commands (usually only for the superuser)
  9. Kernel routines [not yet standardised]

Its usage is very simple. Just type:
netsys@keys0 ~ $ man 1 sh
to get the following output:
SH(1P)     POSIX Programmer's Manual          SH(1P)

PROLOG
       This  manual  page is part of the POSIX Programmer's Manual.  The Linux
       implementation of this interface may differ (consult the  corresponding
       Linux  manual page for details of Linux behavior), or the interface may
       not be implemented on Linux.

NAME
       sh — shell, the standard command language interpreter

SYNOPSIS
       sh [−abCefhimnuvx] [−o option]... [+abCefhimnuvx] [+o option]...
           [command_file [argument...]]

       sh −c [−abCefhimnuvx] [−o option]... [+abCefhimnuvx] [+o option]...
           command_string [command_name [argument...]]

       sh −s [−abCefhimnuvx] [−o option]... [+abCefhimnuvx] [+o option]...
           [argument...]

We had to shorten it, as in the previous case, because the output is very long.

There is also the info command, which is similar to man but with a different style of navigation.

Example output from info:
netsys@keys0 ~ $ info bash
Next: Introduction,  Prev: (dir),  Up: (dir)
Bash Features
*************
This text is a brief description of the features that are present in
the Bash shell (version 4.3, 2 February 2014).  The Bash home page is
`http://www.gnu.org/software/bash/'.
   This is Edition 4.3, last updated 2 February 2014, of `The GNU Bash
Reference Manual', for `Bash', Version 4.3.

   Bash contains features that appear in other popular shells, and some
features that only appear in Bash.  Some of the shells that Bash has
borrowed concepts from are the Bourne Shell (`sh'), the Korn Shell
(`ksh'), and the C-shell (`csh' and its successor, `tcsh').  The
following menu breaks the features up into categories, noting which
features were inspired by other shells and which are specific to Bash.

   This manual is meant as a brief introduction to features found in
Bash.  The Bash manual page should be used as the definitive reference
on shell behavior.
[...]
To learn more about their features and navigation controls, and how to get the most out of them in your scripts, you can consult their documentation from man itself: $ man man or $ man info; $ info man; $ info info

Then there are the famous README files, which usually accompany the programs we download, whether as source code (tarball), binaries or even packages.
If you run a search with locate README (a command that finds files using its database), you can discover READMEs you never imagined were there, often containing relevant information; it is worth at least skimming through them.

And finally, there are the projects' official web pages, which usually host an additional wiki (or have one as their main page) containing tutorials, workarounds and many other interesting things, for example the Gentoo or ALSA wikis. We can also take advantage of forums and communities outside the projects.

References

  • Gentoo wiki
  • Arch Linux wiki
  • Eni ediciones ~ Preparación para la certificación LPIC-1
  • Man man
  • Info man
Privacy – why is it mostly a buzzword?
Nowadays even the mainstream media are full of privacy-related concerns. Ever since the Snowden leaks, people have been aware that they are watched by different governmental agencies in the US and around the world. We give up our privacy for cheap or free online services like Gmail. Facebook knows more about us than our own parents. […]
AArch64 desktop hardware?

Soon it will be four years since I started working on the AArch64 architecture. A lot changed in software during that time. A lot in hardware too. But machine availability still sucks badly.

In 2012 all we had was a software model. It was slow, terribly slow. A common joke was AArch64 developers standing in a queue for 10GHz x86-64 CPUs. So I was generating working binaries by cross compilation. But many distributions only do native builds. In models. Imagine Qt4 building for 3-4 days…

In 2013 I got access to the first server hardware, with the first silicon version of the CPU. It was highly unstable, we could use just one core, etc. GCC was crashing like hell but we managed to get stable build results from it. Qt4 was building in a few hours now.

Then the amount of hardware at Red Hat grew and grew. Farms of APM Mustangs, AMD Seattle machines and several other servers appeared, got racked and became available to use. In 2014 one Mustang even landed on my desk (as the first such machine in Poland).

But this was server land. Each of those machines cost about 1000 USD (if not more). And availability was hard too.

Linaro tried to do something about it and created the 96boards project.

First came the ‘Consumer Edition’ range: yet more small form factor boards with functionality stripped as much as possible. No Ethernet, no storage other than eMMC/USB, low amounts of memory, chips taken from mobile phones, etc. But it was selling! Only because people were hungry to get ANYTHING with AArch64 cores. First HiKey was released, then DragonBoard 410, then a few other boards. All with the same set of issues: non-mainline kernel, weird bootloaders, binary blobs for this or that…

Then the so-called ‘Enterprise Edition’ got announced, with another ridiculous form factor (and microATX as an option). And that was it. A leak of the Husky board showed how fucked up the design was: ports all around the edges, memory above and under the board, and of course incompatible with any industrial form factor. I would like to know what they were smoking…

Time passed by. Husky got forgotten for another year. Then Cello was announced as a “new EE 96boards board”, while it looked like a redesigned Husky with two fewer SATA ports (because who needs more than two SATA ports, right?). The last time I heard about Cello it was still ‘maybe soon, maybe another two weeks’. Prototypes looked hand-soldered, with the USB controller mounted rotated, dead on-board Ethernet, etc.

In the meantime we got a few devices from other companies. Pine64 had a big campaign on Kickstarter and shipped to developers, Hardkernel started selling the ODROID-C2, Geekbox released their TV box, and probably something else got released as well. But all those boards were limited to 1-2GB of memory, often lacked SATA, and used mobile processors with their own sets of bootloaders etc., causing extra work for distributions.

The Overdrive 1000 was announced. Without any options for expansion, it looked like SoftIron wanted customers to buy the Overdrive 3000 if they wanted to use a PCI Express card.

So we have 2016 now. Four years of my work on AArch64 have passed. Most distributions support this architecture by building on proper servers, but most of this effort goes unused because developers do not have sane hardware to play with (sane meaning expandable, supported by distributions, capable).

There are no standard form factor mainboards (mini-ITX, microATX, ATX) available on the mass market. 96boards failed here, server vendors are not interested, and small Chinese companies prefer to release yet another fruit/Pi with a mobile processor. Nothing, null, nada, nic.

Developers know where to buy normal computer cases, storage, memory, graphics cards, USB controllers, SATA controllers and peripherals, so vendors do not have to deal with that part. But there is still nothing to put those cards into. No mainboards which can be mounted into a normal PC case, have some graphics plugged in, a few SSDs/HDDs connected, mouse/keyboard, monitors, and just be used.

Sometimes it is really hard to convince software developers to make changes for a platform they are unable to test on, and the current hardware situation does not help. All those projects making hardware available “in a cloud” help only a subset of projects. Ever tried to run a GNOME/KDE session over the network? With OpenGL acceleration, etc.?

So where is my AArch64 workstation? In desktop or laptop form.

Post written after my Google+ post where a similar discussion happened in the comments.

RISC-V on an FPGA, pt. 1

Last year I had the open source RISC-V instruction set running Linux, emulated in qemu. However, to really get into the architecture, and restore my very rusty FPGA skills, wouldn't it be fun to have RISC-V working in real hardware?

The world of RISC-V is pretty confusing for outsiders. There are a bunch of affiliated companies, researchers who are producing actual silicon (nothing you can buy of course), and the affiliated(?) lowRISC project which is trying to produce a fully open source chip. I’m starting with lowRISC since they have three iterations of a design that you can install on reasonably cheap FPGA development boards like the one above. (I’m going to try to install “Untether 0.2” which is the second iteration of their FPGA design.)

There are two FPGA development kits supported by lowRISC. The first is the Xilinx Artix-7-based Nexys 4 DDR, pictured above, which I bought from Digi-Key for £261.54 (that price included tax and next-day delivery from the US).

There is also the KC705, but that board is over £1,300.

The main differences are speed and available RAM. The Nexys has only 128MB of RAM, which is pretty tight for running Linux. The KC705 has 1GB of RAM.

I’m also going to look at the dev kits recommended by SiFive, which start at US$150 (also based on the Xilinx Artix-7).


FISL17

A few days ago Brazil hosted the 17th edition of FISL (International Free Software Forum). It had a smaller audience compared to previous years, but it was quite engaged, especially in the area of communities, and we as #Fedora-LatAm ambassadors were there.


Ana Mativi, Rino (@Villadalmine), Itamar Jp, Ezequiel (QliXed) Brizuela, Bruno R. Zanuzzo, Eduardo Echeverria, Junior Wolnei and Daniel Lara. I personally knew only two of those people, so it was nice to see new faces behind the nicknames.

The week at FISL was intense, with a lot of people to talk to (especially students, from both colleges and schools). We helped some of them install Fedora 24, and even guided a distinct group of people interested in contributing to the project in some way.


We certainly had fun doing the quiz (offering stickers); it was an easy way to keep people's attention and to talk about the Fedora ecosystem, our four foundations (Freedom, Friends, Features, First) and the effort of so many people around the world helping to build this project so it finally reaches users. How, why and what to contribute were also major topics in our talk, presented by the whole team.


The event was also cool because we had experienced packagers (@echevemaster, @itamarjp, @pcpa) with us, who helped me, Ana and Daniel take the first steps with packaging.


And by the way, have you ever attended an IRC meeting in a restaurant? Well, we did, and we just did not have a stable connection. It happens.

Have a nice day.


Introduction to Modularity

What is Modularity?

Modularity is an exciting new initiative aimed at resolving the issue of diverging (and occasionally conflicting) lifecycles of different “components” within Fedora. A great example of a diverging and conflicting lifecycle is the Ruby on Rails (RoR) lifecycle: Fedora stipulates that it can ship only one version of RoR at any point in time, but that version may conflict with the version of RoR another application requires. Therefore, we want to avoid having components, like RoR, conflict with other existing components within Fedora.

Although RoR can be thought of as a component, the definition of “component” is actually a work-in-progress. In other words, another example of a component might be a “LAMP module”, where module is defined as a well-integrated and well-tested set of smaller components that provide functionality. The LAMP module would contain the necessary smaller components required to build and deploy a dynamic, high-performance Apache web server that utilizes MariaDB and PHP. Such a module would be completely independent of all other modules.

When we have module “independence”, we avoid the issue of conflicting lifecycles, such as the RoR lifecycle issue discussed earlier. For example, if we chose to update the LAMP module, updating it would not interfere (or conflict) with the functionality of other components within Fedora. This independence is the prime benefit of Modularity.

Staying up to date with Modularity


Currently, there are three fantastic ways to keep up to date with the Modularity effort:

The first way is, of course, to read our blog posts! We will provide you with updates whenever we write new documentation, create videos, or have more exciting information to share with you.

The second way is to watch videos on our YouTube channel, which provides numerous short demos of the ongoing work in Modularity. Since our YouTube channel is updated on a biweekly basis, you can be sure to find regular, detailed updates in all of our videos. Just watch our updates by playlist.

The third and final way is to read our wiki pages on Fedoraproject.org, which host documentation for all of our work. Reading these wiki pages is a great way to view the progress of the Modularity effort, since new pages are added as new functionality/infrastructure is added.

And don’t forget – if you are interested in playing around with any of the Fedora Pagure repos discussed in our YouTube videos, blog posts, or wiki pages, we highly recommend you check them out here. Any and all of our work will be hosted on Pagure for your viewing and coding pleasure.

How can I get involved?


Are you interested in the Modularity effort and would like to contribute or learn more? If so, you can always find us on #fedora-modularity on freenode (to join freenode and chat with us, see freenode.net).

The post Introduction to Modularity appeared first on Fedora Community Blog.

PreLinuxDay: Talk about Fedora QA and L10N
Linux Day is a global celebration of Linux. According to the event site, there are currently 9 teams in 5 countries. One of these teams is from my country, Panama. The person responsible for organizing it is Jose Reyes, our newest Panamanian Fedora Ambassador.


As part of the promotion of the event, there were several workshops related to Linux. Most of them were specific to Fedora. I had the opportunity to share about Fedora Translation and Fedora Quality Assurance.


The talk was held at Universidad Interamericana de Panama. It had good attendance: many interested people, most of them students. I showed them how to get involved with these two important teams.


At the end of the talk, a representative of the university gave me a certificate recognizing my effort. It was nice.

 
Thanks to Jose Reyes, Floss-Pa and Universidad Interamericana de Panama for the invitation. I'm looking forward to representing Fedora at the main event.

See you there!
Lightweight Sub-CAs in FreeIPA 4.4

Last year FreeIPA 4.2 brought us some great new certificate management features, including custom certificate profiles and user certificates. The upcoming FreeIPA 4.4 release builds upon this groundwork and introduces lightweight sub-CAs, a feature that lets admins mint new CAs under the main FreeIPA CA and allows certificates for different purposes to be issued in different certificate domains. In this post I will review the use cases and demonstrate the process of creating, managing and issuing certificates from sub-CAs. (A follow-up post will detail some of the mechanisms that operate behind the scenes to make the feature work.)

Use cases

Currently, all certificates issued by FreeIPA are issued by a single CA. Say you want to issue certificates for various purposes: regular server certificates, user certificates for VPN authentication, and user certificates for authentication to a particular web service. Currently, assuming the certificates bear the appropriate Key Usage and Extended Key Usage extensions (with the default profile, they do), a certificate issued for one of these purposes could be used for all of the others.

Issuing certificates for particular purposes (especially client authentication scenarios) from a sub-CA allows an administrator to configure the endpoint authenticating the clients to use the immediate issuer certificate for validating client certificates. Therefore, if you had one sub-CA for issuing VPN authentication certificates and a different sub-CA for issuing certificates for authenticating to the web service, each service could be configured to accept certificates issued by the relevant CA only. Thus, where previously the scope of usability may have been unacceptably broad, administrators now have more fine-grained control over how certificates can be used.

Finally, another important consideration is that while revoking the main IPA CA is usually out of the question, it is now possible to revoke an intermediate CA certificate. If you create a CA for a particular organisational unit (e.g. some department or working group) or service, then if or when that unit or service ceases to operate or exist, the related CA certificate can be revoked, rendering certificates issued by that CA useless, as long as relying endpoints perform CRL or OCSP checks.

Creating and managing sub-CAs

In this scenario, we will add a sub-CA that will be used to issue certificates for users’ smart cards. We assume that a profile for this purpose already exists, called userSmartCard.

To begin with, we are authenticated as admin or another user that has CA management privileges. Let’s see what CAs FreeIPA already knows about:

% ipa ca-find
------------
1 CA matched
------------
  Name: ipa
  Description: IPA CA
  Authority ID: d3e62e89-df27-4a89-bce4-e721042be730
  Subject DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330
----------------------------
Number of entries returned 1
----------------------------

We can see that FreeIPA knows about the ipa CA. This is the "main" CA in the FreeIPA infrastructure. Depending on how FreeIPA was installed, it could be a root CA or it could be chained to an external CA. The ipa CA entry is added automatically when installing or upgrading to FreeIPA 4.4.

Now, let’s add a new sub-CA called sc:

% ipa ca-add sc --subject "CN=Smart Card CA, O=IPA.LOCAL" \
    --desc "Smart Card CA"
---------------
Created CA "sc"
---------------
  Name: sc
  Description: Smart Card CA
  Authority ID: 660ad30b-7be4-4909-aa2c-2c7d874c84fd
  Subject DN: CN=Smart Card CA,O=IPA.LOCAL
  Issuer DN: CN=Certificate Authority,O=IPA.LOCAL 201606201330

The --subject option gives the full Subject Distinguished Name for the new CA; it is mandatory, and must be unique among CAs managed by FreeIPA. An optional description can be given with --desc. In the output we see that the Issuer DN is that of the IPA CA.

Having created the new CA, we must add it to one or more CA ACLs to allow it to be used. CA ACLs were added in FreeIPA 4.2 for defining policies about which profiles could be used for issuing certificates to which subject principals (note: the subject principal is not necessarily the principal performing the certificate request). In FreeIPA 4.4 the CA ACL concept has been extended to also include which CA is being asked to issue the certificate.

We will add a CA ACL called user-sc-userSmartCard and associate it with all users, with the userSmartCard profile, and with the sc CA:

% ipa caacl-add user-sc-userSmartCard --usercat=all
------------------------------------
Added CA ACL "user-sc-userSmartCard"
------------------------------------
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all

% ipa caacl-add-profile user-sc-userSmartCard --certprofile userSmartCard
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all
  CAs: sc
  Profiles: userSmartCard
-------------------------
Number of members added 1
-------------------------

% ipa caacl-add-ca user-sc-userSmartCard --ca sc
  ACL name: user-sc-userSmartCard
  Enabled: TRUE
  User category: all
  CAs: sc
-------------------------
Number of members added 1
-------------------------

A CA ACL can reference multiple CAs individually, or, like we saw with users above, we can associate a CA ACL with all CAs by setting --cacat=all when we create the CA ACL, or later via the ipa caacl-mod command.
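For example, the "all CAs" category can be set either at creation time or on an existing ACL; a sketch (the ACL name user-any-ca is a hypothetical example):

```shell
# Hypothetical ACL covering all users and all CAs in one step
ipa caacl-add user-any-ca --usercat=all --cacat=all

# Or widen an existing ACL to cover all CAs
ipa caacl-mod user-sc-userSmartCard --cacat=all
```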

A special behaviour of CA ACLs with respect to CAs must be mentioned: if a CA ACL is associated with no CAs (either individually or by category), then it allows access to the ipa CA (and only that CA). This behaviour, though inconsistent with other aspects of CA ACLs, is for compatibility with pre-sub-CAs CA ACLs. An alternative approach is being discussed and could be implemented before the final release.

Requesting certificates from sub-CAs

The ipa cert-request command has learned the --ca argument for directing the certificate request to a particular sub-CA. If it is not given, it defaults to ipa.

alice already has a CSR for the key in her smart card, so now she can request a certificate from the sc CA:

% ipa cert-request --principal alice \
    --profile userSmartCard --ca sc /path/to/csr.req
  Certificate: MIIDmDCCAoCgAwIBAgIBQDANBgkqhkiG9w0BA...
  Subject: CN=alice,O=IPA.LOCAL
  Issuer: CN=Smart Card CA,O=IPA.LOCAL
  Not Before: Fri Jul 15 05:57:04 2016 UTC
  Not After: Mon Jul 16 05:57:04 2018 UTC
  Fingerprint (MD5): 6f:67:ab:4e:0c:3d:37:7e:e6:02:fc:bb:5d:fe:aa:88
  Fingerprint (SHA1): 0d:52:a7:c4:e1:b9:33:56:0e:94:8e:24:8b:2d:85:6e:9d:26:e6:aa
  Serial number: 64
  Serial number (hex): 0x40

Certmonger has also learned the -X/--issuer option for specifying that the request be directed to the named issuer. There is a clash of terminology here; the "CA" terminology in Certmonger is already used to refer to a particular CA "endpoint". Various kinds of CAs and multiple instances thereof are supported. But now, with Dogtag and FreeIPA, a single CA may actually host many CAs. Conceptually this is similar to HTTP virtual hosts, with the -X option corresponding to the Host: header for disambiguating the CA to be used.

If the -X option was given when creating the tracking request, the Certmonger FreeIPA submit helper uses its value in the --ca option to ipa cert-request. These requests are subject to CA ACLs.
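Putting that together, a Certmonger tracking request directed at the sc sub-CA might look like the following sketch. The file paths and principal are hypothetical examples; -X is the new issuer option described above.

```shell
# Sketch: track a smart card certificate issued by the "sc" sub-CA
# via the IPA CA endpoint. Paths and principal are hypothetical.
ipa-getcert request \
    -f /etc/pki/tls/certs/alice-sc.crt \
    -k /etc/pki/tls/private/alice-sc.key \
    -K alice \
    -T userSmartCard \
    -X sc
```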

Limitations

It is worth mentioning a few of the limitations of the sub-CAs feature, as it will be delivered in FreeIPA 4.4.

All sub-CAs are signed by the ipa CA; there is no support for "nesting" CAs. This limitation is imposed by FreeIPA – the lightweight CAs feature in Dogtag does not have this limitation. It could be easily lifted in a future release, if there is a demand for it.

There is no support for introducing unrelated CAs into the infrastructure, either by creating a new root CA or by importing an unrelated external CA. Dogtag does not have support for this yet, either, but the lightweight CAs feature was designed so that this would be possible to implement. This is also why all the commands and argument names mention "CA" instead of "Sub-CA". I expect that there will be demand for this feature at some stage in the future.

Currently, the key type and size are fixed at RSA 2048. The same is true in Dogtag, and this is a fairly high priority to address. Similarly, the validity period is fixed, and we will need to address this also, probably by allowing custom CA profiles to be used.

Conclusion

The sub-CAs feature will round out FreeIPA’s certificate management capabilities, making FreeIPA a more attractive solution for organisations with sophisticated certificate requirements. Multiple security domains can be created for issuing certificates with different purposes or scopes. Administrators have a simple interface for creating and managing CAs, and rules for how those CAs can be used.

There are some limitations which may be addressed in a future release; the ability to control key type/size and CA validity period will be the highest priority among them.

This post examined the use cases and high-level user/administrator experience of sub-CAs. In the next post, I will detail some of the machinery that makes the sub-CAs feature work.

July 24, 2016

Reworking Docs

In May of this year the docs team, with the help of some great folks from Red Hat and the CentOS project, held a Documentation FAD. During that event we discussed a lot of important topics, including the docs team's publishing toolchain and the barrier to entry that is docbook.

Over the course of the FAD, and after creating a lot of User Stories, the group came to the following conclusions:

  1. We need to find ways to help enable community members contribute to documentation
  2. Sharing documentation with Red Hat Content Services is good for everyone
  3. The current Publican setup is not meeting our needs
  4. Most people dislike1 docbook

This led us to create a complex requirements matrix that Remy did his best at capturing online, leading to the conclusion that the best solution is to stick with a static publishing toolchain that supports a less user-hostile markup language.

To that end the team suggested that we use Shaun McCance's2 Pintail for publishing, AsciiDoc as the markup language, and a new format for our documentation. The most common question about all of this is "Why?", and the easy answer is that they had the best results against the requirements, but I hope to give a little more insight into that.

Pintail

Publican has been great for the Fedora Docs team over the years, but its real strength is docbook-based full-length docs, and that is not what the docs team is trying to write anymore. When coupled with the fact that the site is generated with an extremely old version of Publican, last supported on Fedora ~18, it was time for us to move on.

Pintail fits the bill because it is simple to use, has a simple code base, and is well supported by a responsive upstream. Being written in Python means that members of the Docs Team, and the Fedora community at large, are already able to troubleshoot and fix bugs, something that was not easy to do with Publican. Additionally, Shaun has been extremely helpful showing us the ropes and working on feature requests that are only needed by Fedora.

Finally, the tool supports our current and future markup languages; more on why this is important in a bit.

Asciidoc

Markdown is probably the most popular markup language around right now, but for a documentation project it is missing a lot of features, and because of that many "flavors" of markdown exist. The issue is simple: markdown was not designed for writing documentation; it was developed so that its creator did not have to write HTML tags. This means that it lacks support for many of the structural elements that make good documentation great. Asciidoc was built to support everything that makes docbook great, while keeping users from endlessly writing <tags>.3

The fact that it is a great markup language for documentation was only one part of the reasoning; it turns out that Red Hat is moving more and more towards AsciiDoc as well, and using the same markup will help the teams collaborate that much more easily. So while AsciiDoc may not be the most popular markup language on the planet, and it still may have a learning curve for some users, it fits the needs of our documentation better than the alternatives.

New Format

Books are great, everyone reads them (or at least did in school), but they are hard to write and harder to maintain. So we are not going to write them anymore; instead, smaller single-page topic-based documents will be written that can be grouped into collections on a larger topic.

For example, a page may be written about disk formatting and be included in a collection about system configuration, but that same topic could also be reused in another collection where formatting a disk is something that needs to be done. This will not only help to reduce the amount of documentation that the team needs to maintain, but it will once again allow us to share content with Red Hat more easily.

The Plan

All of this is a lot of work. Any one of the sections above is a lot of work, but when you put it all together it is a great deal more. Waiting for everything to be complete would mean that we would not see the fruits of this plan for at least a year, but in reality it would probably be several years. Since fruit is good, as it motivates all of us to keep working, the plan is to work on this in phases.

The first phase will be the implementation of Pintail and a system for continuous integration and delivery. This means that once the new site is ready we will begin to use it to publish the current docbook books that are on docs.fedoraproject.org right now.

Once that is done, we will start the long process of rewriting documents to fit the new style, using AsciiDoc rather than docbook, publishing new collections of topics as the team decides that they are ready to publish. That will mean for a period of time we will have both styles of docs on the site, but it also means that we can focus on writing new documentation without having to also work on all of the old style docs for every release until we are done.

Help Wanted

As mentioned before this is a big project, and we need help. Everything from design and engineering to writing. If you want to help join us in #fedora-docs or take a look at the tracking ticket for phase one on Pagure.


  1. Hate is probably just as accurate

  2. He may prefer we refer to it as Project Mallard's Pintail

  3. After thinking about it Sparks and I had the same conversation about this at FUDCon Lawrence in 2013.

Looking for Andre

My Brother sent out the following message. Signal boosting it here.

“A few weeks ago I started talking to a few guys on the street. (Homeless) Let’s call them James and Anthony. Let’s just skip ahead. I bought them lunch. Ok. I bought $42 worth of Wendy’s $1 burgers and nuggets and a case of water. On top of their lunch. They gathered up all their friends by the Library in Copley sq and made sure that everyone ate. It was like a cookout. You should have seen how happy everyone was. It gave me a feeling that was unexplainable.

“This morning I was in Downtown crossings. I got the feeling in my gut again. That do something better today feeling. I saw a blind guy. His eyes were a mess. He was thin. Almost emaciated. Let’s call him Andre’ he is 30 years old.

looking_for_andre

Andre’

I bought him lunch. I sat with him at a table while he ate. We talked. Andre’s back story…8 years ago he was in college. He was a plumbers apprentice. He was going on a date. As he walked up to the door to knock for the girl. Someone came up and shot him twice in the temple. Andre’ woke up in the hospital blind. To this day he has no idea who or why he was shot. The only possessions Andre’ had was the way-too-warm clothes on his back, his blind cane. His sign, and his cup. I took Andre’ to TJ Maxx. It’s 90 degrees at at 9:30am. I got him a t-shirt, shorts, clean socks and underwear and a back pack. After I paid, I took him back to the dressing room so he could have some privacy while he changed. I told the lady at the dressing room that he was going in to change. She told me that wasn’t allowed. I kindly informed her that I wasn’t asking… She looked at me and quickly realized it wasn’t a request. More of a statement. I must have had a look on my face.

I get those sometimes.

She nodded her understanding. In the dressing room Andre’ cried. He was ashamed for crying. I didn’t say much. Just put my hand on his back for a second to let him knew I understood. After he changed I took him back to where I originally met him and found out his routine. Where he goes when and such. I left Andre’ in his spot and went to go find James and Anthony. You remember them from the beginning of this story. They were in the same spot as a few weeks ago. They remembered me. I told them it was time to return the favor. I explained to them that I wanted them to look out for Andre’ to make sure he was safe. Andre’ has been repeatedly mugged. Who the fuck mugs a hungry homeless blind guy? Well. They must have seen the look in my face saying this wasn’t a request.

I apparently get that look sometimes.

They came with me from Copley all the way to downtown crossings. We went looking for Andre’. We looked all over but couldn’t find him. We went all over south station and back up all over downtown crossings. (For those not familiar, Google a map of Boston) we couldn’t find Andre’. Anthony said he’s seen him around and knew who I was talking about. They promised me they would look for him everyday. I know they will too. They look out for theirs. Remember all the food I bought them and how they made sure everyone ate? James doesn’t like bullies. He sure as shit won’t tolerate someone stealing from a blind and scared homeless guy. Anthony spends his mornings in south station. He promised me that he will find him and try to bring him to where they stay. It’s safer in numbers and when you have a crew watching your back. You have to know who to trust. That’s what they told me. I gave James and Anthony some money for their time and bought them each a cold drink.

“It’s fucking hot out.

“These guys are all on hard times. Some of them fucked up. Some were just unlucky. Andre’…now that’s some shit luck. That’s just not fucking fair. I’ve never met someone like Andre’. How in the hell would I survive if I couldn’t see? I have an amazing family and a great group of friends. Andre’ has no one. Did I change his life? Nope. Did I make his day better? I honestly hope so. I talked to him like a man. I didn’t let him know how horrible I felt for him. No matter how far you fall in life. If you have the strength to get up each day and try to feed your self, you still have pride, you still have hope. I didn’t want to take away any of his pride. He doesn’t have much to begin with. But he must have a little. I will continue to look for Andre’ every day. I met him near my office. I can look during my lunch. I have to find him and keep an eye on him.

“No matter how bad things get. No matter how unfair you feel you have been treated. Pretty much no matter what your lot in life is. Think of Andre’ when you feel down. If he has the strength to go on… So do you.

“I didn’t write this to say ‘look what great things I did.’ I wish I could write this with out being part of the story. There is no way I could express how much this meeting of Andre’ has effected me with out letting you know this is what I did today. ..

“I just got home from this experience. I’ll update this when I find Andre’ and let you know how he’s doing. If anyone in Boston reads this and sees a black guy about my height. Thinner than me…Obviously blind.

“Please hashtag ‪#‎lookingforAndre‬ and tell me where you saw him. Like I said. South station or downtown crossings are the areas that I know of. Thank you for reading this. Help me find Andre’.”

And then he sent this

“I found Andre’. He is meeting me for breakfast tomorrow.”

 

UPDATE:

Billy set up a fundraising account for Andre.

 

New website from Fedora Latam people.

http://fedora-latam.org/


FISL 17.

This year FISL was smaller than in previous years due to less money from the government sponsoring it (the political crisis in Brazil).

Who needs the government? FISL was smaller, but those who attended liked it: fewer government booths and more space for free software communities.

+Ana Mativi helped at the Fedora booth. She is a very skilled girl who will become a Fedora packager anytime soon. +Paulo César Pereira de Andrade will probably sponsor her.

+Eduardo Javier Echeverria Alvarado approved +Daniel Lara and +Bruno Zanuzzo into the packager group.

+Wolnei Junior gave several talks about Fedora and how to be a successful contributor.

To have a good chimarrão, never let the water boil

I am starting to like it; highly recommended for cold days
Military Museum in Porto Alegre.

Old stuff.
Final phase

Prototypes

Last week I fully finished the prototype for the release widget and started coding the calendar widget's monthly and weekly views. So far the implementation consists of the main view and the weekly view (link). I am hoping to finish this by Monday evening and concentrate on prototyping the empty-state widget.

More issues!

I have taken on some more issues to solve this week: one is a ticket in the design trac, namely “This week in fedora”, and another involves mobile prototyping and mockups for the Hubs project. This seems like an interesting challenge, and I hope to concentrate on it in the coming week.

Hyperkitty

I finally finished the Hyperkitty heuristics evaluation report after getting feedback! The report can be viewed here.

Bodhi

I finally had a chat with Luke about the redesign project, in which I explained to him the progress we have made so far. He particularly liked the idea of a Koji link in the web UI so that it becomes easier to download packages; currently users have to go to another website to download them. Further, he suggested opening an RFE for this idea as well. The RFE can be found here.

Further, the first draft of the pre-interview survey is also ready and can be viewed here. I am open to suggestions for improving the survey!

July 23, 2016

Fedora Brazil in telegram.

https://telegram.me/fedorabr



All systems go
New status good: Everything seems to be working. for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
Minor service disruption
New status minor: Unknown network outage occured for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
Major service disruption
New status major: Network disruption to main datacenter for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Fedora Infrastructure Cloud, Account System, Package maintainers git repositories, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
PHP on the road to the 7.1.0 release

Version 7.1.0beta1 has been released. It now enters the stabilisation phase for the developers, and the test phase for the users.

RPMs are available in the remi-php71 repository for Fedora ≥ 23 and Enterprise Linux ≥ 6 (RHEL, CentOS), and as a Software Collection in the remi-safe repository.

 

The repository provides development versions which are not suitable for production usage.

Also read:

Installation: read the Repository configuration or the Configuration Wizard and choose the installation mode.

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update php\*

Parallel installation of version 7.1 as Software Collection (x86_64 only, recommended for tests):

yum install php71
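Once installed as a Software Collection, PHP 7.1 does not replace the system PHP; a usage sketch, assuming the standard scl tooling and the php71 collection name used above:

```shell
# Run a single command inside the php71 collection
scl enable php71 'php --version'

# Or open a shell with the collection's PHP on the PATH
scl enable php71 bash
```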

To be noted:

  • EL7 RPMs are built using RHEL 7.2
  • EL6 RPMs are built using RHEL 6.8
  • various extensions are also available, see the PECL extension RPM status page
  • follow the comments on this page for updates until the final version.

For more information, read:

Base packages (php)

Software Collections (php70)

New "remi-php71" repository

I've just opened the remi-php71 repository for Fedora ≥ 23 and Enterprise Linux ≥ 6.

The current version is PHP 7.1.0beta1, with about 75 extensions already compatible.

This repository provides development versions which are not suitable for production usage.

The repository configuration is provided by the latest version of the remi-release package:

  • remi-release-23-4.fc23.remi
  • remi-release-24-2.fc24.remi
  • remi-release-6.8-1.el6.remi
  • remi-release-7.2-1.el7.remi

As with the other remi repositories, it is disabled by default, so enabling it for an update is an administrator's choice.

E.g. to update the PHP system version:

yum --enablerepo=remi update remi-release
yum --enablerepo=remi-php71 update php\*

As some extensions are not yet available, the update may fail; in that case you have to remove the not-yet-compatible extensions, or wait for their updates.

PHP 7.1 as a Software Collection stays in "remi-safe", as there are no conflicts with the base packages.

Getting started with the LPIC syllabus
A while ago the idea of taking the LPIC exams, to put my Linux knowledge down on paper, crossed my mind, and I haven't stopped turning it over since, until I finally sat down and got to work. By the end of this year I hope to obtain at least LPIC-1 and LPIC-2.

For those who don't know what LPIC (Linux Professional Institute Certification) is: these are certifications offered by the LPI, the Linux Professional Institute, a non-profit organization founded in Canada in December 1999 that promotes Linux, Free Software and Open Source technology through its certification programs.

LPI logo, obtained from LinkedIn
The difference between this program and the Red Hat or Novell certifications is that it targets all Linux distributions, not just Red Hat products such as RHEL (Red Hat Enterprise Linux), or Novell with SUSE, to give an example.

This certification program consists of three levels:
  • LPIC-1: two exams, 101 and 102, which earn you the Junior level.
  • LPIC-2: like LPIC-1, two exams, 201 and 202, which earn you the Intermediate level.
  • LPIC-3: this level consists of three exams; it is the final step of the program and earns you the Senior level.
As for the exams, according to the book "Preparación para la certificación LPIC-I" from the Eni publishing house, they can be taken in Spanish, but you may have to take the exam in English. Each exam lasts 90 minutes, during which you may not leave the room or the designated area.

The exams can contain three types of answers to the questions asked:
  • A single answer to provide (written or typed in)
  • Multiple-choice with a single correct answer
  • Multiple-choice with several possible answers
You must pass the exam with a score of 60%. It can be taken either on a computer or on paper. On a computer you get the result immediately, but on paper the result takes three to five weeks to arrive. If you fail an exam, you get one free retake.

The exams have a price that I suppose varies depending on the affiliated center where you take them. For example, at PUE Barcelona each exam costs €226 + 21% VAT, administration fees included; you can look up your nearest center from here. And as the LPI representative in Spain told me, you can take both exams back to back, even without knowing the grade of the first one.

When you complete each of these levels, you obtain an independent certification that backs up your knowledge of the wide range of distributions on the market, covering for example:
  • Working with Linux commands on the command line
  • Performing maintenance tasks, from the simplest to the most complex
    Tux, the Linux mascot.
    Image obtained from Wikipedia.
  • Managing users and groups from the GUI or the command line
  • Installing and configuring a workstation
  • Designing, installing, maintaining and securing Linux systems
  • Managing servers
  • Working with FTP, LDAP, SSH, PAM...
  • Managing systemd
  • Automating tasks
And much more

If you decide to go for it, note that the syllabus is decentralized; that is, you can put together your own notes based on the objectives of each exam, since buying a book is not required. And if you prefer a tutored course, there are academies that offer training courses for the different LPIC levels, including vouchers, offers, etc., but that obviously raises the price, up to €1000 per level.

As I am doing with CCNA, I will keep publishing LPIC-related content, tagged with the keyword LPIC.

References

PhantomJS 2.1.1 in Ubuntu different from upstream

For some time now I've been hitting PhantomJS #12506 with the latest 2.1.1 version. The problem is supposedly fixed in 2.1.0 but this is not always the case. If you use a .deb package from the latest Ubuntu then the problem still exists, see Ubuntu #1605628.

It turns out the root cause of this, and probably other problems, is the way PhantomJS packages are built. Ubuntu builds the package against their stock Qt5WebKit libraries which leads to

$ ldd usr/lib/phantomjs/phantomjs | grep -i qt
    libQt5WebKitWidgets.so.5 => /lib64/libQt5WebKitWidgets.so.5 (0x00007f5173ebf000)
    libQt5PrintSupport.so.5 => /lib64/libQt5PrintSupport.so.5 (0x00007f5173e4d000)
    libQt5Widgets.so.5 => /lib64/libQt5Widgets.so.5 (0x00007f51737b6000)
    libQt5WebKit.so.5 => /lib64/libQt5WebKit.so.5 (0x00007f5171342000)
    libQt5Gui.so.5 => /lib64/libQt5Gui.so.5 (0x00007f5170df8000)
    libQt5Network.so.5 => /lib64/libQt5Network.so.5 (0x00007f5170c9a000)
    libQt5Core.so.5 => /lib64/libQt5Core.so.5 (0x00007f517080d000)
    libQt5Sensors.so.5 => /lib64/libQt5Sensors.so.5 (0x00007f516b218000)
    libQt5Positioning.so.5 => /lib64/libQt5Positioning.so.5 (0x00007f516b1d7000)
    libQt5OpenGL.so.5 => /lib64/libQt5OpenGL.so.5 (0x00007f516b17c000)
    libQt5Sql.so.5 => /lib64/libQt5Sql.so.5 (0x00007f516b136000)
    libQt5Quick.so.5 => /lib64/libQt5Quick.so.5 (0x00007f5169dad000)
    libQt5Qml.so.5 => /lib64/libQt5Qml.so.5 (0x00007f5169999000)
    libQt5WebChannel.so.5 => /lib64/libQt5WebChannel.so.5 (0x00007f5169978000)

While building from the upstream sources gives no Qt dependencies at all (Qt is linked into the binary):

$ ldd /tmp/bin/phantomjs | grep -i qt

If you take a closer look at PhantomJS's sources you will notice there are 3 git submodules in their repository - 3rdparty, qtbase and qtwebkit. Then in their build.py you can clearly see that this local fork of QtWebKit is built first, then the phantomjs binary is built against it.

The problem is that these custom forks include additional patches to make WebKit suitable for Phantom's needs. And these patches are not available in the stock WebKit library that Ubuntu uses.

Yes, that's correct. We need additional functionality that vanilla QtWebKit doesn't have. That's why we use custom version.

Vitaly Slobodin, PhantomJS

At the time of this writing, Vitaly's qtwebkit fork is 28 commits ahead of and 39 commits behind qt:dev. I'm surprised Ubuntu's PhantomJS even works.

The solution, in my opinion, is to bundle the additional sources into the src.deb package and use the same build procedure as upstream.

July 22, 2016

Festival unites art and technology

Festival Internacional de Linguagem Eletrônica

Robinson, a work from Taiwan and the United Kingdom, has several facial expressions that change according to the narrative of a story told in the background.

In July the city of São Paulo hosts another edition of FILE (Festival Internacional de Linguagem Eletrônica), a show of works that unite art and technology, such as installations, projections, video art, animations, games and media for 3D glasses. Besides the indoor exhibition, the Fiesp building on Avenida Paulista, where the event takes place, is lit up every night by projections of video art that dialogue with the event's proposal.

Bringing a free event featuring the latest in art to a high-traffic avenue is always extraordinary. The works seek to offer new visions of technology, less utilitarian and hierarchical, while always engaging with the ephemeral nature of electronics in the contemporary era. But the most interesting thing is the number of interactive works: the contact is not limited to the big televisions that formed corridors in the hall, with controllers and bean bags for the public to play the most original titles in the alternative games market, some with 3D glasses and headphones. Large, colorful, noisy objects from across the ocean begged to be handled. There was even a fully digital pinball machine available to visitors, and a 3D headset that takes us inside Van Gogh's paintings.

It could be our friendship, but you don't cooperate: The Indivisible came from Japan and projects colored pixels across a wall.

One of the works that drew the most attention was TAPE, which crossed the event's walls and ran along the outside of the building. It was a web made of adhesive tape, forming a duct you could walk through. The attraction, made by Croatian and Austrian artists, was surrounded by a long line of people eager to stroll through it.

Another popular attraction was "Be boy Be girl", a Dutch work that takes us to a beach through 3D glasses, headphones, heaters and a light sea breeze produced by a fan, while we lie on a deck chair. At the start of the activity, we can choose whether we want a female or male body, which will feel like our own during the procedure.

Besides the purely interactive works, the exhibition also features animations, video art and a gallery of GIFs. On Sundays there are also Brazilian musical shows, such as singer Érica Alves. Some workshops will also be taking place; check the schedule on the website.

The event, now in its 17th edition, seemed slightly smaller to me than last year's. The event's website also offers its catalogue of works, but its navigation is difficult, which is quite ironic. The physical catalogue can be purchased at the store next to the exhibition hall for 30 bucks.

Kalenjdoskop is a German work that changes color when touched.

Event

FILE 2016 – Festival Internacional de Linguagem Eletrônica
July 12 to August 28

Galeria de Arte do SESI-SP | Centro Cultural Fiesp – Ruth Cardoso
Avenida Paulista, 1.313 – across from the Trianon-Masp metro station

Daily, from 10 a.m. to 8 p.m. (entry until 7:40 p.m.)
Open to all audiences. Free admission.
More information at www.file.org.br
Group bookings: Monday to Friday, from 10 a.m. to 4 p.m.,
by phone at (11) 3146-7439
http://www.sesisp.org.br/cultura/exposicao/file-2016.html

Setting up a telnet handler for OpenStack Zuul CI jobs in GNOME 3

The OpenStack Zuul system has gone through some big changes recently, and one of those changes is around how you monitor a running CI job. I work on OpenStack-Ansible quite often, and the gate jobs can take almost an hour to complete at times. It can be helpful to watch the output of a Zuul job to catch a problem or follow a breakpoint.

New Zuul

In the previous version of Zuul, you could access the Jenkins server that was running the CI job and monitor its progress right in your browser. Today, you can monitor the progress of a job via telnet. It’s much easier to use and it’s a lighter-weight way to review a bunch of text.

Some of you might be saying: “It’s 2016. Telnet? Unencrypted? Seriously?”

Before you get out the pitchforks, all of the data is read-only in the telnet session, and nothing sensitive is transmitted. Anything that comes through the telnet session is content that exists in an open source repository within OpenStack. If someone steals the output of the job, they’re not getting anything valuable.

I was having a lot of trouble figuring out how to set up a handler for telnet:// URLs that I clicked in Chrome or Firefox. If I clicked a link in Chrome, it would be passed off to xdg-open. I'd press OK on the window and then nothing happened.

Creating a script

First off, I needed a script that would take the URL coming from an application and actually do something with it. The script will receive a URL as an argument that looks like telnet://SERVER_ADDRESS:PORT and that must be handed off to the telnet executable. Here’s my basic script:

#!/bin/bash

# Remove the telnet:// prefix and change the colon before the port
# number to a space, e.g. telnet://host:port becomes "host port".
TELNET_STRING=$(echo "$1" | sed -e 's|telnet://||' -e 's/:/ /')

# Telnet to the remote session. $TELNET_STRING is left unquoted on
# purpose so the host and port are passed as two arguments.
/usr/bin/telnet $TELNET_STRING

# Don't close the terminal until the user is done reading
read -p "Press a key to exit"

I saved that in ~/bin/telnet.sh. A quick test with localhost should verify that the script works:

$ chmod +x ~/bin/telnet.sh
$ ~/bin/telnet.sh telnet://127.0.0.1:12345
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
Press a key to exit
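The URL-to-arguments conversion is the only tricky part of the script, and it can be sanity-checked in isolation. A small sketch (the Zuul host and port below are made up for illustration):

```shell
# Hypothetical Zuul stream address, just to exercise the sed expression
url="telnet://zuul.example.org:19885"

# Strip the telnet:// scheme, then turn the first colon into a space
args=$(echo "$url" | sed -e 's|telnet://||' -e 's/:/ /')

echo "$args"   # prints: zuul.example.org 19885
```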

Linking up with GNOME

We need a .desktop file so that GNOME knows how to run our script. Save a file like this to ~/.local/share/applications/telnet.desktop:

[Desktop Entry]
Version=1.0
Name=Telnet
GenericName=Telnet
Comment=Telnet Client
Exec=/home/major/bin/telnet.sh %U
Terminal=true
Type=Application
Categories=TerminalEmulator;Network;Telnet;Internet;BBS;
MimeType=x-scheme-handler/telnet
X-KDE-Protocols=telnet
Keywords=Terminal;Emulator;Network;Internet;BBS;Telnet;Client;

Change the path in Exec to match where you placed your script.

We need to tell GNOME how to handle the x-scheme-handler/telnet mime type. We do that with xdg utilities:

$ xdg-mime default telnet.desktop x-scheme-handler/telnet
$ xdg-mime query default x-scheme-handler/telnet
telnet.desktop

Awesome! When you click a link in Chrome, the following should happen:

  • Chrome will realize it has no built-in handler and will hand off to xdg-open
  • xdg-open will check its list of mime types for a telnet handler
  • xdg-open will parse telnet.desktop and run the command in the Exec line within a terminal
  • Our telnet.sh script runs with the telnet:// URI provided as an argument
  • The remote telnet session is connected

The post Setting up a telnet handler for OpenStack Zuul CI jobs in GNOME 3 appeared first on major.io.

First round of Fedora 24 Updated Lives now available. (torrents expected later this week)

As noted by my colleague on his blog, the first round of F24 Updated Lives is now available, carrying the date 20160720. As he also mentioned last week, F23 respins are not going to be actively made; however, we and the rest of the volunteer team will field one-off requests as time and resources permit. We are considering a new/second tracker for the updated spins, but as of today only .ISO files are available at https://alt.fedoraproject.org/pub/alt/live-respins [shortlink] F24 Live-Respins. The F24 respins carry the 4.6.4-200 kernel and roughly ~500M of updates since the Gold ISOs were released just 5 weeks ago (some ISOs have more updates, some less).

CHECKSUM512-20160720 is hosted at the link above as well as on my usual fedorapeople space.

HASHSUM512-20160720 is also hosted on my fedorapeople space; the tracker should be back up and running within the week.
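To check a downloaded ISO against a published checksum file, sha512sum's check mode does the work. A self-contained sketch with throwaway file names standing in for the real downloads:

```shell
# Create a stand-in for a downloaded ISO (the real file would come
# from the live-respins directory)
echo "fake iso contents" > F24-DEMO.iso

# A CHECKSUM512 file is just sha512sum output; regenerate an
# equivalent one locally for this demo
sha512sum F24-DEMO.iso > CHECKSUM512-demo

# Verify: prints "F24-DEMO.iso: OK" on success
sha512sum -c CHECKSUM512-demo
```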

Updates & Notables:

4.6.4-200 Kernel

 


Video Thumbnail in Dolphin – Fedora 23/24

After a long, long, long time, finally I can view my videos thumbnails in Dolphin.

I reported a bug in the RPM Fusion Bugzilla back in January 2016, and earlier this morning I got the surprise that it was resolved!

I want to thank Leigh Scott for getting this done. I’m really happy about it, since I have lots and lots of videos and being able to look at a small thumbnail is really helpful.

So, to get the package installed, all you need to do is:

$ sudo dnf install ffmpegthumbs

After that, you just need to enable it in Dolphin by going to Preferences > Configure Dolphin > General > Previews > Video files (ffmpegthumbs) and that's it!

Thanks again for making this happen.


How to set the hostname on Fedora

A Fedora system has a hostname that helps it identify and distinguish itself on a network. Sometimes this name appears as part of a fully qualified domain name (FQDN). A FQDN includes not just the system’s name, but the Internet domain, separated by periods (.).

Hostname conventions

To be valid, a hostname may only contain letters a-z, numerals 0-9, and dashes (-). An example is office-01. The FQDN for that machine might be office-01.example.com, where example.com is the domain.

Each Fedora system also has a special reserved name, localhost, which it sometimes uses to refer to itself. This may sound like overkill, but it's useful. The localhost name lets the system easily access services it provides itself. You may also see this reserved name as a FQDN in the form localhost.localdomain.
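The naming convention is easy to check from a script. A small sketch; the grep pattern below simply mirrors the rule stated here (it is not what hostnamectl itself uses):

```shell
# Return "valid" if a candidate name uses only a-z, 0-9 and dashes
check_hostname() {
    if echo "$1" | grep -Eq '^[a-z0-9-]+$'; then
        echo "valid"
    else
        echo "invalid"
    fi
}

check_hostname "office-01"       # prints: valid
check_hostname "office_01.bad"   # prints: invalid
```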

Setting the hostname

To set the name of a single, modern Fedora system such as a home computer that isn’t part of a network, use the hostnamectl command:

hostnamectl set-hostname new-name

Getting fancy

The hostnamectl utility distinguishes between three different kinds of names:

  1. The static name used by default at system bootup
  2. The transient name assigned by network configuration
  3. The pretty name which may be more descriptive, like “Mary’s living room laptop”

The pretty name isn’t limited to just the valid characters for static or transient name.

The command above sets all names to the same value. To set only one of them, use one or more of the options --static, --transient, or --pretty.

The static name is stored in the /etc/hostname file for later reference. You can also check the current status of all names with this command:

hostnamectl status

The utility tracks other information, such as icons that may be used to represent the system in graphical interfaces. For more information, check out this Freedesktop.org page.

Using Cockpit

You can also use Cockpit to control the hostname settings in your system, or a remote system. If you’re not familiar with Cockpit, check out this overview we published earlier. Cockpit allows you to set the system name with a point and click operation.

First, from the dashboard select System. Notice this dashboard refers to localhost, which is your local system itself.

Cockpit dashboard view - System

Select the Host Name to modify the current settings:

Cockpit - changing the system names


Image courtesy Travis Wise – originally posted to Flickr as Hello My Name Is.

My work on changing CirrOS images

What is CirrOS, and why was I working on it? That was quite a common question when I mentioned what I have been working on during the last few weeks.

So, CirrOS is a small image to run in a cloud. OpenStack developers use it to test their projects.

Technically it is yet another Frankenstein OS: it is built using Buildroot 2015.05 and uses uClibc or glibc (depending on the target architecture). Then the Ubuntu 16.04 kernel is applied on top, and grub (also from Ubuntu) is used to make it bootable.

The problem was that none of this was done in a UEFI-bootable way…

My first changes were to switch the images to GPT, create an EFI system partition, and put a bootloader there. I first used the CentOS grub2-efi packages (as they provided ready-to-use EFI binaries) and later switched to the Ubuntu ones, as the upstream maintainer (Scott Moser) prefers to have all external binaries come from one source.

While he was on vacation (so the merge request had to wait) I started digging deeper and deeper into the scripts.

I fixed the getopt use, as arguments passed between scripts were read partly via getopt and partly by assigning variables to ${X} (where X is a number).
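The CirrOS patches themselves aren't reproduced here, but the general pattern is replacing positional ${1}, ${2} reads with a getopt parsing loop. A generic sketch with made-up option names (the real CirrOS scripts differ):

```shell
# Simulate a script invocation with hypothetical options
set -- --arch aarch64 --verbose

# util-linux getopt normalizes and validates the argument list
args=$(getopt -o "a:v" -l "arch:,verbose" -- "$@") || exit 1
eval set -- "$args"

arch=""
verbose=0
while true; do
    case "$1" in
        -a|--arch)    arch="$2"; shift 2 ;;
        -v|--verbose) verbose=1; shift ;;
        --)           shift; break ;;
    esac
done

echo "arch=$arch verbose=$verbose"   # prints: arch=aarch64 verbose=1
```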

All scripts were moved to Bash (as /bin/sh on Ubuntu is usually Dash, a minimalist POSIX shell), whitespace was unified across all scripts, and some other things happened as well.

At one point all the scripts totaled 1835 lines and my diff was 2250 lines (+1018/-603) long. Thankfully Scott came back and we got most of that merged.

Recent (2016.07.21) images are available and work fine on all platforms. If you use them with OpenStack then please remember to set the "short_id" property to "ubuntu16.04"; otherwise there may be a problem finding the rootfs (there is no virtio-scsi driver in the disk images).

Summary:

architecture    booting before           booting after
aarch64         direct kernel            UEFI or direct kernel
arm             direct kernel            UEFI or direct kernel
i386            BIOS or direct kernel    BIOS, UEFI or direct kernel
powerpc         direct kernel            direct kernel
ppc64           direct kernel            direct kernel
ppc64le         direct kernel            direct kernel
x86-64          BIOS or direct kernel    BIOS, UEFI or direct kernel
CCNA 5.0 Level 1, Chapter 1
Good morning,

Here are the first CCNA 5.0 notes in Spanish for level 1, starting with chapter 1, so you can enjoy them and go straight to the important things. Unfortunately, this chapter contains a lot, and I really mean a lot, of irrelevant information decorating what matters.

Here is a short summary of what this first chapter contains:

  • Evolution of the different network models and trends
  • Some forms of communication using networks
  • Network scaling: home, SOHO, medium and/or large, and worldwide networks
  • Examples of shared networks
  • Server and/or client machines
  • Peer-to-peer networks
  • Categories of network infrastructure components
  • Examples of network devices such as switches, routers...
  • Criteria for selecting network media
  • Types of diagrams
  • And much more...
To download it, you have access to two git repositories called #ptlabs. I keep a CHANGELOG file that records every change to the repository; that is, if an erratum is corrected, extra information is added, etc., it gets recorded there.

I have also added a CHECKSUM file so you can check the file's integrity and know for certain whether it was downloaded correctly.
Remember that if you want to practice with the Packet Tracer software and the program won't start, you can use my script, which works on openSUSE Leap, Debian, Fedora and Gentoo.
Make your Gnome title bar smaller – Fedora 24 update

I wrote a similar post before about making the massive title bar in GNOME smaller. But it doesn't work anymore in Fedora 24!

Don't worry, I have created an update for you, so you can enjoy more space on your desktop again!

All you need to do is put the following CSS code into ~/.config/gtk-3.0/gtk.css:

window.ssd headerbar.titlebar {
    padding-top: 4px;
    padding-bottom: 4px;
    min-height: 0;
}

window.ssd headerbar.titlebar button.titlebutton {
    padding: 0px;
    min-height: 0;
    min-width: 0;
}