September 03, 2015

How I organize my photos

Not long ago there was a discussion about how people deal with their photos: organizing, editing, archiving and such, and I gave a partial answer then. Why partial? Because I follow two slightly different processes: one when the photos are made for fun, the other when they are for work. Since that answer was partial and given behind a walled garden, I feel the need to expand on it in a public piece. I don't pretend what I do is perfect; I recognize some flaws myself. But I got here after years of improvements, and it is not final.

As a side note, I use a Linux desktop (MATE on Fedora) and almost exclusively Free Software: GIMP, darktable, ImageMagick, UFRaw, G'MIC. But what I do is pretty generic and can be done with various other tools. I may follow up with another piece on using these tools.

Fun is fun


When I talk about pictures made for fun, I mean they are not made for a paying customer, period. This can include anything from photos made for exhibitions, snapshots with my daughter, pictures for my blog, for Wikipedia and whatnot. Usually I take them with an older APS-C DSLR, a Canon 600D, but sometimes I bring the full-frame DSLR. For the most part I try to protect the better camera, but sometimes I am lazy and grab whatever is closer, or greedy and want prettier pictures.

The first thing to note is that for fun pictures, in the large majority of cases, I shoot JPEG. ...yes, I hear the outrage at such blasphemy, but the truth is that JPEG is good enough for most of those pics; RAW would be a waste of space and time. When I feel the shoot is important or the light is really difficult, I do use RAW, even for fun pictures.

As a matter of discipline, and to keep myself in shape, I try to take pictures as often as possible, ideally every day, and as soon as possible I download the pictures to my computer and then erase the memory cards. The camera has to be ready at any moment to take as many pictures as possible.

I do not use any fancy software to organize the pictures, just the file manager and a directory structure. Of course, it helps that the file manager, with the right plugin, can display thumbnails even for RAWs. The photos taken in a day go into a folder named YYYY-MM-DD; for example, yesterday's pics are in the folder 2015-09-02. Sometimes, when I want to find the folder more easily, I add a keyword, so I have a 2015-08-14-seaside.
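
The daily download step can be scripted; here is a minimal sketch (the card mount point and photo tree are illustrative stand-ins, not my actual layout):

```shell
# Stand-in for the memory card mount; replace with the real mount point
SRC="${SRC:-/tmp/card-demo}"
mkdir -p "$SRC" && touch "$SRC/IMG_0001.JPG"   # demo file in place of a real photo

# Today's folder, named YYYY-MM-DD; append a keyword by hand when wanted
DEST="$HOME/Pictures/$(date +%F)"
mkdir -p "$DEST"
cp -n "$SRC"/*.JPG "$DEST"/                    # -n: never overwrite existing files
ls "$DEST"
```

Once the copy is verified, the card can be erased from the camera.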


As soon as the pictures are downloaded, I try to process them - the next day will probably bring others, and the newest are always the most exciting. So I enter the folder and delete some photos: the failed or boring ones. I still don't delete enough (or still take too many), but I'm getting there, improving continuously (space is cheap, some will say). From the too many pictures left undeleted, I copy a few into a working folder, to be edited and then published. Every year I have a new working folder, and when there are many pictures from a certain event (say, more than 10), they go into a subfolder.

I edit my 'for fun' pictures almost exclusively with GIMP; it is the editing software I feel most comfortable with and the one that gives me the most control. There are not many pictures, so I can take my time with them. If there are RAWs, GIMP will call UFRaw for the import, and in the rare cases where it is needed, G'MIC provides some advanced filters. For batch operations like mass resizing or mass watermarking, there is ImageMagick.
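
As an illustration of the ImageMagick batch step, a mass resize for the web might look like this (the 1024px size and quality 85 are my guesses, not necessarily the settings used here; it assumes ImageMagick is installed):

```shell
mkdir -p /tmp/im-demo/web && cd /tmp/im-demo
convert -size 1200x800 xc:gray demo.jpg         # stand-in for a real photo
# Resize anything over 1024px on the long edge into web/; '>' means shrink only
mogrify -path web -resize '1024x1024>' -quality 85 *.jpg
identify -format '%wx%h\n' web/demo.jpg
```

The originals stay untouched; only the copies written to web/ are resized.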

Speaking of watermarks, I almost never use them, but there are a few exceptions, like pictures which I suspect have the potential to be 'stolen' by newspapers (it has happened a few times, even when the pictures were watermarked). I do believe a watermark destroys the image, so I try to avoid it.

Again, because the next day may bring more pictures, I try to publish my photos as soon as possible. Still, I don't want to spam my viewers, so sometimes there is a delay. For the photography blog, I don't post more than 4 items a day, and on photography sites (the likes of 500px) I post only once in a while. Social media is something I still have to work on: I lost a lot of readers (or at least interactions with readers) a couple of years ago; I blame that on posting too much and am trying to work on it. Publishing goes hand in hand with licensing, so almost everything shot for fun is published under a CC-BY-SA license: free to use.

Of course, there is archiving. From time to time (not on a schedule, mostly when I run out of space) I move the unedited pictures, with their directory structure, from the computer's hard drive to two external drives - a manual process. The edited pictures stay on the computer for the entire year, maybe even the next. They have copies online, and at least the copy on G+ is high quality (did you know Facebook destroys your pictures with aggressive compression and metadata removal?)

Flaws

As I said before, I recognize some flaws. The most important couple of them:

  • I do not have continuous backup; there is one only when pictures are moved to the external drives. What is currently on the computer is in danger of data loss. Still, they are 'for fun' pictures and I am lazy, so the loss wouldn't be huge - at most a few weeks of 'for fun' pictures;
  • When I am away for a while, on a trip or vacation, I can't properly process the photos, so when I return home a lot of work piles up. For a while I have to process both old and new images.

Work is serious


For work, you have to deliver the best result from a technical point of view, so when there is a paying customer I use my full-frame DSLR, which happens to be a Canon 6D, a camera recognized for its good low-light performance. As for shooting, the pictures are taken as RAW and JPEG: the JPEG is there as a backup, while the RAW is the one to be edited. Here I need 1) to get the most possible out of the pictures and 2) to deal with low-light situations, which happen a lot when doing event photography.

Again, as soon as I get home, I download the pictures from the memory cards. But I do not delete the cards, I put them in a closet, to have a backup somewhere until the processing is done. Processing the photos for an event can take up to a few weeks.

I have a different directory hierarchy for the work photos, so I copy there all the files, in a directory named after the specific client or work. If the work was an event, the first thing is to make a quick and small selection (10-20 pictures) which I edit fast and deliver the same day, as a preview. The idea is for the client to have something really fast, and if he wants to post pictures on social media while it's hot, he can post pictures from me, not some crappy phone-made images.

Then I parse the files with the file manager and its native image viewer, deleting only very few, and make a selection with images to be edited and delivered. From this selection I copy all the RAWs in a different, working folder.


Considering the large number of images (for a wedding it can be around 1000 pictures), editing with GIMP would be a poor option, so I use darktable instead. After a few days or weeks, depending on the size of the job, the images are exported with darktable at a resolution good for large prints. Then, for some images that I think need more advanced editing, I open and process them further with GIMP.

After that, I deliver to the client the images, in two sets: one at big, printable, resolution, and another resized for web use. Of course, there is no watermark in sight, the client paid for the images, they are not to be tainted in any way.

If the job requires it, I then start working on the printed album. Here the work is done with GIMP ...blasphemy, I hear again? Why not use Scribus? Simple: the print shop requires sRGB JPEGs, and they do a very nice job with that. When an engraving is to be made on the album's leather cover, I prepare it with Inkscape.

Only after the printed album has been delivered to the client can I consider the job done. Then I move the files (sources, edits, album pages) to the two external drives and erase the memory cards.

Of course, somewhere during this process, when I get the time, a few pictures are added to my online portfolios. I have to advertise myself, right?

Flaws

  • Since there is a lot of time between when the pictures are taken and when they get into the backup system, for a while the memory cards are the backup. I should probably change that and save them sooner;
  • I still have a lot to do with promotion.

Hello World! (Version 1.1)

This is the first post on this new blog. Changing the name, more personalized, with varied categories, but with one main focus: Fedora.

FedWikiJR

Fedora Wiki: José A. Reyes H.

Goodbye and thanks: Conocefedora.


More Fedora 22 scrollbar annoyances (fixed)

After my previous encounter with scrollbar annoyances there was one more to fix: the 1px dead zone between the scrollbar and the screen edge in Firefox. There has been a bug report for quite some time, but no fix yet. Fortunately, there was another bug a few years ago (https://bugs.launchpad.net/ubuntu/+source/firefox/+bug/125734) for which a workaround had been posted.

So, if you also use a trackpoint or just like to grab the scroll bar with your mouse after a swift "hit the right edge", create a file ~/.mozilla/firefox/<your profile>/chrome/userChrome.css and add:

hbox#browser { margin-right: -1px !important; }
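
If you prefer to script it, something like this drops the rule into place (the profile glob and the demo fallback path are my guesses; adjust to your actual profile directory):

```shell
# Locate the default profile (fall back to a demo path for this sketch)
PROFILE=$(ls -d "$HOME"/.mozilla/firefox/*.default 2>/dev/null | head -n1)
PROFILE="${PROFILE:-$HOME/.mozilla/firefox/demo.default}"
mkdir -p "$PROFILE/chrome"
echo 'hbox#browser { margin-right: -1px !important; }' >> "$PROFILE/chrome/userChrome.css"
cat "$PROFILE/chrome/userChrome.css"
```

Restart Firefox afterwards for userChrome.css to be picked up.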

Hopefully that's the last annoyance (well, almost at least). Besides avoiding the duck and cover for robot development for some time there are other reasons to stick with a Fedora version for longer than just six months...

From a diary of AArch porter – POSIX.1 functionality

Over years of development, GCC gained several switches which are now considered obsolete/deprecated, and as such they are not available for new ports. Guess what? AArch64 has that status too.

One of those switches is “-posix”. It is not needed anymore, as the “_POSIX_SOURCE” macro deprecated it:

Macro: _POSIX_SOURCE

If you define this macro, then the functionality from the POSIX.1 standard (IEEE Standard 1003.1) is available, as well as all of the ISO C facilities.

But it still turns up sometimes (I saw it in pdfedit 0.4.5, which is so old that it still uses Qt3). So if you find it somewhere, please save the world with “s/-posix/-D_POSIX_SOURCE/g” :)
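
In practice that substitution is a one-liner over the offending build files; here is a demo on a throwaway Makefile (the directory and its contents are invented for illustration):

```shell
mkdir -p /tmp/posix-demo && cd /tmp/posix-demo
printf 'CFLAGS = -posix -O2\n' > Makefile      # demo file carrying the obsolete switch
# Find every file using -posix and rewrite it in place
grep -rl -e '-posix' . | xargs sed -i 's/-posix/-D_POSIX_SOURCE/g'
cat Makefile
```

Run it from the top of the source tree so grep -rl catches all build files.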

Flock Rochester

I’m not going to do a day-by-day outline of what I did at Flock; if I did, it would basically be “blah blah blah I talked a lot to a lot of people about a lot of tech topics”, and anyone that’s ever met me would have guessed that! It was, as in the past, a great conference. A big shout out to the organisers for an excellent event with two excellent evening events! So I’m going to give a brief summary of my talks and link to slides and video recordings.

My first talk was an overview of the state of aarch64 and POWER as secondary architectures. The slides aren’t particularly interesting as they’re just words for discussion points. The video has all the interesting bits. A related talk was Dennis’s Standardising ARMv7 booting with a memorial quote by Jon Masters 😉

My second talk was about using Fedora as a base for IoT. Slides are here but the talk was quite a bit different to the slides and is more interesting so I suggest watching the video.

I also actively participated in Dennis’s Fedora Release Engineering going forward because well obviously I’m part of it 😉 and it was interesting for where we’re going, and even where we’ve come from in the last year or so :)

Finally I loved the Keynote Be an inspiration, not an impostor by Major Hayden. He’s published a follow up blog post with a FAQ too.

The least memorable bit was the terrible Amtrak ride back to New York City. On the plus side it makes the worst of the British National Rail service seem amazingly on time! NEVER AGAIN!

Trackpoint falling off Page 10? NOT ON MY WATCH

Sure, sure the mouse won on desktop and the trackpad won on laptop. IBM's magnificent Trackpoint is a tiny minority share of the pointer market on both, maybe even headed for extinction. Even Lenovo has been leaving it off some 'Thinkpads'.

I use a Trackpoint keyboard on my workstation. I own Thinkpads almost entirely because of Trackpoint (and because they used to have a keyboard layout that didn't suck. Too bad about that part. It's another reason to fear for Trackpoint when Lenovo's goal seems to be 'corner the market on cheap black Macbooks').

Anyway I realized a few things the other day...

Although I do use a mouse in GIMP and Inkscape about half the time, I use Trackpoint exclusively for everything else. Even gaming. I just can't get the same kind of reaction speed or precision with the mouse. It feels wrong. Don't even get me started about trackpads-- I have to actively battle those things.

Anyway, I've been using Trackpoint (and the seven-row Thinkpad keyboard) for 22 years, ever since the original Thinkpad 700C of 1993. That's more than half my life. I don't want to give either one up, especially not Trackpoint. Not even on the desktop.

And you know what? I don't have to. I'm an engineer. If it comes to it, I can make them my own bloody self.

(The patents are also expired, which means I can sell them ;-)

All that said, there are a few things I dislike about desktop Trackpoints currently available. If I'm going to make one, I'm going to make the one I actually want.

  1. Stick in the usual place, but one that can take soft dome, rim or cat-tongue caps. In fact, paging Captain Obvious, it should use the same caps as sold for Thinkpads (the RT3200 keyboard gets this right). The weird narrow-mount Trackpoint II caps used by the M13 are getting impossible to find anyway.
  2. Speaking of the M13 (which is the most disappointing of all Model M keyboards-- especially the black ones made by Maxi Switch, which were often of crap build quality), having only two buttons was unforgivable even at the time.
  3. The three-button Trackpoint IV is better but still an obsolete configuration. I've had a middle-button scroll wheel on my mouse for about 15 years now. Scroll wheel won. It's standard equipment. It's hard to live without.

    Sure, sure, there are tricks to have side and vertical scroll on Trackpoint IV, but that's overloading the functionality in ways that mistrigger constantly. 'Push harder to scroll' is a decision right up there with 'we don't need physical mouse buttons, we'll just paint colored strips on the trackpad'.

  4. Naturally, this all should be implemented in accessible code on something like an Xwhatsit, Teensy, Arduino, etc, that can output native USB or even PS2 without translation layers. Enough of this 'you can only have Trackpoint by buying a licensed chip with the firmware already on it'.
  5. A Trackpoint with scroll wheel must look, feel and seem right. It should make purists think "what took so long?" not scream bloody murder.

So how about something simple and straightforward like this?

(Another possible alternative is to keep the middle button as it usually is on a modern Trackpoint IV, and put the wheel in the middle of a split spacebar. That prevents the problem of 'scroll when I want to click' and vice versa. I think the placement is significantly inferior, however. I mention it mainly to establish prior art ;-)

Silly as it seems, I think a blue rubber O-ring or stripe on the wheel is key visually. It ties the whole room together. I'm so totally making this thing. And while I'm at it... it's going into a model F.

It's so obvious. It's so obviously right. WHY HAS NO ONE ELSE MADE THIS YET?

  1. Your keyboard's integrated pointing device just became even more lewd.

(a final aside: Peter Bright at ArsTechnica has a fantastic shades-of-Jon-Stewart rant about Lenovo's recent keyboards. It captures my feelings on the subject quite well.)

Cross-compiling a PowerPC64 LE kernel and hitting a GCC bug

Being new at OzLabs I’m dipping my toes into various projects and having a play with PowerPC and so I thought I’d cross-compile the Linux kernel on Fedora. Traditionally PowerPC has been big endian, however it also supports little endian so I wanted to build all the things.

Fedora uses a single cross toolchain that can build all four variants, whereas Debian/Ubuntu splits this out into two different toolchains (a BE and an LE one).

Install dependencies in Fedora:
$ sudo dnf install gcc make binutils-powerpc64-linux-gnu gcc-powerpc64-linux-gnu gcc-c++-powerpc64-linux-gnu bc ncurses-devel

Get the v4.2 kernel:
$ git clone https://github.com/torvalds/linux.git --branch v4.2 --depth 1 && cd linux

Successful big endian build of the kernel, using the default config for pseries:
$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make pseries_defconfig
$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make -j$(nproc)
# clean after success
$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make clean
$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make mrproper

Building a little endian kernel however, resulted in a linker problem:
$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make pseries_defconfig
$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make menuconfig
# change architecture to little endian:
# Endianness selection (Build big endian kernel) --->
# (X) Build little endian kernel
$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make V=1

Here was the result:
powerpc64-linux-gnu-gcc -mlittle-endian -mno-strict-align -m64 -Wp,-MD,arch/powerpc/kernel/vdso64/.vdso64.so.dbg.d -nostdinc -isystem /usr/lib/gcc/powerpc64-linux-gnu/5.2.1/include -I./arch/powerpc/include -Iarch/powerpc/include/generated/uapi -Iarch/powerpc/include/generated -Iinclude -I./arch/powerpc/include/uapi -Iarch/powerpc/include/generated/uapi -I./include/uapi -Iinclude/generated/uapi -include ./include/linux/kconfig.h -D__KERNEL__ -Iarch/powerpc -DHAVE_AS_ATHIGH=1 -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -std=gnu89 -msoft-float -pipe -Iarch/powerpc -mtraceback=no -mabi=elfv2 -mcmodel=medium -mno-pointers-to-nested-functions -mcpu=power7 -mno-altivec -mno-vsx -mno-spe -mspe=no -funit-at-a-time -fno-dwarf2-cfi-asm -mno-string -Wa,-maltivec -fno-delete-null-pointer-checks -O2 --param=allow-store-data-races=0 -Wframe-larger-than=2048 -fno-stack-protector -Wno-unused-but-set-variable -fomit-frame-pointer -fno-var-tracking-assignments -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -Werror=implicit-int -Werror=strict-prototypes -Werror=date-time -DCC_HAVE_ASM_GOTO -Werror -shared -fno-common -fno-builtin -nostdlib -Wl,-soname=linux-vdso64.so.1 -Wl,--hash-style=sysv -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(vdso64.so)" -D"KBUILD_MODNAME=KBUILD_STR(vdso64.so)" -Wl,-T arch/powerpc/kernel/vdso64/vdso64.lds arch/powerpc/kernel/vdso64/sigtramp.o arch/powerpc/kernel/vdso64/gettimeofday.o arch/powerpc/kernel/vdso64/datapage.o arch/powerpc/kernel/vdso64/cacheflush.o arch/powerpc/kernel/vdso64/note.o arch/powerpc/kernel/vdso64/getcpu.o -o arch/powerpc/kernel/vdso64/vdso64.so.dbg
/usr/bin/powerpc64-linux-gnu-ld: arch/powerpc/kernel/vdso64/sigtramp.o: file class ELFCLASS64 incompatible with ELFCLASS32
/usr/bin/powerpc64-linux-gnu-ld: final link failed: File in wrong format
collect2: error: ld returned 1 exit status
arch/powerpc/kernel/vdso64/Makefile:26: recipe for target 'arch/powerpc/kernel/vdso64/vdso64.so.dbg' failed
make[2]: *** [arch/powerpc/kernel/vdso64/vdso64.so.dbg] Error 1
scripts/Makefile.build:403: recipe for target 'arch/powerpc/kernel/vdso64' failed
make[1]: *** [arch/powerpc/kernel/vdso64] Error 2
Makefile:949: recipe for target 'arch/powerpc/kernel' failed
make: *** [arch/powerpc/kernel] Error 2

All those files were 64bit, however:
arch/powerpc/kernel/vdso64/cacheflush.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped
arch/powerpc/kernel/vdso64/datapage.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped
arch/powerpc/kernel/vdso64/getcpu.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped
arch/powerpc/kernel/vdso64/gettimeofday.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped
arch/powerpc/kernel/vdso64/note.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped
arch/powerpc/kernel/vdso64/sigtramp.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped

An strace of the failing powerpc64-linux-gnu-gcc command above showed that collect2 (and ld) were being called with an option setting the format to 32bit:
24904 execve("/usr/libexec/gcc/powerpc64-linux-gnu/5.2.1/collect2", ["/usr/libexec/gcc/powerpc64-linux"..., "-plugin", "/usr/libexec/gcc/powerpc64-linux"..., "-plugin-opt=/usr/libexec/gcc/pow"..., "-plugin-opt=-fresolution=/tmp/cc"..., "--sysroot=/usr/powerpc64-linux-g"..., "--build-id", "--no-add-needed", "--eh-frame-hdr", "--hash-style=gnu", "-shared", "--oformat", "elf32-powerpcle", "-m", "elf64lppc", "-o", ...], [/* 66 vars */] <unfinished>

Alan Modra tracked it down to some 32bit hard-coded entries in GCC sysv4.h and sysv4le.h and submitted a patch to the GCC mailing list (Red Hat bug).

I re-built the Fedora cross-gcc package with his patch and it solved the linker problem for me. Hurrah!

Install JDownloader on Fedora 22 (64-bit) in 3 simple steps.

Posted on 23 August, 2015

Hi, follow these simple steps for a correct installation of JDownloader 2 BETA:


  1. Download JDownloader from its official site: http://installer.jdownloader.org/JD2Setup_x64.sh
  2. Run the file JD2Setup_x64.sh from the terminal (./JD2Setup_x64.sh). (You can also double-click the file and choose: run in a terminal.)
  3. Complete the installation (it installs to /home/your-username/.jd2)
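
The steps above as terminal commands; since the real installer is interactive and needs a network, this sketch replaces the download with a local stub (the real command is shown commented out):

```shell
cd /tmp
# Step 1 (real use): curl -LO http://installer.jdownloader.org/JD2Setup_x64.sh
printf '#!/bin/sh\necho "installing to $HOME/.jd2"\n' > JD2Setup_x64.sh   # stub installer
# Step 2: make it executable and run it; step 3 then installs into ~/.jd2
chmod +x JD2Setup_x64.sh
./JD2Setup_x64.sh
```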

Enjoy.

JDownloader is a download manager specially designed to automate downloading files from file-hosting portals such as Mega or Mediafire.
With JDownloader the download process is much simpler. You only have to add all the links for the content you want to download and start the download; the program takes care of everything: it opens the link, waits the required time, validates the messages and starts the download. And so on for every link in the queue.
From its options you can add or remove links, save them to resume a download at any moment, configure a Premium user to speed up downloads, and limit bandwidth, among other possibilities.

PS: If you have any problem or comment, write below and we will help you.


Hello, People!

Welcome to my blog.

Welcome!

Here I will talk about all the topics that interest me, from my point of view (a bit of everything; I hope you enjoy reading the blog). I will also explain and give my opinion on Free Software and Fedora, and on what I learn day by day. I hope you find it useful.

Today I am sharing a screenshot of how my desktop looks.

Desktop by yosef7, August 2015


September 02, 2015

All systems go
New status good: Everything seems to be working, for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Package maintainers git repositories, Account System, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
There are scheduled downtimes in progress
New status scheduled: scheduled outage for services: Fedora Wiki, Fedora People, Zodbot IRC bot, The Koji Buildsystem, Darkserver, Tagger, Package Database, Fedora pastebin service, Blockerbugs, Badges, FedoraHosted.org Services, Mirror Manager, Koschei Continuous Integration, Ipsilon, Mirror List, Package maintainers git repositories, Account System, Fedora websites, Documentation website, COPR Build System, Package Updates Manager, Ask Fedora, Fedora Packages App, FreeMedia, Fedora Messaging Bus, Fedora elections, Mailing Lists, Fedora Calendar
Write the Docs 2015

I’ve just come home from beautiful Prague, where the Write the Docs conference took place on 31 August – 1 September.

Mikey Ariel invited me to be a volunteer and help out. And though I’m not a technical writer, I saw it as an opportunity to get the hang of things (conference-wise), travel a bit and meet a lot of talented and inspiring people from around the world. Here they are, btw ;)


Being a volunteer, I didn’t get to see all the talks. That’s not really a problem: they were recorded on video and hopefully will soon be available. But I did get to see all the people, as I helped with registration and organizational matters. Also, this being a writers’ conference, many attendees were taking notes. And quite surprisingly, among the talks I did attend, some happened to be interesting even for me. Here’s me helping out:


Writing documentation is not only about writing; it is actually a lot about layout, accessibility, UX and UI, too. So I enjoyed listening to Beth Aitman, for example (here are her slides). Among the most memorable were Elijah Caine, with his talk about writing emails, which I really hope more people get to hear, and Christina Elmore, talking about creative problem solving. One of my personal favorites was a lightning talk by Marcin Warpechowski about laptop stickers! TL;DR - stickers are a great way to engage employees and the community! It got me (and actually everybody) even more excited about stickers and willing to create some. GitHub’s Octocat also contributed to my feelings about stickers: they actually produce a special version for every conference they attend! Also, I think it was the ladies from GitHub taking most of the notes (or maybe I just happened to sit behind them ;) ).


As a girl working at Red Hat, I got really excited to see how many women actually work as technical writers, and that possibly half (if not more) of the attendees and speakers were beautiful ladies! This conference is really special in that way; as I’ve been told, IT conferences usually get a majority of male attendees. I truly wish this will change.

Frankly, although documentation is not really my cup of tea, I still enjoyed my time there, especially the feeling of community and the inspiration coming not only from the speakers but from everybody being passionate about what they do. It was an amazing experience, and hopefully in the future I will be able to attend more conferences, some of them about what I do and am passionate about. Like this one, hopefully ;) And I’ve already signed up as a volunteer for DevConf 2016.

PS We had a contest of badge bling-blings and here’s my submission (didn’t win, no problem!)


PPS Some of the photos were taken by Jiri Folta.


Day 4 of Flock 2015

Day four of Flock started at 10AM, later than the usual 9AM, which was really good, as everyone needed that extra hour of sleep. I was generally getting up around 5AM, but I managed to get enough sleep that morning. I came down to the lobby and found people slowly moving into the different rooms. I went to the SPC workshop by Dan Walsh. The session started in a very informal way. As I had already missed his talk on the same topic due to a clash with another talk, this was my chance to catch up on his updates. I also found more copies of the “Containers coloring book”, another excellent collaboration between Mizmo and Dan. Feel free to download and print the PDF copy. It really explains the security ideas in layman’s terms.

During lunch I went out with Kevin, Patrick, and Pierre-Yves. The salad was one of the best I have had, and it was heavy too. I came back to the venue with a full stomach; only Patrick’s explanation of a remote client authentication system made sure that I did not fall asleep. He also helped me enable 2-factor authentication for my laptop’s drive encryption. We had many more discussions about best practices and how to stay paranoid about security :) He also showed me the great documentation from the python-cryptography project. I will explain the use case in a future blog post.

The day ended with another trip to the Belgian beer place. After dinner, many went to more social interactions. But I chose to come back as I had to wake up early next day for the next part of my road trip.

This Flock seems to have been very useful, as many discussions happened, which in turn helped resolve many open issues. We also added many new items to our TODO lists, but that is what we expect from any good conference like this one. Having the event venue in the same hotel also helped a lot; many got the required sleep without spending time going back and forth between venue and hotel.

Impostor syndrome talk: FAQs and follow-ups

I’ve had a great time talking to people about my “Be an inspiration, not an impostor” talk that I delivered in August. I spoke to audiences at Fedora Flock 2015, Texas Linux Fest, and at Rackspace. The biggest lesson I learned is that delivering talks is exhausting!

Frequently Asked Questions

Someone asked a good one at Fedora Flock:

How do you deal with situations where you are an impostor for a reason you can’t change? For example, if you’re the only woman in a male group or you’re the youngest person in a mostly older group?

I touched on this a bit in the presentation, but it’s a great question. This is one of those times where you have to persevere and overcome the things you can’t change by improving in all of the areas where you can change.

For example, if you’re the youngest in the group, find ways to relate to the older group. Find out what they value and what they don’t. If they prefer communication in person over electronic methods, change your communication style and medium. However, you shouldn’t have to change your complete identity just for the rest of the group. Just make an adjustment so that you get the right response.

Also, impostor syndrome isn’t restricted to a particular gender or age group. I’ve seen it in both men and women in equal amounts, and I’ve even seen it in people with 40 years of deep experience. It affects us all from time to time, and we need structured frameworks (like OODA) to fight it.

How do I battle impostor syndrome without becoming cocky and overconfident?

The opposite of impostor syndrome, often called the Dunning-Kruger effect, is just as dangerous. Go back to the observe and orient steps of the OODA loop (see the slides toward the end of the presentation) to be sure that you’re getting good feedback from your peers and leaders. Back up your assertions with facts and solid reasoning to avoid cognitive bias. Bounce those ideas and assertions off the people you trust.

When I make an assertion or try to get someone else to change what they’re doing, I’ll often end with “Am I off-base here?” or “Let me know if I’m on the right track” to give others an opportunity to provide criticism. The added benefit is that these phrases could drag someone with impostor syndrome out of the shadows and into the discussion.

That leads into another good question I received:

How can we reduce impostor syndrome in open source communities as a whole?

The key here is to find ways to get people involved, and then get them more involved over time. If someone is interested in participating but they aren’t sure how to start, come up with ways they can get involved in less-formal ways. This could be through bug triaging, fixing simple bugs, writing documentation, or simply joining some IRC meetings. I’ve seen several communities go through a process of tagging bugs with “easy” tags so that beginners can try to fix them.

Another more direct option is to call upon people to do certain things in the community and assign them a mentor to help them do it. If someone isn’t talking during an IRC meeting or piping up on a mailing list, call them out — gently. It could be something as simple as: “Hey, [name], we know you’re knowledgeable in [topic]. Do you think this is a good idea?” Do that a few times and you’ll find their confidence to participate will rise quickly.

Follow-ups

Insides vs. outsides

Someone stopped me outside the talk room at Texas Linux Fest and said a leader at his church summarized impostor syndrome as “comparing your insides to someone else’s outsides”. That led me to do some thinking.

Each and every one of us has strengths and weaknesses. I’d wager that we all have at least one vice (I have plenty), and there are things about ourselves that we don’t like. Everyone has insecurities about something in their life, whether it’s personal or professional. These are things we can’t see from looking at someone on the outside. We’re taking our laundry list of issues and comparing it to something we think is close to perfection.

Don’t do that. It’s on my last slide in the presentation.

You know at least one thing someone else wants to know

After doing the talk at Rackspace, I was pulled into quite a few hallway conversations and I received feedback about my presentation. In addition, many people talked about their desire to get up and do a talk, too. What I heard most often was: “I want to do a talk, but I don’t know what to talk about.”

It reminds me of a post I wrote about writing technical blogs. There is at least one thing you know that someone else wants to know. You might be surprised that the most-visited post on my blog is an old one about deleting an iptables rule. Deleting an iptables rule is an extremely basic step in system administration, but it’s tough to remember how to do it if you don’t use the iptables syntax regularly.

Rackspace holds Tech Talk Tuesdays during lunch at our headquarters in San Antonio each week. It’s open to Rackers and escorted guests only for now, but our topic list is wide open. Rackers have talked about highly technical topics and they’ve also talked about how to brew beer. I’ve encouraged my coworkers to think about something within their domain of expertise and deliver a talk on that topic.

Talk about your qualifications and experience without bragging

You can be humble and talk about your strengths at the same time. They aren’t mutually exclusive. It can be a challenge to bring these things up during social settings, especially job interviews. My strategy is to weave these aspects about myself into a story. Humans love stories.

As an example, if you’re asked about your experience with Linux, tell a short story about a troubleshooting issue from your past and how you solved it. If you’re asked about your Python development experience, talk about a project you created or a hard problem you solved in someone else’s project. Through the story, talk about your thought process when you were solving the problem. Try your best to keep it brief. These stories will keep the other people in the room interested and it won’t come off as bragging.

The post Impostor syndrome talk: FAQs and follow-ups appeared first on major.io.

The summer is over, the wallpapers are not
I know that, according to the calendar, the summer is over, but looking outside it looks and feels like the middle of summer. The truth is, for the last month I was on the road all the time, so even though the pictures were taken a while ago, only now have I had the opportunity to edit them as desktop wallpapers for a blog post. If you like any of them, feel free to use them (CC-BY-SA).
[Five sea wallpaper images]
Factoring RSA Keys With TLS Perfect Forward Secrecy

What is being disclosed today?

Back in 1996, Arjen Lenstra described an attack against an optimization (called the Chinese Remainder Theorem optimization, or RSA-CRT for short). If a fault happened during the computation of a signature (using the RSA-CRT optimization), an attacker might be able to recover the private key from the signature (an “RSA-CRT key leak”). At the time, use of cryptography on the Internet was uncommon, and even ten years later, most TLS (or HTTPS) connections were immune to this problem by design because they did not use RSA signatures. This changed gradually, when forward secrecy for TLS was recommended and introduced by many web sites.
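The core arithmetic of Lenstra's attack is simple enough to sketch with a toy key. The numbers below are purely illustrative (a real attack uses faulty signatures captured from TLS handshakes), but the gcd step at the end is the actual key-recovery computation:

```python
# Sketch of Lenstra's RSA-CRT fault attack on a toy key.
from math import gcd

# Toy RSA key (textbook-sized, illustrative only)
p, q = 61, 53
n = p * q              # 3233
e = 17
d = 413                # private exponent: e*d = 1 mod lcm(p-1, q-1)

m = 65                 # message (hash) to sign

# Correct CRT signature: partial signatures mod p and mod q,
# recombined with Garner's formula
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)
q_inv = pow(q, -1, p)
s = sq + q * ((q_inv * (sp - sq)) % p)
assert pow(s, e, n) == m       # valid signature

# Faulty signature: a fault corrupts only the mod-p half
sp_bad = (sp + 1) % p
s_bad = sq + q * ((q_inv * (sp_bad - sq)) % p)

# The verifier notices s_bad^e != m, but an attacker learns a factor:
# s_bad^e = m (mod q) still holds, so gcd(s_bad^e - m, n) reveals q.
factor = gcd(pow(s_bad, e, n) - m, n)
print("recovered factor:", factor)
```

This is why the standard hardening (verifying the signature before releasing it) closes the leak: a faulty signature is never sent.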

We evaluated the source code of several free software TLS implementations to see if they implement hardening against this particular side-channel attack, and discovered that it is missing in some of these implementations. In addition, we used a TLS crawler to perform TLS handshakes with servers on the Internet, and collected evidence that this kind of hardening is still needed, and missing in some of the server implementations: We saw several RSA-CRT key leaks, where we should not have observed any at all.

The technical report, “Factoring RSA Keys With TLS Perfect Forward Secrecy”, is available in PDF format.

What is the impact of this vulnerability?

An observer of the private key leak can use this information to cryptographically impersonate the server, after redirecting network traffic, conducting a man-in-the-middle attack. Either the client making the TLS handshake can see this leak, or a passive observer capturing network traffic. The key leak also enables decryption of connections which do not use forward secrecy, without the need for a man-in-the-middle attack. However, forward secrecy must be enabled in the server for this kind of key leak to happen in the first place, and with such a server configuration, most clients will use forward secrecy, so an active attack will be required for configurations which can theoretically lead to RSA-CRT key leaks.

Does this break RSA?

No. Lenstra’s attack is a so-called side-channel attack, which means that it does not attack RSA directly. Rather, it exploits unexpected implementation behavior. RSA, and the RSA-CRT optimization with appropriate hardening, is still considered secure.

Are Red Hat products affected?

The short answer is: no.

The longer answer is that some of our products do not implement the recommended hardening that protects against RSA-CRT key leaks. (OpenSSL and NSS already have RSA-CRT hardening.) We will continue to work with upstream projects and help them to implement this additional defense, as we did with Oracle in OpenJDK (which led to the CVE-2015-0478 fix in April this year). None of the key leaks we observed in the wild could be attributed to these open-source projects, and no key leaks showed up in our lab testing, which is why this additional hardening, while certainly desirable to have, does not seem critical at this time.

In the process of this disclosure, we consulted some of our partners and suppliers, particularly those involved in the distribution of RPM packages. They indicated that they already implement RSA-CRT hardening, at least in the configurations we use.

What would an attack look like?

The attack itself is unobservable because the attacker performs an off-line mathematical computation on data extracted from the TLS handshake. The leak itself could be noticed by an intrusion detection system if it checks all TLS handshakes for mathematical correctness.

For the key leaks we have observed, we do not think there is a way for remote attackers to produce key leaks at will, in the sense that an attacker could manipulate the server over the network in such a way that the probability of a key leak in a particular TLS handshake increases. The only thing the attacker can do is to capture as many handshakes as possible, perhaps by initiating many such handshakes themselves.

How difficult is the mathematical computation required to recover the key?

Once the necessary data is collected, the actual computation is marginally more complicated than a regular RSA signature verification. In short, it is quite cheap in terms of computing cost, particularly in comparison to other cryptographic attacks.

Does it make sense to disable forward secrecy, as a precaution?

No. If you expect that a key leak might happen in the future, it could well have happened already. Disabling forward secrecy would enable passive observers of past key leaks to decrypt future TLS sessions, from passively captured network traffic, without having to redirect client connections. This means that disabling forward secrecy generally makes things worse. (Disabling forward secrecy and replacing the server certificate with a new one would work, though.)

How can something called Perfect Forward Secrecy expose servers to additional vulnerabilities?

“Perfect Forward Secrecy” is just a name given to a particular tweak of the TLS protocol. It does not magically turn TLS into a perfect protocol (that is, resistant to all attacks), particularly if the implementation is incorrect or runs on faulty hardware.

Have you notified the affected vendors?

We tried to notify the affected vendors, and several of them engaged in a productive conversation. All browser PKI certificates for which we observed key leaks have been replaced and revoked.

Does this vulnerability have a name?

We think that “RSA-CRT hardening” (for the countermeasure) and “RSA-CRT key leaks” (for a successful side-channel attack) is sufficiently short and descriptive, and no branding is appropriate. We expect that several CVE IDs will be assigned for the underlying vulnerabilities leading to RSA-CRT key leaks. Some vendors may also assign CVE IDs for RSA-CRT hardening, although no key leaks have been seen in practice so far.

F23 Cloud Base Test Day September 8th!

cross posted from this fedora magazine post

Hey everyone! Fedora 23 has been baking in the oven. The Fedora Cloud WG has elected to do a temperature check on September 8th.

For this test day we are going to concentrate on the base image. We will have vagrant boxes (see this page for how to set up your machine), qcow images, raw images, and AWS EC2 images. In a later test day we will focus on the Atomic images and Docker images.

The landing page for the Fedora Cloud Base test day is here. If you're available to test on the test day (or any other time) please go there and fill out your name and test results. Also, don't forget that you can use some of our new projects testcloud (copr link) and/or Tunir to aid in testing.

Happy testing and we hope to see you on test day!

Dusty

II OpenSource tools Workshop at Córdoba

Delegating certificate issuance in FreeIPA

FreeIPA 4.2 brings several certificate management improvements including custom profiles and user certificates. Along with the explosion in certificate use cases that are now supported comes the question of how to manage certificate issuance, along two dimensions: which entities can be issued what kinds of certificates, and who can actually request a certificate? The first aspect is managed via CA ACLs, which were explained in a previous article. In this post I detail how FreeIPA decides whether a requesting principal is allowed to request a certificate for the subject principal, and how to delegate the authority to issue certificates.

Self-service requests

The simplest scenario is a principal using cert-request to request a certificate for itself as the certificate subject. This action is permitted for user and host principals but the request is still subject to CA ACLs; if no CA ACL permits issuance for the combination of subject principal and certificate profile, the request will fail.

Implementation-wise, self-service works because there are directory server ACIs that permit bound principals to modify their own userCertificate attribute; there is no explicit permission object.

Hosts

Hosts may request certificates for any hosts and services that are managed by the requesting host. These relationships are managed via the ipa host-{add,remove}-managedby commands, and a single host or service may be managed by multiple hosts.

This rule is implemented using directory server ACIs that allow hosts to write the userCertificate attribute only when the managedby relationship exists. In the IPA framework, we conduct a permission check to see if the bound (requesting) principal can write the subject principal’s attribute. This is nicer (and probably faster) than interpreting the managedby attribute in the FreeIPA framework.

If you are interested, the ACI rules look like this:

dn: cn=services,cn=accounts,$SUFFIX
aci: (targetattr="userCertificate || krbPrincipalKey")(version 3.0;
      acl "Hosts can manage service Certificates and kerberos keys";
      allow(write) userattr = "parent[0,1].managedby#USERDN";)

dn: cn=computers,cn=accounts,$SUFFIX
aci: (targetattr="userCertificate || krbPrincipalKey")(version 3.0;
      acl "Hosts can manage other host Certificates and kerberos keys";
      allow(write) userattr = "parent[0,1].managedby#USERDN";)

As usual, these requests are also subject to CA ACLs.

Finally, subjectAltName dNSName values are matched against hosts (if the subject principal is a host) or services (if it’s a service); they are treated as additional subject principals and the same permission and CA ACL checks are carried out for each.

Users

FreeIPA’s Role Based Access Control (RBAC) system is used to assign certificate issuance permissions to users (or other principal types). There are several permissions related to certificate management:

Request Certificate

The main permission that allows a user to request certificates for other principals.

Request Certificate with SubjectAltName

This permission allows a user (one who already has the Request Certificate permission) to request a certificate with the subjectAltName extension (the check is skipped when the request is self-service or initiated by a host principal). Regardless of this permission, we comprehensively validate the SAN extension whenever it is present in a CSR (and always have), so I’m not sure why this exists as a separate permission. I proposed to remove this permission and allow SAN by default, but the conversation died.

Request Certificate ignoring CA ACLs (new in FreeIPA 4.2)

The main use case for this permission is where a certain profile is not appropriate for self-service. For example, if you want to issue certificates bearing some esoteric or custom extension unknown to (and therefore not validatable by) FreeIPA, you can define a profile that copies the extension data verbatim from the CSR. Such a profile ought not be made available for self-service via CA ACLs, but this permission will allow a privileged user to issue the certificates on behalf of others.

System: Manage User Certificates (new in FreeIPA 4.2.1)

Permits writing the userCertificate attribute of user entries.

System: Manage Host Certificates

Permits writing the userCertificate attribute of host entries.

System: Modify Services

Permits writing the userCertificate attribute of service entries.

There are other permissions related to revocation and retrieving certificate information from the Dogtag CA. It might make sense for certificate administrators to have some of these permissions but they are not needed for issuance and I will not detail them here.

The RBAC system is used to group permissions into privileges and privileges into roles. Users, user groups, hosts, host groups and services can then be assigned to a role. Let’s walk through an example: we want members of the user-cert-managers group to be able to issue certificates for users. The SAN extension will be allowed, but CA ACLs may not be bypassed.

It bears mention that there is a default privilege called Certificate Administrators that contains most of the certificate management permissions; for this example we will create a new privilege that contains only the required permissions. We will use the ipa CLI program to implement this scenario, but it can also be done using the web UI. Assuming we have a privileged Kerberos ticket, let’s first create a new privilege and add to it the required permissions:

ftweedal% ipa privilege-add "Issue User Certificate"
----------------------------------------
Added privilege "Issue User Certificate"
----------------------------------------
  Privilege name: Issue User Certificate

ftweedal% ipa privilege-add-permission "Issue User Certificate" \
    --permission "Request Certificate" \
    --permission "Request Certificate with SubjectAltName" \
    --permission "System: Manage User Certificates"
  Privilege name: Issue User Certificate
  Permissions: Request Certificate,
               Request Certificate with SubjectAltName,
               System: Manage User Certificates
-----------------------------
Number of permissions added 3
-----------------------------

Next we create a new role and add the privilege we just created:

ftweedal% ipa role-add "User Certificate Manager"
-------------------------------------
Added role "User Certificate Manager"
-------------------------------------
  Role name: User Certificate Manager

ftweedal% ipa role-add-privilege "User Certificate Manager" \
    --privilege "Issue User Certificate"
  Role name: User Certificate Manager
  Privileges: Issue User Certificate
----------------------------
Number of privileges added 1
----------------------------

Finally we add the user-cert-managers group (which we assume already exists) to the role:

ftweedal% ipa role-add-member "User Certificate Manager" \
    --groups user-cert-managers
  Role name: User Certificate Manager
  Member groups: user-cert-managers
  Privileges: Issue User Certificate
-------------------------
Number of members added 1
-------------------------

With that, users who are members of the user-cert-managers group will be able to request certificates for all users.

Conclusion

In addition to self-service, FreeIPA offers a couple of ways to delegate certificate request permissions. For hosts, the managedby relationship grants permission to request certificates for services and other hosts. For users, RBAC can be used to grant permission to manage user, host and service principals, even separately as needs dictate. In all cases except where the RBAC Request Certificate ignoring CA ACLs permission applies, CA ACLs are enforced.

Looking ahead, I can see scope for augmenting or complementing CA ACLs – which currently are concerned with the subject or target principal and care nothing about the requesting principal – with a mechanism to control which principals may issue requests involving a particular profile. But whether this is wanted remains to be seen; it is one of many possible improvements to FreeIPA’s certificate management, and all will have to be judged according to demand and impact.

Comments on the Alliance for Open Media, or, "Oh Man, What a Day"

I assume folks who follow video codecs and digital media have already noticed the brand new Alliance for Open Media jointly announced by Amazon, Cisco, Google, Intel, Microsoft, Mozilla and Netflix. I expect the list of member companies to grow somewhat in the near future.

One thing that's come up several times today: People contacting Xiph to see if we're worried this detracts from the IETF's NETVC codec effort. The way the aomedia.org website reads right now, it might sound as if this is competing development. It's not; it's something quite different and complementary.

Open source codec developers need a place to collaborate on and share patent analysis in a forum protected by attorney-client privilege, something the IETF can't provide. AOMedia is to be that forum. I'm sure some development discussion will happen there, probably quite a bit in fact, but pooled IP review is the reason it exists.

It's also probably accurate to view the Alliance for Open Media (the Rebel Alliance?) as part of an industry pushback against the licensing lunacy made obvious by HEVC Advance. Dan Rayburn at Streaming Media reports a third HEVC licensing pool is about to surface. To date, we've not yet seen licensing terms on more than half of the known HEVC patents out there.

In any case, HEVC is becoming rather expensive, and yet increasingly uncertain licensing-wise. Licensing uncertainty gives responsible companies the tummy troubles. Some of the largest companies in the world are seriously considering starting over rather than bet on the mess...

Is this, at long last, what a tipping point feels like?

Oh, and one more thing--

Today, just after announcing its membership in the Alliance for Open Media, Microsoft also quietly changed the internal development status of Vorbis, Opus, WebM and VP9 to indicate it intends to ship all of the above in the new Windows Edge browser. Cue spooky X-Files theme music.

September 01, 2015

Getting back into the groove after vacation

To those who have been filing bugs against packages I maintain and have not seen a response: I have been in the middle of moving (>1000 miles) and setting up for the last few days.

With the dust settling now, I will be back to take care of issues slowly.

Thanks!
"Counting the Steps" http://www.savagechickens.com/2015/08/counting-the-steps.html #amusements...
All systems go
New status good: Everything seems to be working, for services: The Koji Buildsystem, Koschei Continuous Integration, Package maintainers git repositories
There are scheduled downtimes in progress
New status scheduled: planned outage for services: The Koji Buildsystem, Koschei Continuous Integration, Package maintainers git repositories
D-bus signaling performance

While working on lvm-dubstep, the question was posed whether D-bus could handle the number of changes that can happen in a short period of time, especially PropertiesChanged signals when a large number of logical volumes or physical volumes are present on the system (e.g. 120K PVs and 10K+ LVs).  To test this idea I put together a simple server and client which simply tries to send an arbitrary number of signals as fast as it possibly can.  The number settled upon was 10K, because during early testing I was running into time-out exceptions when trying to send more in a row.  Initial testing was done using the dbus-python library, and even though the numbers seemed sufficient, people asked about sd-bus and sd-bus utilizing kdbus, so the experiment was expanded to include these as well.  Source code for the testing is available here.

 

Test configuration

  • One client listening for signals.
  • Python tests were run on a Fedora 22 VM.
  • Sd-bus was run on a similarly configured VM running Fedora 23 alpha utilizing systemd 222-2.  Kdbus was built as a kernel module from the latest systemd/kdbus repo.  F23 kernel version: 4.2.0-0.rc8.git0.1.fc23.x86_64.
  • I tried running all the tests on the F23 VM, but the Python tests were failing horribly with journal entries like: Sep 01 10:53:14 sdbus systemd-bus-proxyd[663]: Dropped messages due to queue overflow of local peer (pid: 2615 uid: 0)

 

The results:

  • Python utilizing the dbus-python library was able to send and receive 10K signals in a row, varying payload size from 32-128K, without any issues.  As I mentioned before, if I try to send more than that in a row, especially at larger payload sizes, I do get time-outs on the send.
  • With sd-bus without kdbus, I was only able to send payloads up to 512 bytes before the server would error out with: Failed to emit signal: No buffer space available.
  • With sd-bus with kdbus, the test completes, seemingly without error, but the number of signals received is not as expected and appears to vary with payload size.

Messages/second is the total number of messages divided by the total time to receive them.

l_dbus_msg_sec

 

MiB/Second is the messages per second multiplied by the payload size.

l_dbus_mib_sec

 

Average time delta is the time difference from when the signal was placed on the bus until it was read by the signal handler.  This shows quite a bit of buffering for the Python test case at larger payload sizes.

l_dbus_time_delta

 

Percentage of signals that were received by the signal handler.  As you can see, once the payload is > 2048 bytes, kdbus appears to silently discard signals, as nothing appeared in kernel output and the return codes were all good in user space.

l_dbus_percent_received
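For reference, the four plotted metrics can be derived from raw send/receive timestamps roughly as follows. This is a minimal sketch of the bookkeeping, not code from the test harness; the function name and event format are illustrative:

```python
# Derive throughput metrics from (sent_time, received_time) pairs,
# where received_time is None for signals that were dropped.

def summarize(events, payload_size):
    """Return (msgs/sec, MiB/sec, avg delta, % received)."""
    received = [e for e in events if e[1] is not None]
    # Messages/second: total received divided by total receive window
    duration = max(r for _, r in received) - min(r for _, r in received)
    msgs_per_sec = len(received) / duration
    # MiB/second: messages per second times payload size
    mib_per_sec = msgs_per_sec * payload_size / (1024 * 1024)
    # Average time delta: bus-to-handler latency, averaged
    avg_delta = sum(r - s for s, r in received) / len(received)
    # Percentage of signals that actually reached the handler
    pct_received = 100.0 * len(received) / len(events)
    return msgs_per_sec, mib_per_sec, avg_delta, pct_received

# Example: three 1 MiB signals, one silently dropped
stats = summarize([(0.0, 0.1), (0.5, 1.1), (0.9, None)], 1024 * 1024)
```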

Conclusions

  • The C implementation using sd-bus without kdbus performs slightly worse than dbus-python, which surprised me.
  • kdbus is by far the best performing with payloads < 2048 bytes.
  • Signals are by definition sent without a response.  Having a reliable way for the sender to know that a signal was not sent seems pretty important, so either kdbus has a bug in that no error is returned to the transmitter, or there is a bug in my test code.
  • The code likely has bugs and/or does something suboptimal; please send a pull request and I will incorporate the changes and re-run the tests.
Fedora Perú at CONEISC 2015 – Arequipa

For the second consecutive time, the Fedora Perú community was invited to participate in the XXIII CONEISC (Congreso Nacional de Estudiantes de Ingeniería de Sistemas y Computación), themed "Technological Entrepreneurship in the 21st Century". We jointly submitted a list of more than 15 talks and workshops, of which 10 were approved, listed below.

Bernardo C. Hermitaño Atencio

  • Perdiendo el miedo al terminal de Linux
  • Como usar Fedora y no morir en el Intento
  • Usuarios y Permisos en Linux con Fedora

Tonet Pascualet Jallo Colquehuanca

  • Hablemos de Android y su corazón Linux
  • Python en la Web: Flask

Anthony Angel Fernando Mogrovejo Mamani

  • Administración de paquetes con Spacewalk.
  • ¿Como ser un DEVOPS?
  • Introducción al Pentesting.

Alex Irmel Oviedo Solis

  • Tu propia nube con Openstack y Openshift.
  • Fedora Server Rolekit y Cockpit

All the activities followed the schedule published by the organizers: http://coneisc.pe/doc/cronograma.pdf

As was to be expected, being one of the only communities representing Linux distributions, our talks and workshops were well received. The organizers also gave us a booth where we could talk, answer questions, and present Fedora both as an operating system and as a community.

We should also mention the support of our collaborators Rene Lujano and Aly Yuliza Machaca, who helped us reach more participants through their contributions to the talks and the booth at the technology fair.

Attached are images from the event to give a better picture of what CONEISC 2015 week was like. We congratulate the organizers and hope to see everyone at future events.

Images from Wednesday, August 19

[Photo gallery]

Images from Thursday, August 20

[Photo gallery]

Images from Friday, August 21

[Photo gallery]


Fedora 23: no MP3 support at the moment

Users of Fedora 23, currently in development, have to do without MP3 support for the time being, because the gstreamer1-plugins-bad-freeworld package, which contains the MP3 plugin for gstreamer 1.x, cannot be installed.

Fedora 23 Internationalization test day today !!
We have planned the Fedora 23 i18n test day for today. Most of the details about how to run the test day are available on the test day page.

From the changes side, we have 3 changes this time.
The basic motivation behind this test day is to make sure all the essential components for languages are working fine.
  • Encodings
  • Fonts
    • Default selected fonts are appropriate for languages.
  • IMEs
    • IBus
    • Default input methods
  • Rendering & Printing
    • For complex scripts, left-to-right rendering, etc.
  • Locales
    • Locale available with default installation
    • Locale processing
  • I18n Tools
    • fonts-tweak-tool
    • dnf langpacks
    • IM settings
    • spell checkers 
Above are just a few top-of-mind items for languages. We request all Fedora users and developers to participate in the test day and make sure your language components work fine in Fedora 23.
Feels like more of an accomplishment than the previous 12 levels. Now for the next 1500 km. :) #ingress 

August 31, 2015

i18n Test Day coming up tomorrow (2015-09-01)

It’s time for another Test Day tomorrow (2015-09-01)! Continuing the “all those places in the world that aren’t America” theme with i18n, it’s the i18n Test Day! Here we’ll be testing things like complex language input, rendering of non-Latin text, and DNF langpack installation. If you have some time to stop by and help out, please do – we always want to make sure everyone using Fedora gets the best possible experience regardless of the language they speak!

As always for Test Days, the live action is in #fedora-test-day on Freenode IRC. If you don’t know how to use IRC, you can read these instructions, or just use WebIRC.

Working with the kernel keyring
The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It's convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they're locked down there's no way for even root to modify them.

But there's a corner case that can be somewhat confusing here, and it's one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be "possessed" by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes - if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don't want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn't create a new login session - when you're working with sudo, you're still working with key possession that's tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you're trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle this. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0's user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user read/write access to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you'll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0's keyring (because the permissions are 0x3f3f0000), we don't possess it - the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.
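To make these masks easier to read, here is a small Python sketch (my addition, not from the original post) that decodes a permission mask into the per-category rights described in keyrings(7): one byte each for possessor, user, group and other, with view/read/write/search/link/setattr bits within each byte.

```python
# Decode a kernel keyring permission mask (layout per keyrings(7)):
# byte 3 = possessor, byte 2 = user, byte 1 = group, byte 0 = other.
# Within each byte: view=0x01, read=0x02, write=0x04,
#                   search=0x08, link=0x10, setattr=0x20.
BITS = {0x01: "view", 0x02: "read", 0x04: "write",
        0x08: "search", 0x10: "link", 0x20: "setattr"}

def decode_perms(mask):
    out = {}
    for shift, who in ((24, "possessor"), (16, "user"),
                       (8, "group"), (0, "other")):
        byte = (mask >> shift) & 0xFF
        out[who] = [name for bit, name in BITS.items() if byte & bit]
    return out

# Default permissions on a newly created key:
print(decode_perms(0x3F010000))
```

Decoding 0x3f010000 shows why the setperm above fails: the user byte carries only "view", while all six rights sit in the possessor byte.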

But! There's a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we'll be part of another session and won't be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0's user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user's session keyring, because that's readable/writable by the unprivileged user - they'd be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately - UID 0 can read, modify and delete the key, other users can't.

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down - rkt will then refuse to run any images unless they're signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.

Hack fonts for Fedora and Epel

I just packaged the newly released Hack font and submitted it for upstream Fedora package review.
http://sourcefoundry.org/hack/

The copr repository can be found here:
https://copr.fedoraproject.org/coprs/heliocastro/hack-fonts

Enjoy

Fedora Group Chat on Telegram

I created a Telegram group chat for Flock 2015 and it turned out to be very popular. Around 70 people joined the chat which is half the attendance of the conference. It was also pretty useful, some found an adapter to borrow, some found a ride back to Boston etc.

When Flock was over, I was going to delete the group chat, but some people suggested that we could keep it as a general chat for Fedora users and I was like “why not”. So if you’re using Telegram and want to chat with other fellow Fedorians, join bit.ly/fedoratg.


Day 3 of Flock 2015

Woke up late, late enough to miss most of the morning keynote, even though I decided to skip breakfast. Spent the time talking till our “Cloud Working Group” meeting started. Brian Exelbierd took some excellent notes from the meeting. Many of the ideas/action items from the meeting are already being worked on. While the meeting was on, I saw an urgent ping about the billing on one of our AWS accounts going insane. The whole team jumped on the incident in the next room: first the running instances were taken down, then passwords/tokens were changed. A more detailed look at those instances revealed that they were running due to a bug in fedimg (which was already fixed in production, thanks to Ralph), so just terminating them was enough to stop any more damage. The whole process once again demonstrated why I feel proud to work with such an excellent team. Sometimes (read: always) being paranoid about security is important :) Anyway, I missed a big part of the cloud meeting due to this incident, but I was back before some important discussions took place.

Went out for lunch with a bunch of people from the cloud meeting room. Some good Ethiopian food, but sadly not spicy enough :) After lunch, most of the time was spent talking to people; these hallway tracks are always the most important part of any conference. We also enjoyed the amazing ginger ale Toshio brought over from South Carolina. Later many of us moved into the docs team tooling meeting, and also attended the GPG key signing party in the next room.

In the evening there was another party in the house of George Eastman :)

Disable warping scroll bars

Ever since Gnome 3 I was rather annoyed by the new scroll bar behavior that makes you jump to a place where you click, instead of moving one page in that direction. Most of the time, it's ok since you have a mouse wheel (and I can only guess that was the rationale behind this change) -- most of the time...

With a new laptop (and again with a trackpoint and without a scroll wheel) I was once again annoyed by this behavior and decided to look for a fix. And I found one.

The feature is called "primary button warps slider" and is present in Gtk2 and Gtk3. With Gtk3, the default changed to on. At least on Fedora, the Gtk2 theme also overrides it to true. So the fix is to disable it for both Gtk3 and Gtk2 (the latter is crucial to fix it in Firefox).

For Gtk3, edit or create ~/.config/gtk-3.0/settings.ini and add or set:

[Settings]
gtk-primary-button-warps-slider = false

For Gtk2 there is a small catch: edit ~/.gnome2/gtkrc-2.0. Note that editing ~/.gtkrc-2.0 does not work. The reason is that the latter is read before the theme file, so settings made there are overwritten, while the one in the ~/.gnome2 directory is read after the theme and thus allows overriding theme values. Thanks to the strace tool for helping me find this. So, fixing Gtk2 is then as simple as adding:

gtk-primary-button-warps-slider = 0

Note that it indeed needs to be zero, false does not work here.
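For reference, the two edits above can be applied non-interactively with a couple of shell commands; this sketch assumes neither file contains the key yet (otherwise edit the files by hand to avoid duplicate entries):

```shell
# Create the config directories if they don't exist yet.
mkdir -p ~/.config/gtk-3.0 ~/.gnome2
# Gtk3: boolean "false" in the [Settings] group.
printf '[Settings]\ngtk-primary-button-warps-slider = false\n' \
    >> ~/.config/gtk-3.0/settings.ini
# Gtk2: ~/.gnome2/gtkrc-2.0 is read after the theme, so this wins;
# note that the value must be 0 here, not "false".
printf 'gtk-primary-button-warps-slider = 0\n' >> ~/.gnome2/gtkrc-2.0
```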

I would guess this should be something for the awesome gnome-tweak-tool.

Update: Turns out that the Gtk2 trick with the ~/.gnome2/gtkrc-2.0 only worked on older versions. I had done the modifications on F20 and F22 at the same time. There seems to be no gtkrc file read after the theme file on F22 (according to strace logs). Therefore, the only way to fix this for Gtk2 (e.g., Thunderbird) on F22 is to modify the theme file. Pity!

All systems go
Service 'COPR Build System' now has status: good: Everything seems to be working.
Major service disruption
Service 'COPR Build System' now has status: major: Code issue prevent COPR from starting new jobs
10th FrOSCon 2015

A picture can say more than a thousand words…

And this is how the 10th edition of FrOSCon was: great, including all the free drinks, sausages, steaks and ice cream at the social event. The rest was, like all the years before, great talks, including Andrew Tanenbaum and Maddog Hall as speakers, and a nice but small exhibition. Only a good Fedora dev room was missing again. Maybe next year.

COSCUP 2015 Day 2

The second day of COSCUP 2015 was August 16, Sunday. Since we had FreedomKnight and zerng07 at the booth, and we did not find very interesting talks in the early morning, tonghuix and I gave ourselves a bit more rest and did not get up quite so early. We both found the talks this year not as attractive as last year's. Part of the reason might be that this year's theme, "Open Culture", is kind of too general, and it is not easy to talk about it. Or in a sense it is aligned with COSCUP's philosophy of being "more social than tech", which means it is more a chance for open source people to meet than to dive into specific technical topics.

Around the booth we found some interesting swag that might be useful for future Fedora events. One is the retractable network cable shown in the middle figure above. Such cables were distributed at COSCUP previously as gifts. I think it is quite nice because: 1) it is so useful that one can start using it directly at the venue; 2) the central circular part is perfect for branding, logos, etc.; 3) it is not expensive and can be cheap with mass production. Another nice piece of swag is shown in the rightmost figure above: a toy moe girl (optionally, with a toy laptop). As soon as she appeared on our neighboring booth, a crowd of people and cameras came around. Westerners might not understand, but moe culture is quite popular in East Asian countries, and it is getting mainstream. For example, in Taiwan you can find quite a few moe girls on various posters in public. Besides, ".moe" has become a registrable top-level domain name. Since she is so eye-catching, I guess it is worth trying to make one for Fedora. Regarding design, the Fedora moe girl might be a good starting point.

At the booth I also discussed Fedora community development with FreedomKnight and zerng07. zerng07 is getting busier with his work, and they are trying to recruit a new ambassador. It seems that currently the Taiwan contributors prefer to use the Facebook group of the Fedora Chinese community for discussion. I encouraged them to keep an eye on our mailing lists and attend the weekly IRC meetings, where activities are more visible to the Chinese community and the whole Fedora community. I also encouraged them to go out for better communication and cooperation. When the local community grows strong enough, Taiwan can be a good candidate for FUDCon APAC.

At around 15:00, almost all attendees gathered in the large hall to listen to jserv‘s last speech, “Retrospect on the Taiwan Open Source Ecosystem”. jserv is a long-term open source contributor in Taiwan, and he has contributed talks to COSCUP for all ten years. In the morning I caught his experience sharing on open source in education. To encourage new contributors, he decided this talk would be his last speech at COSCUP. In the talk, he introduced the history of open source development in Taiwan and showed quite a few impressive early projects by Taiwanese open source contributors. He also motivated the young generation in Taiwan to contribute more to open source around the world.

After that was the lightning talk session. At COSCUP, a lightning talk need not be a “talk” at all, because all kinds of performances are also welcome. I had known the rule for a while, but this time I finally had the chance to hear an ocarina show as a lightning talk by a COSCUP volunteer! Why an ocarina show? Simply because others learnt during face-to-face discussion that he can play the ocarina.

At last, after a short closing speech from the lead organizer came the group photo session. All volunteers were invited on stage. An unmanned aerial vehicle (UAV) was used for taking photos, which was really cool. By the way it was also cool when the UAV flew over your head!

All in all, I enjoyed the two days of COSCUP very much. Looking into the future, I believe COSCUP can continue to be a great chance for the Fedora Chinese community to gather. Besides, non-Chinese contributors are also welcome to join and take the opportunity to meet face-to-face and get things done.

All systems go
Service 'COPR Build System' now has status: good: Everything seems to be working.
There are scheduled downtimes in progress
Service 'COPR Build System' now has status: scheduled: Fedora Infra Cloud reboots in progress: https://fedorahosted.org/fedora-infrastructure/ticket/4871

August 30, 2015

Activities from Mon, 24 Aug 2015 to Sun, 30 Aug 2015

Activities

Activities Amount Diff to previous week
Badges awarded 791 +26.56%
Builds 16847 +35.49%
Copr build completed 3999 +03.04%
Copr build started 4010 +02.64%
Edit on the wiki 472 +25.53%
FAS user created 111 -21.28%
Meeting completed 28 -03.45%
Meeting started 27 -03.57%
New packages 130 -15.58%
Posts on the planet 90 +63.64%
Retired packages 0 NA
Updates to stable 382 -22.67%
Updates to testing 706 +03.98%

Top contributors of the week

Activities Contributors
Badges awarded idviare (13), minh (12), hhlp (10)
Builds pbrobinson (7501), sharkcz (1977), karsten (1334)
Copr build completed avsej (612), dvratil (355), region51 (226)
Copr build started avsej (612), dvratil (355), region51 (226)
Edit on the wiki mikedep333 (37), pwhalen (26), robatino (19)
Meeting completed danofsatx (6), nirik (6), roshi (6)
Meeting started nirik (3), roshi (2), sgallagh (2)
New packages  
Posts on the planet iranzo (20), admin (10), icon (10)
Retired packages  
Updates to stable siwinski (32), kalev (31), corsepiu (19)
Updates to testing jchaloup (69), remi (38), siwinski (26)
NIC Bonding

I’ve been playing about with NIC bonding on my storage server, which is currently running CentOS 6.7. To be honest, apart from the modes I can’t use because I don’t have decent switches, I can’t really tell any difference between them; I’m guessing that’s the whole point.

In my current setup I’m using balance-rr [mode 0], but I have also tried balance-tlb [mode 5]:

0 — Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available.

5 — Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave.

Perhaps as it’s only a home setup, I’m not passing the server enough work to notice a difference, but it seems to provide fault tolerance if I unplug one of the network cables.
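For reference, here is a minimal sketch of what this setup looks like in the CentOS 6 network-scripts files; the interface names, address and miimon value below are placeholders, not my actual configuration:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=balance-rr miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0
# (repeat for eth1 with DEVICE=eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Switching modes is then just a matter of changing BONDING_OPTS, e.g. to "mode=balance-tlb miimon=100", and restarting networking.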

Are you using bonding in an enterprise environment using CentOS/RHEL/Fedora? What mode are you using?

August 29, 2015

Looking for new maintainer for Fedora / EPEL ownCloud packages

So I’ve been maintaining ownCloud for a little while now. Unfortunately, I sat down today to try once again to update the package to the latest upstream (8.1.1), and somewhere in the second hour of insanely stupid PHP autoloader code, I just snapped. I can’t take this crap any more.

I only personally really needed OC for calendar and contact sync anyway, so I’ve set up Radicale instead: it’s written in Python and it doesn’t have a ridiculous forest of bundled crap.

Given that there are dozens of other things I could be spending my time on that I’d find more rewarding, I’m just not willing to do any further major updates of the Fedora / EPEL ownCloud package, I’m sorry. I’m willing to keep the current major versions (8.x in everything but EPEL 6, 7.x in EPEL 6) updated until they go EOL, at which point if no-one else is interested, I will orphan the package.

If anyone would like to take on the work of doing the 8.1 upgrade and maintaining the package in future, please do let me know and I’ll happily transfer it over. To do a decent job, though, you are going to need to know or be willing to learn quite a lot of intimate and incredibly annoying details about things like PHP class loading and how Composer works. If you don’t, for instance, know what it means for unbundling purposes when a PHP library specifies ‘classmap’ as the autoload mechanism in its composer.json file, and you’re not willing to spend your time learning, you probably don’t want to own this package. :)
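For anyone wondering what that looks like, here is a minimal, entirely hypothetical composer.json using the classmap autoload mechanism (the package name and paths are made up). With classmap, Composer scans the listed files and directories and hardcodes a class-to-file map when the autoloader is dumped, which is part of what makes clean unbundling painful:

```json
{
    "name": "example/somelib",
    "autoload": {
        "classmap": ["src/", "lib/legacy-helpers.php"]
    }
}
```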

I’m very sorry to folks who are using it, but I really can’t deal with the crap any more. If all you need is calendar/contact sync, there are easier ways. Check out Radicale or something like it.

Upstream does of course provide ownCloud packages in an OBS repo. They do not follow Fedora web app packaging policies or unbundling rules, and probably don’t work very well with SELinux. Switching from the Fedora/EPEL packages to the OBS ones is likely to require moving various things around and config file editing and stuff. I’m not going to document that, sorry. If anyone else does, though, that’d be great.

My KVM Forum 2015 talk: New qemu technology used in virt-v2v

<iframe allowfullscreen="true" class="youtube-player" frameborder="0" height="312" src="https://www.youtube.com/embed/DJedam7TJWo?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" type="text/html" width="500"></iframe>

All KVM Forum talks can be found here.


Pizza Bash

We spent three months working with systems and network engineering students from the Universidad Centroamericana (UCA) to rebuild motivation and cultivate interest in the use of Linux operating systems, the various GPL tools, and technology based on free software. After this period of follow-up and training, we gained new Fedora users (as was to be expected from an activity led by Fedora representatives XD), who in the future may become potential contributors to our community.



New badge: Fedora IT Author !
Fedora IT AuthorYou're an author for it.fedoracommunity.org, a site with discussions, news and docs in the Italian language
New badge: Websites.NEXT !
Websites.NEXTYou helped to rock Fedora.NEXT on the web. Thanks! The new websites look great!
New badge: FUDCon Cordoba 2015 Volunteer !
FUDCon Cordoba 2015 VolunteerYou helped organize FUDCon Cordoba 2015