Fedora People

Episode 10 - The super botnet that nobody can stop

Posted by Open Source Security Podcast on October 24, 2016 09:25 PM
Kurt and Josh discuss Dirty COW, the big IoT DDoS, and Josh can't pronounce Mirai or Dyn.

Download Episode

Show Notes

Fedora Join meeting 24 October 2016 - Summary

Posted by Ankur Sinha "FranciscoD" on October 24, 2016 08:26 PM

We had another fortnightly Fedora Join SIG meeting today. We had decided on the specific goal of the SIG the last time we met: in short, the SIG will set up and maintain channels where newbies can communicate with the community before they've begun contributing. We'll leave the tooling to CommOps, who are already working on this. Following up from then, this week we got together to see what tasks we should get on with. The links are here:

We began with a discussion of where the SIG needs to be mentioned for us to gain visibility. Of course, this discussion kept returning to tooling - wcidff, wiki pages, the website, and so on. We quickly agreed that we first need to make the SIG more visible within the community, so that when we begin to funnel newbies to the channels, we have enough folks to help them out. Only then would it make sense to advertise our presence on our public-facing pages.

So, for the next two weeks, we're working on:

  • find a way to get some metrics on the activity of the IRC channel - to see how many of us are there, how many newbies come in, what the most active times are and so on and so forth.

This is partially solved already. I remembered that the IRC support SIG used to have a weekly stats HTML page back in the day. nirik (who knows pretty much everything!) quickly told me that a package called pisg does this, and I've started playing with it already. An example stats page is here: https://ankursinha.fedorapeople.org/fedora-join-stats/201610.html. There are other alternatives too - superseriousstats, for example. I'll tinker with these a bit more to see if we get the stats we need (a minimal pisg configuration sketch follows the list below). Neither tool integrates with FAS, for example. The other tasks are about spreading the word to make the community better aware of the channels:

  • announce the SIG on the community blog
  • announce the SIG on the announce mailing list
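
As promised above, here's a minimal pisg configuration sketch - the channel name, log path, and log format are assumptions, so adjust them to wherever the real logs live:

# pisg.cfg - hypothetical paths and log format
<channel="#fedora-join">
    Logfile = "/home/stats/logs/fedora-join.log"
    Format = "irssi"
    OutputFile = "/home/stats/public_html/fedora-join.html"
</channel>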

We'll get these out in the next few days. Since you're reading this already, please do hang out in the channels we've set up. We've already got some newbies trickling into the IRC channel, and sometimes we miss their questions if we're not actively monitoring it:

If you do run into people that need help getting started, please send them over to one of our channels too.

The next stage of course would be to publicise these channels outside the community, but that's a few weeks away at the moment. That's it for this meeting. Have a good week ahead!

The perils of long development cycles

Posted by Tomasz Torcz on October 24, 2016 07:37 PM

As of today, the latest version of systemd is v231, released in July 2016. This is the version that will be in Fedora 25 (GA in three weeks). That's quite a long time between releases for systemd – we used to have a new version every two weeks.

During the hackfest at systemd.conf 2016, I've tried to tackle three issues biting me with Fedora 24 (v229, released in February this year) and F25. The outcome was… unexpected.

The first one was a minor issue in the networking part of the systemd suite. Data received via the LLDP protocol had its last letter cut off. I started searching through the networkd code, looking for this off-by-one error. But it looked different; the output I was seeing didn't match the code. Guess what? The LLDP presentation code was reworked shortly after v229. The rework fixed the issue, but Fedora 24 never received the backport. Which is fair, as the issue was cosmetic and not really important.

So I went on to another cosmetic issue which was annoying me. In the list-timers display, multi-byte characters in day names were causing columns to be misaligned. I had discussed the issue as it appeared in Fedora 25 (v231) with Zbyszek during the conference, and I was ready to fix it. Again, I wasn't able to find the code, because it was reworked after v231 and the problem was corrected. I just wasted some time digging through the codebase.

The last issue, which I didn't have time to chase, was more serious. Sometimes output from services stopped appearing in the journal. It turned out to be a genuine bug (#4408). Zbyszek fixed it last week, and the fix will be in v232. This is a serious flaw; the fix should be backported to v231 in Fedora 25 and other stable versions.

In summary, long development periods increase the need for backports, escalating the workload on maintainers. Delays cause contributors using stable versions to lose time digging through evolved code that may no longer be buggy. Prolonged gaps between stable releases make distributions ship old code. Even Fedora, which tries to be first.

Security Score Card For Fedora Infrastructure

Posted by Kevin Fenzi on October 24, 2016 05:41 PM

Josh is asking folks to send him their security score card via Twitter. Since I’ve been trying to blog more and like pontificating, I thought I would respond here in a blog post. 😉

There’s 4 parts to the scorecard:

  1. Number of staff
  2. Number of “systems”
  3. Lines of code
  4. Number of security people

For Fedora Infrastructure, some of these are pretty hard to answer, but here are some attempts:

  1. Fedora Infrastructure is an open organization. People who show up and start doing things are granted more and more permissions based on their merit. Sometimes people drift away to other things, sometimes new people show up. There are some people employed by Fedora’s primary sponsor, Red Hat, specifically to work on Fedora. Those account for 3.5 sysadmins, 5 application developers, 2 release engineers, and 2 design folks. Specific areas will potentially have lots more community folks working on them. So, answer: 13-130?
  2. This one is easier to quantify. We have (almost) everything in ansible, so right now our ansible inventory + some misc non ansible hosts is around 616 hosts.
  3. This is another one that’s difficult. We have a lot of applications (see https://apps.fedoraproject.org/). Some of them are just upstream projects we run instances of (mediawiki, askbot, etc). Others are things we are the primary developers of (fedocal, pagure, etc). It would be a fun project to look at all these and count up lines of code. Answer: dunno. ;(
  4. If this means full time security people working only on security issues, then 0. We do have an excellent security officer in Patrick, who is super smart and good at auditing and looking for issues before they bite us, but he’s not doing that full time. Other members of the sysadmin teams do security updates, monitor lists/errata, and watch logs for out-of-the-ordinary behavior, but that’s also not full time. So, answer: 0 or 1 or 3?

So, from this I think it would be nice to have a better idea of our applications (not lines of code so much as keeping better track of what we run and who knows each application). It would be awesome to get some full time security folks, but I am not sure that will be in the cards.

I’d like to thank Josh for bringing up the discussion… it’s an interesting one for sure.

Spark on Kubernetes at Spark Summit EU

Posted by Will Benton on October 24, 2016 02:54 PM

I’ll be speaking about Spark on Kubernetes at Spark Summit EU this week. The main thesis of my talk is that the old way of running Spark in a dedicated cluster that is shared between applications makes sense when analytics is a separate workload. However, analytics is no longer a separate workload — instead, analytics is now an essential part of long-running data-driven applications. This realization motivated my team to switch from a shared Spark cluster to multiple logical clusters that are co-scheduled with the applications that depend on them.

I’m glad for the opportunity to get together with the Spark community and present on some of the cool work my team has done lately. Here are some links you can visit to learn more about our work and other topics related to running Spark on Kubernetes and OpenShift:

From There to Here (But Not Back Again)

Posted by Red Hat Security on October 24, 2016 01:30 PM

Red Hat Product Security celebrated its 15th anniversary this summer, and while I cannot claim to have been with Red Hat for that long (although I’m coming up on 8 years myself), I’ve watched the changes from the “0day” of the Red Hat Security Response Team to today. In fact, our SRT was the basis for the security team that Mandrakesoft started back in the day.

In 1999, I started working for Mandrakesoft, primarily as a packager/maintainer. The offer came, I suspect, because of the amount of time I spent volunteering to maintain packages in the distribution. I was also writing articles for TechRepublic at the time, so I ended up being responsible for some areas of documentation as well, contributing to the manual we shipped with every boxed set we sold (remember when you bought these things off the shelf?).

Way back then, when security flaws were few and far between (well, the discovery of these flaws, not the existence of them, as we’ve found much to our chagrin over the years), there was one individual at Mandrakesoft who would apply fixes and release them. The advisory process was ad-hoc at best, and as we started to get more volume it was taking his time away from kernel hacking and so they turned to me to help. Having no idea that this was a pivotal turning point and would set the tone and direction of the next 16 years of my life, I accepted. The first security advisory I released for Linux-Mandrake was an update to BitchX in July of 2000. So in effect, while Red Hat Product Security celebrated 15 years of existence this summer, I celebrated my 16th anniversary of “product security” in open source.

When I look back over those 16 years, things have changed tremendously. When I started the security “team” at Mandrakesoft (which, for the majority of the 8 years I spent there, was a one-man operation!) I really had no idea what the future would hold. It blows my mind how far we as an industry have come and how far I as an individual have come as well. Today it amazes me how I handled all of the security updates for all of our supported products (multiple versions of Mandriva Linux, the Single Network Firewall, Multi-Network Firewall, the Corporate Server, and so on). While there was infrastructure to build the distributions, there was none for building or testing security updates. As a result, I had a multi-machine setup (pre-VM days!) with a number of chroots for building and others for testing. I had to do all of the discovery, the patching, backporting, building, testing, and the release. In fact, I wrote the tooling to push advisories, send mail announcements, build packages across multiple chroots, and more. The entire security update “stack” was written by me and ran in my basement.

During this whole time I looked to Red Hat for leadership and guidance. As you might imagine, we had to play a little bit of catchup many times and when it came to patches and information, it was Red Hat that we looked at primarily (I’m not at all ashamed to state that quite often we would pull patches from a Red Hat update to tweak and apply to our own packages!). In fact, I remember the first time I talked with Mark Cox back in 2004 when we, along with representatives of SUSE and Debian, responded to the claims that Linux was less secure than Windows. While we had often worked well together through cross-vendor lists like vendor-sec and coordinated on embargoed issues and so on, this was the first real public stand by open source security teams against some mud that was being hurled against not just our products, but open source security as a whole. This was one of those defining moments that made me scary-proud to be involved in the open source ecosystem. We set aside competition to stand united against something that deeply impacted us all.

In 2009 I left Mandriva to work for Red Hat as part of the Security Response Team (what we were called back then). Moving from a struggling small company to a much larger company was a very interesting change for me. Probably the biggest change and surprise was that Red Hat had the developers do the actual patching and building of packages they normally maintained and were experienced with. We had a dedicated QA team to test this stuff! We had a rigorous errata process that automated as much as possible and enforced certain expectations and conditions of both errata and associated packages. I was actually able to focus on the security side of things and not the “release chain” and all parts associated with it, plus there was a team of people to work with when investigating security issues.

Back at Mandriva, the only standard we focused on was the usage of CVE. Coming to Red Hat introduced me to the many standards that we not only used and promoted, but also helped shape. You can see this in CVE, and now DWF, OpenSCAP and OVAL, CVRF, the list goes on. Not only are we working to make, and keep, our products secure for our customers, but we apply our expertise to projects and standards that benefit others as these standards help to shape other product security or incident response teams, whether they work on open source or not.

Finally (as an aside and a “fun fact”) when I first started working at Mandrakesoft with open source and Linux, I got a tattoo of Tux on my calf. A decade later, I got a tattoo of Shadowman on the other calf. I’m really lucky to work on things with cool logos, however I’ve so far resisted getting a tattoo of the heartbleed logo!

I sit and think about that initial question that I was asked 16 years ago: “Would you want to handle the security updates?”. I had no idea it would send me to work with the people, places, and companies that I have. No question that there were challenges and more than a few times I’m sure that the internet was literally on fire but it has been rewarding and satisfying. And I consider myself fortunate that I get to work every day with some of the smartest, most creative, and passionate people in open source!

valgrind 3.12.0 and Valgrind@Fosdem

Posted by Mark J. Wielaard on October 24, 2016 01:13 PM

Valgrind 3.12.0 was just released with lots of exciting improvements. See the release notes for all the details. It is already packaged for Fedora 25.

Valgrind will also have a developer room at Fosdem on Saturday 4 February 2017 in Brussels, Belgium. Please join us, regardless of whether you are a Valgrind core hacker, Valgrind tool hacker, Valgrind user, Valgrind packager or hacker on a project that integrates, extends or complements Valgrind.

Please see the Call for Participation for more information on how to propose a talk or discussion topic.

Switchable / Hybrid Graphics support in Fedora 25

Posted by Hans de Goede on October 24, 2016 12:56 PM
Recently I've been working on improving hybrid graphics support for the upcoming Fedora 25 release. Although Fedora 25 Workstation will use Wayland by default for its GNOME 3 desktop, my work has been on hybrid gfx support under X11 (Xorg), as GNOME 3 on Wayland does not yet support hybrid gfx.

So no Wayland, but there are still a lot of noticeable hybrid gfx improvements, and users of laptops with hybrid gfx using the open-source drivers should have a much smoother user experience than before. Here is an (incomplete) list of generic improvements:

  • Fix the discrete GPU not suspending after using an external monitor, which halved laptop battery life

  • xrandr --listproviders no longer shows 3 devices instead of 2 on laptops with 2 GPUs

  • Hardware cursor support when both GPUs have active video outputs; previously X would fall back to software cursor rendering in this case, which would typically lead to a flickering, or entirely invisible, cursor at the top of the screen

Besides this, a lot of work has been done on fixing hybrid gfx issues in the modesetting driver. This is important since Fedora uses the modesetting driver on Skylake and newer Intel integrated gfx, as well as on Maxwell and newer Nvidia discrete GPUs. The following issues have been fixed in the modesetting driver, and thus on laptops with a Skylake CPU and/or a Maxwell or newer GPU (a small DRI_PRIME usage sketch follows the list):

  • Hide HW cursor on init, this fixes 2 cursors showing on some setups

  • Make the modesetting driver support DRI_PRIME=1 render offloading when the secondary GPU is using the modesetting driver

  • Fix misrendering (tiled vs linear) when using DRI_PRIME=1 render offloading and the primary GPU is using the modesetting driver

  • Fix GL apps running at 1 fps when shown on a video-output of the secondary GPU and the primary GPU is using the modesetting driver

  • Fix secondary GPU video output partial screen updates (part of the screen showing a previous frame) when the discrete GPU is the secondary GPU and the primary GPU is using the modesetting driver

  • Fix secondary GPU video output updates lagging (or sometimes the last frame simply not being shown at all because no further rendering is happening) when the discrete GPU is the secondary GPU and the primary GPU is using the modesetting driver
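
As a quick illustration of the render offloading mentioned above - this is the standard PRIME usage pattern, not something specific to these fixes:

# List the providers X knows about, then run a GL app on the secondary GPU
xrandr --listproviders
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"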

Note that this coming Wednesday (October 26th) we're having a Fedora Better Switchable Graphics Test Day; if you have a laptop with hybrid gfx, please join us to help further improve hybrid gfx support.

Take part in the Fedora 25 Cloud and Atomic images test day

Posted by Charles-Antoine Couret on October 24, 2016 12:16 PM

Today, Monday October 24th, is a day dedicated to a specific set of tests: the Fedora Cloud and Atomic images. During the development cycle, the QA team dedicates a few days to particular components or new features in order to surface as many problems on the subject as possible.

The team also provides a list of specific tests to run. You just have to follow them, compare your result with the expected result, and report it.

What is it?

The cloud images are in fact Fedora installation images dedicated to the cloud. Just as Workstation is the base edition and Server targets servers, Cloud is one of Fedora's products covering specific use cases and offering a coherent user experience around them.

The particularity of the cloud images is that they are lightweight, so they can be instantiated several times on the same machine via virtual machines or a similar solution.

Today's tests cover:

  • The system booting correctly, with SSH access open;
  • Updating the system atomically;
  • Rolling back an atomic update;
  • Launching applications via Docker;
  • Managing Docker's disk space.

How can you take part?

You can go to the test day page to list the available tests and report your results. The wiki page sums up how the day is organized.

If you hit a bug, you need to report it on Bugzilla. If you don't know how, don't hesitate to consult the corresponding documentation.

And even though a dedicated day is set aside for these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

Lack of availability

Posted by Paul Mellors [MooDoo] on October 24, 2016 10:21 AM

A few people have been inquiring where I’ve been online the last few weeks/months and am I ok?

Short answer – Yes I’m fine🙂

Longer [only slightly] answer – Yes I’m fine, my laptop broke. A couple of weeks ago, my laptop started powering off for no reason; it wouldn’t stay up for more than 2 mins at a time. To be honest it’s an old laptop, so apart from all the standard checks/taking apart/cleaning/brief investigation, it’s not worth spending any more time on it. As it was my workhorse, I don’t have a replacement yet. I’ve been loaned a laptop, which I’m grateful for, but really need to replace my own. For the time being then, I’ll not be online much in the evenings and weekends. Please bear with me. Guess I’m on the lookout for a cheap laptop replacement that runs Linux.

This has been a #scheduledpost announcement.

What is the GRUB2 boot loader?

Posted by Fedora Magazine on October 24, 2016 08:00 AM

There are various things that make up an operating system. In any operating system, one of the most critical parts is powering on the machine. During this process, the computer will execute a small program in read-only memory (ROM) to begin initiating the startup process. This small program is known by many names, but most often called a boot loader. In almost every Linux distribution, including Fedora, GRUB2 (or GRand Unified Bootloader 2) is the default boot loader. Even though it is a critical piece of the operating system, many people aren’t aware of the boot loader, all that goes into it, or how it can be customized.

Every computer operating system needs a kernel and a boot loader to load and boot the operating system. In a Linux system, the kernel and the initial ramdisk (or initrd) play major roles in loading the operating system from a disk drive into memory (or RAM). GRUB2 can boot most computer operating systems, including Windows, all Linux distributions, and nearly all Unix-like operating systems such as macOS.

Why GRUB2?

Windows 10, Fedora 24, CentOS 7.2, and macOS 10.11.5 El Capitan operating systems in the GRUB2 menu

There are many different types of firmware that initialize system hardware during the startup process, including Open-EFI, legacy BIOS, and UEFI. GRUB2 supports them all. This broad compatibility ensures that GRUB2 can be used on almost any system, from some of the oldest machines still running to some of the newest hardware on the market.

It can also boot from many file system formats, such as HFS+ (macOS), NTFS (often Windows), ext3/4 (often Linux), XFS, and more, and it supports both MBR (Master Boot Record) and GPT (GUID Partition Table) partitioning schemes.

GRUB2 security

The design of GRUB2 is security-oriented and flexible for a variety of needs. It has two well-known security and privacy features to help protect your system. Normally when starting your computer, you are able to enter your BIOS or UEFI settings and change them without logging in. GRUB2 allows you to set a password that must be entered to change boot settings. This helps keep your system safe from someone who may have physical access to your machine - for example, it can block USB devices from booting up on the system.
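
As a sketch of how this is typically set up on Fedora (the command names come from Fedora's grub2 packages; the config fragment shown is an illustration, and file paths may vary by release):

# Generate a PBKDF2 hash for the boot loader password
grub2-mkpasswd-pbkdf2
# Add the superuser and hash to a config fragment, e.g. /etc/grub.d/40_custom:
#   set superusers="root"
#   password_pbkdf2 root grub.pbkdf2.sha512.10000.<hash from above>
# Then regenerate the configuration (BIOS path shown; UEFI uses a different path)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg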

Additionally, GRUB2 supports Linux Unified Key Setup, or LUKS. When installing an operating system for the first time or formatting a hard drive, there is an extra security option to encrypt the entire file system. LUKS is the tool used to add an encryption layer from the root directory across all parts of the machine. Before you are able to reach the login screen of Fedora or another Linux distribution, you must enter an encryption passphrase to unlock your system. GRUB2 integrates with LUKS and supports booting with an encrypted file system.
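
For completeness, a minimal sketch of the GRUB2 side, assuming /boot itself lives on the LUKS volume (on a default Fedora install /boot is unencrypted and no GRUB2 change is needed):

# /etc/default/grub - let GRUB2 unlock LUKS volumes at boot
GRUB_ENABLE_CRYPTODISK=y
# then regenerate the configuration
sudo grub2-mkconfig -o /boot/grub2/grub.cfg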

In the world of security, not every threat may come from the Internet or a remote hacker. Sometimes the biggest security breach is at the physical layer with who has access to a system. Together, these two security features allow you to keep your system locked down and secure in the event you lose access to your machine.


Example of a customized GRUB2 menu using a custom wallpaper

GRUB2 has powerful settings for security and initializing the operating system, but it also has more features to customize the user experience. It supports background wallpapers, different fonts, and more to personalize your system. The Grub Customizer tool allows you to make these customizations quickly and easily.

To install the tool, open up a command line and enter the following command. You are prompted to install the package.

$ sudo dnf install grub-customizer

After installing, you will be able to open the application on your desktop. When opening it, you will be prompted for your password so the application can make changes to the GRUB2 configuration file. This tool can make advanced changes to your system configuration, but for this guide, we will focus on the Appearance settings tab at the top.

This tab will present you with a variety of options to theme and customize your system. You can select a background image, change the font family, and alter the font colors. The center part of the screen will update with a live preview of what it will look like while you make changes. When you’re done making changes, hit the Save button in the upper-left corner and reboot to see your changes take effect.

Note that if you use a custom font, you MUST use a font size that will fit in your system’s resolution. If the font size is too large for the resolution of your system, GRUB2 will enter a crash loop and you will have to boot from live media (e.g. a USB or CD) to fix this.

Using Grub Customizer to add a background image and custom font to the GRUB2 menu

Fudcon 2016, days of shared experiences…

Posted by Bernardo C. Hermitaño Atencio on October 24, 2016 05:53 AM

After a week and a bit more, it's time to look back at what the Fedora Users and Developers Conference – Fudcon Puno 2016 was: a very well attended event, with many talks and workshops held on October 13th, 14th and 15th. In the following lines I give a brief summary of my contribution to the event.

On the morning of Thursday the 13th came the opening ceremony: authorities and students of the Universidad Nacional del Altiplano, students from other universities and institutes, computing and systems professionals, and Fedora users, developers and ambassadors gathered to take part in FUDCON, with great enthusiasm for sharing and gaining experiences of benefit to everyone.

In the afternoon it was my turn to give a talk called "Experiencias con Fedora en Educación Superior Tecnológica" ("Experiences with Fedora in Technological Higher Education"), where I was able to highlight Fedora's influence as a community contribution and a starting point that helped generate ideas and projects which are solving thorny problems; it is now possible to observe some improvements at the Instituto Superior Tecnológico Público Manuel Seoane Corrales in San Juan de Lurigancho – Lima.


On Friday the 14th my session was in the morning. The talk, "Primeros pasos en el terminal de Linux" ("First steps in the Linux terminal"), was very well attended, and after it ended the participants showed great interest in continuing with the topic. This is well-known evidence in Peru of how outdated the classrooms are and how removed they are from these technologies; the curricula for informatics and systems students in Peruvian higher education urgently need to be reworked.

During the rest of the day I was able to watch several very important talks, with quite interesting contributions from the speakers that motivated me to keep experimenting…


On the 15th, several UNAP students on their way to joining the community, the ambassadors Lorddemon and Barto, and I, guided by Echevemaster, decided to spend the whole day in the packaging workshop, where we looked for applications that are not part of Fedora's package collection so we could study them and try to package them - a process I still have pending to finish.

In the afternoon of the 15th came the closing of the event, with very moving words from the organizers Tonnet and Aly, from the speakers, and from the participants. After making new friends and gaining new knowledge and new ideas, new challenges arrive that motivate us to keep contributing, attentive to the community and its activities.

Everything you know about security is wrong

Posted by Josh Bressers on October 23, 2016 10:21 PM
If I asked everyone to tell me what security is, what they do about it, and why they do it, I wouldn't get two answers that were the same. I probably wouldn't even get two that are similar. Why is this? After recording Episode 9 of the Open Source Security Podcast I co-host, I started thinking a lot about measuring. It came up in the podcast in the context of bug bounties, which get exactly what they measure. But do they measure the right things? I don't know the answer, nor does it really matter. It's just important to keep in mind that in any system, you will get exactly what you measure.

Why do we do the things we do?
I've asked this question before, and I often get answers from people. Some are well thought out, reasonable answers. Some are overly simplistic. Some are just plain silly. All of them are wrong. I'm going to go so far as to say we don't know why we do what we do in most instances. Sure, there might be compliance, with a bunch of rules that everyone knows don't really increase security. Some of us fix security bugs so the bad guys don't exploit them (even though very few actually get exploited). Some of us harden systems using rules that probably don't stop a motivated attacker.

Are we protecting data? Are we protecting the systems? Are we protecting people? Maybe we're protecting the business. Sure, that one sounds good.

Measuring a negative
There's a reason this is so hard and weird, though, and it's only sort of our fault: it's what we try to measure. We are trying to measure something not happening. You cannot measure how many times an event didn't happen. It's also impossible to prove a negative.

Do you know how many car accidents you didn't get in last week? How about how many times you weren't horribly maimed in an industrial accident? How many times did you not get mugged? These questions don't even make sense, no sane person would even try to measure those things. This is basically our current security metrics.

The way we look at security today is all about the negatives. The goal is to not be hacked. The goal is to not have security bugs. Those aren't goals, those are outcomes.

What's our positive?
In order to measure something, it has to be true. We can't prove a negative; we have to prove something in order to measure it. So what's the "positive" we need to look for and measure? This isn't easy. I've been in this industry for a long time and I've done a lot of thinking about this. I'm not sure my list below is right, but getting others to think about this is more important than being right.

As security people, we need to think about risk. Our job isn't to stop bad things, it's to understand and control risk. We cannot stop bad things from happening, the best we can hope for is to minimize damage from bad things. Right about now is where many would start talking about the NIST framework. I'm not going to. NIST is neat, but it's too big for my liking, we need something simple. I'm going to suggest you build a security score card and track it over time. The historical trends will be very important.

Security Score Card
I'm not saying this is totally correct, it's just an idea I have floating in my mind, you're welcome to declare it insane. Here's what I'm suggesting you track.

1) Number of staff
2) Number of "systems"
3) Lines of code
4) Number of security people

That's it.

Here's why, though. Let's think about measuring positives. We can't measure what isn't happening, but we can measure what we have and what is happening. If you work for a healthy company, 1-3 will be increasing. What does your #4 look like? I bet in many organizations it's flat and grossly understaffed. Good staff will help deal with security problems. If you have a good leader and solid staff, a lot of security problems get dealt with. Things like the NIST framework are what happens when you have competent staff who aren't horribly overworked; you can't force a framework on a broken organization, it just breaks it worse. Every organization is different; there is no one framework or policy that will work. The only way we tackle this stuff is by having competent, motivated staff.

The other really important thing this does is make you answer the questions. I bet a lot of organizations can't answer 2 and 3. #1 is usually pretty easy (just ask LDAP), #2 is much harder, and #3 may be impossible for some. These look like easy things to measure, and just like quantum physics - by measuring it we will change it, probably for the better.

If you have 2000 employees, 200 systems, 4 million lines of code, and 2 security people, that's clearly a disaster waiting to happen. If you have 20, there may be hope. I have no idea what the proper ratios should be, if you're willing to share ratios with me I'd love to start collecting data. As I said, I don't have scientific proof behind this, it's just something I suspect is true.

I should probably add one more thing. What we measure not only needs to be true, it needs to be simple.

Send me your scorecard via Twitter

Tangerine 0.23

Posted by Petr Šabata on October 23, 2016 09:51 PM

Just a quick update. I’ve finally found some time to answer an old RFE and extend the Tangerine testrequires hook with some basic support for Test::Needs.

Releasing Tangerine 0.23. This should land in Fedora and EL within the next two weeks, as usual.

Another Fedora cycle, another painless Fedora upgrade

Posted by Kevin Fenzi on October 23, 2016 07:06 PM

As we near the release of Fedora 25, as always I upgrade my main servers to the new release before things come out so I can share any problems or issues I hit and possibly get them fixed before the official release.

As with the last few cycles, there were almost no problems. The one annoying item I hit was a configuration change in squid. In the past squid allowed you to have an ‘http_port NNNN intercept’ line and no regular http_port defined; however, the Fedora 25 version ( squid-4.0.11-1.fc25 ) fails to start with a cryptic “mimeLoadIcon: cannot parse internal URL: http://myhost:0/squid-internal-static/icons/…” error. It took me a while to find out that I now also needed to add a plain ‘http_port NNNN’.
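
In config terms, the fix looked something like this (the port numbers are placeholders, not from my actual setup):

# squid.conf
http_port 3129 intercept   # was enough on its own with squid 3.x (Fedora 24)
http_port 3128             # squid 4.x (Fedora 25) also needs a regular port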

Otherwise everything went just fine. Many thanks to our excellent QA and upgrade developers.


Posted by Richard W.M. Jones on October 23, 2016 11:27 AM

I was reading about JWZ’s awesome portrait serial terminal and wondering what a serial terminal would look like today if we implemented it using modern technology.

You could get a flat screen display and mount it in portrait mode. Most have VESA attachments, so it’s a matter of finding a nice portrait VESA stand.

To implement the terminal you could fix a Raspberry Pi or similar to the back of the screen. Could it be powered by the same PSU as the screen? Perhaps if the screen had a USB port.

For the keyboard you’d use a nice USB keyboard.

Of course there’s no reason these days to use an actual serial line, nor to limit ourselves to just a text display. Use wifi to link to the host computer. Use software to emulate an old orange DEC terminal, and X11 to display remote graphics.

Building and Booting Upstream Linux and U-Boot for Orange Pi One ARM Board (with Ethernet)

Posted by Christopher Smart on October 23, 2016 04:16 AM

My home automation setup will make use of Arduinos and also embedded Linux devices. I’m currently looking into a few boards to see if any meet my criteria.

The most important factor for me is that the device must be supported in upstream Linux (preferably stable, but mainline will do) and U-Boot. I do not wish to use any old, crappy, vulnerable vendor trees!

The Orange Pi One is a small, cheap ARM board based on the AllWinner H3 (sun8iw7p1) SOC with a quad-core Cortex-A7 ARM CPU and 512MB RAM. It has no wifi, but does have onboard 10/100 Ethernet provided by the SOC (Linux patches incoming). It has no NAND (not supported upstream yet anyway), but does support SD. There is lots of information available at http://linux-sunxi.org.

Orange Pi One

Note that while Fedora 25 does not yet support this board specifically it does support both the Orange Pi PC (which is effectively a more full-featured version of this device) and the Orange Pi Lite (which is the same but swaps Ethernet for WiFi). Using either of those configurations should at least boot on the Orange Pi One.

Connecting UART

The UART on the Pi One uses the GND, TX and RX connections which are next to the Ethernet jack. Plug the corresponding cables from a 3.3V UART cable onto these pins and then into a USB port on your machine.

UART Pin Connections (RX yellow, TX orange, GND black)

Your device will probably be /dev/ttyUSB0, but you can check this with dmesg just after plugging it in.
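
For example (the exact adapter name will differ on your machine):

# The most recent kernel messages should show the adapter attaching
dmesg | grep ttyUSB
# e.g. "usb 1-1.2: cp210x converter now attached to ttyUSB0"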

Now we can simply use screen to connect to the UART, but you’ll have to be in the dialout group.

sudo gpasswd -a ${USER} dialout
newgrp dialout
screen /dev/ttyUSB0 115200

Note that you won’t see anything just yet without an SD card that has a working bootloader. We’ll get to that shortly!

Partition the SD card

First things first, get yourself an SD card.

While U-Boot itself is written to the raw card and doesn’t need the card to be partitioned, a partition will be required later to hold the boot files.

U-Boot needs the card to have an msdos partition table with a small boot partition (ext now supported) that starts at 1MB. You can use the rest of the card for the root file system (but we’ll boot an initramfs, so it’s not needed).

Assuming your card is at /dev/sdx (replace as necessary, check dmesg after plugging it in if you’re not sure).

sudo umount /dev/sdx* # makes sure it's not mounted
sudo parted -s /dev/sdx \
mklabel msdos mkpart \
primary ext3 1M 10M \
mkpart primary ext4 10M 100%

Now we can format the partitions (upstream U-Boot supports ext3 on the boot partition).
sudo mkfs.ext3 /dev/sdx1
sudo mkfs.ext4 /dev/sdx2

Leave your SD card plugged in, we will need to write the bootloader to it soon!

Upstream U-Boot Bootloader

Install the arm build toolchain dependencies.

sudo dnf install gcc-arm-linux-gnu binutils-arm-linux-gnu

We need to clone upstream U-Boot Git tree. Note that I’m checking out the release directly (-b v2016.09.01) but you could leave this off to get master, or change it to a different tag if you want.
cd ${HOME}
git clone --depth 1 -b v2016.09.01 git://git.denx.de/u-boot.git
cd u-boot

There is a defconfig already for this board, so simply make this and build the bootloader binary.
CROSS_COMPILE=arm-linux-gnu- make orangepi_one_defconfig
CROSS_COMPILE=arm-linux-gnu- make -j$(nproc)

Write the bootloader to the SD card (replace /dev/sdx, like before).
sudo dd if=u-boot-sunxi-with-spl.bin of=/dev/sdx bs=1024 seek=8

Testing our bootloader

Now we can remove the SD card and plug it into the powered off Orange Pi One to see if our bootloader build was successful.

Switch back to your terminal that’s running screen and then power up the Orange Pi One. Note that the device will try to netboot by default, so you’ll need to hit the enter key when you see a line that says the following.

(Or you can just repeatedly hit the enter key in the screen console while you turn the device on.)

Note that if you don’t see anything, swap the RT and TX pins on the UART and try again.

With any luck you will then get to a U-Boot prompt where we can check the build by running the version command. It should have the U-Boot version we checked out from Git and today’s build date!

U-Boot version

Hurrah! If that didn’t work for you, repeat the build and writing steps above. You must have a working bootloader before you can get a kernel to work.

If that worked, power off your device and re-insert the SD card into your computer and mount it at /mnt.

sudo umount /dev/sdx* # unmount everywhere first
sudo mount /dev/sdx1 /mnt

Creating an initramfs

Of course, a kernel won’t be much good without some userspace. Let’s use Fedora’s static busybox package to build a simple initramfs that we can boot on the Orange Pi One.

I have a script that makes this easy, you can grab it from GitHub.

Ensure your SD card is plugged into your computer and mounted at /mnt, then we can copy the file on!

cd ${HOME}
git clone https://github.com/csmart/custom-initramfs.git
cd custom-initramfs
./create_initramfs.sh --arch arm --dir "${PWD}" --tty ttyS0

This will create an initramfs for us in your custom-initramfs directory, called initramfs-arm.cpio.gz. We’re not done yet, though, we need to convert this to the format supported by U-Boot (we’ll write it directly to the SD card).

gunzip initramfs-arm.cpio.gz
sudo mkimage -A arm -T ramdisk -C none -n uInitrd \
-d initramfs-arm.cpio /mnt/uInitrd

Now we have a simple initramfs ready to go.

Upstream Linux Kernel

The Ethernet driver has been submitted to the arm-linux mailing list (it’s up to its 4th iteration) and will hopefully land in 4.10 (it’s too late for 4.9 with RC1 already out).

Clone the mainline Linux tree (this will take a while). Note that I’m getting the latest tagged release by default (-b v4.9-rc1) but you could leave this off or change it to some other tag if you want.

cd ${HOME}
git clone --depth 1 -b v4.9-rc1 \
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux

Or, if you want to try linux-stable, clone this repo instead.
git clone --depth 1 -b v4.8.4 \
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git linux

Now go into the linux directory.
cd linux

Patching in EMAC support for SOC

If you don’t need the onboard Ethernet, you can skip this step.

We can get the patches from the Linux kernel’s Patchwork instance, just make sure you’re in the directory for your Linux Git repository.

Note that these will probably only apply cleanly on top of mainline v4.9 Linux tree, not stable v4.8.

# [v4,01/10] ethernet: add sun8i-emac driver
wget https://patchwork.kernel.org/patch/9365783/raw/ \
-O sun8i-emac-patch-1.patch
# [v4,04/10] ARM: dts: sun8i-h3: Add dt node for the syscon
wget https://patchwork.kernel.org/patch/9365773/raw/ \
-O sun8i-emac-patch-4.patch
# [v4,05/10] ARM: dts: sun8i-h3: add sun8i-emac ethernet driver
wget https://patchwork.kernel.org/patch/9365757/raw/ \
-O sun8i-emac-patch-5.patch
# [v4,07/10] ARM: dts: sun8i: Enable sun8i-emac on the Orange PI One
wget https://patchwork.kernel.org/patch/9365767/raw/ \
-O sun8i-emac-patch-7.patch
# [v4,09/10] ARM: sunxi: Enable sun8i-emac driver on sunxi_defconfig
wget https://patchwork.kernel.org/patch/9365779/raw/ \
-O sun8i-emac-patch-9.patch

We will apply these patches (you could also use git apply, or grab the mbox if you want and use git am).

for patch in 1 4 5 7 9 ; do
    patch -p1 < sun8i-emac-patch-${patch}.patch
done

Hopefully that will apply cleanly.

Building the kernel

Now we are ready to build our kernel!

Load the default kernel config for the sunxi boards.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make sunxi_defconfig

If you want, you could modify the kernel config here, for example remove support for other AllWinner SOCs.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make menuconfig

Build the kernel image and device tree blob.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make -j$(nproc) zImage dtbs

Mount the boot partition and copy on the kernel and device tree file.
sudo cp arch/arm/boot/zImage /mnt/
sudo cp arch/arm/boot/dts/sun8i-h3-orangepi-one.dtb /mnt/

Bootloader config

Next we need to make a bootloader file, boot.cmd, which tells U-Boot what to load and boot (the kernel, device tree and initramfs).

The bootargs line says to output the console to serial and to boot from the ramdisk. Variables are used for the memory locations of the kernel, dtb and initramfs.

Note, if you want to boot from the second partition instead of an initramfs, change root argument to root=/dev/mmcblk0p2 (or other partition as required).

cat > boot.cmd << \EOF
ext2load mmc 0 ${kernel_addr_r} zImage
ext2load mmc 0 ${fdt_addr_r} sun8i-h3-orangepi-one.dtb
ext2load mmc 0 ${ramdisk_addr_r} uInitrd
setenv bootargs console=ttyS0,115200 earlyprintk root=/dev/root \
rootwait panic=10
bootz ${kernel_addr_r} ${ramdisk_addr_r} ${fdt_addr_r}
EOF

Compile the bootloader file and output it directly to the SD card at /mnt.
sudo mkimage -C none -A arm -T script -d boot.cmd /mnt/boot.scr

Now, unmount your SD card.

sudo umount /dev/sdx*

Testing it all

Insert it into the Orange Pi One and turn it on! Hopefully you’ll see it booting the kernel on your screen terminal window.

You should be greeted by a login prompt. Log in with root (no password).

Login prompt

That’s it! You’ve built your own Linux system for the Orange Pi One!

If you want networking, at the root prompt give the Ethernet device an IP address on your network and test with a ping.

Here’s an example:

Networking on Orange Pi One

There is clearly lots more you can do with this device.

Memory usage


FUDCon Puno, Days 2 and 3

Posted by Tonet Jallo on October 22, 2016 11:36 PM

Hello again. Today, almost a week after FUDCon ended, I'm sitting down to write about this great experience, starting with…

Day 2

This day started off better than the first, since the barcamp schedule was already defined and the attendees had absorbed the idea of being at an unconference. The logic for this FUDCon was simple: make it an event out of the ordinary, with no stiff people up front and no wall between the audience and the speakers. Here are some photos of the sessions scheduled for the second day:

It is worth noting that roughly ten potential contributors were identified, who from this day on were being mentored. Three groups were formed: one of packagers guided by Eduardo Echeverria, one of designers guided by Junior Wolnei, and one of translators/writers guided by Eduardo Mayorga.

When the day's sessions ended, the whole Fedora Latam group got together to talk about the event, and the conversation was very enjoyable.

Day 3

This was RELATIVELY the worst day of FUDCon, and I stress RELATIVELY, because everything started at the usual time, with the one problem that there were hardly any attendees - literally between five and ten people until 11:00am. It's actually normal for something like this to happen at a multi-day event whose last day falls on a Saturday. This needn't have been such a big problem, since the time could have been used for a Fedora Activity Day, which might even have been more productive than just having talks, but that's not what happened. After 11:00am more people arrived; they weren't many, but I believe they were the ones who really wanted to take something away from this event - people genuinely interested in free software, or genuinely interested in Fedora. In the end the event closed with about fifty attendees, and that was great. We the organizers never worried about headcount; HAVING A BIG AUDIENCE NEVER MATTERED TO US, because we don't believe the success of an event rests on the number of attendees. As the saying goes, what you want is QUALITY, not QUANTITY, so I think that explains everything.

Coming back to the word RELATIVELY: this FUDCon was not aimed at gathering a big crowd. Before FUDCon, many free software events had been held here - from my first participation in 2012, starting with the FLISOL events, which only covered generic free software topics, through the Fedora Weekends, which focused on awakening interest in the community and finding contributors specifically for Fedora. And we did it: they aren't many, but there they are. We have to follow up with them and give them plenty of support so they can develop their skills, contribute to the community, and in doing so help build a much more solid local community.

This is a very broad summary of what FUDCon was for me. I honestly never thought this event could come to Puno; from what I've been told, it's the first FUDCon held in a city with NO AIRPORT. I don't know whether it was a good or a bad FUDCon, but it truly changed my life as an organizer, and I'm sure it did the same for everyone who had a hand in this event, which could not have happened the way it did without the grain of sand each one contributed.

To wrap up, I'll soon write an article with conclusions from FUDCon by way of feedback (yes, I promised you @bexelbie): the obstacles, the problems, the nice parts, what we got out of it, and so on.

So, see you soon, and I hope this event was valuable for the community and for all the attendees who stayed with us until the end.

Sharing of posts disabled

Posted by Fedora-Blog.de on October 22, 2016 09:26 PM

Since the feature for sharing posts on Facebook and the like was recently abused to send spam, and our hoster has already contacted us about it, the feature has been disabled for the time being.

Krita 3.1 second beta.

Posted by mythcat on October 22, 2016 06:53 PM
The second Krita 3.1 beta comes with a full set of features and fixes.
The Linux version to download is krita-3.0.91-x86_64.appimage.
The most useful features to be found in this 2D drawing software are:

  • ffmpeg-based export to animated GIF and video formats;
  • the OpenGL canvas is fixed;
  • OSX will be officially supported;
  • bug fixes on Windows and Linux;
  • the beta still contains the colorize mask/lazy brush plugin;
  • experiments with new algorithms are expected, but in the next release.
You can read more about this here.
Krita ships with this licensing statement:

Krita is free software under the GNU Public License, Version 2 or any later version at your option. This means you can get the source code for Krita from . The license only applies to the Krita source code, not to your work! You can use Krita to create artworks for commercial purposes without any limitation.

The Krita Shell Extension comes under the MIT license:
The MIT License
Copyright (c) 2016 Alvin Wong

some IOT ideas

Posted by Kevin Fenzi on October 22, 2016 06:23 PM

With Friday’s massive denial of service against a large DNS provider (in turn causing outages for a lot of other things), carried out using a network of insecure IoT (Internet of Things) devices (mostly cameras), a lot of folks are thinking about how to address IoT problems. There are a lot of problems and no easy answers, but I thought I would throw out a few ideas here and see if any resonate with others.

First, the problems: IoT devices are insecure and easily taken over for denial of service or other attacks; there is little to no economic incentive to make them more secure; consumers largely don’t care as long as the devices keep doing their main function; and devices seldom get updates - when they do, it’s only for a short time before the company that made them moves on to something else.

Some ideas:

  • Make things vastly more cheap. Yeah, that’s right, more cheap. To the point where you can dispose of, or toss in recycling, the device that just went out of support or was found to be used in some botnet. Or 3D print a new device. Of course this is not going to happen for a long while.
  • Make updates very reliable. I think we can do this with something like ostree/atomic + some wrapper hardware. Upgrade, reboot, if things don’t come back in X minutes, reboot back to the last working version and send for help.
  • Get IoT devices to use mainstream open Linux distros. These would provide source and upgrades, and device makers wouldn’t have to worry about supporting things themselves. This would probably take legislation and a big push for Linux distros to cater to IoT devices, and there would likely still be some hardware-specific code (but it could be open and maintained in distros).
  • Require Internet insurance for users of ISPs. ISPs that police botnets and other harmful devices would pay lower rates than ones that didn’t care; the money raised could be used to shut down harmful devices.

It’s a pretty difficult problem sadly, and there’s no good answer, but we are going to have to start doing something or start getting used to large DDOSes taking out a bunch of things we use often.

GTK+ happenings

Posted by Matthias Clasen on October 22, 2016 03:57 PM

I haven’t written about GTK+ development in some time. But now there are some exciting things happening that are worth writing about.


Back in June, a good number of GTK+ developers came together for a hackfest in Toronto. It was a very productive gathering. One of the topics we discussed there was the (lack of) stability of GTK+ 3 and versioning. We caused a bit of a firestorm by blogging about this right away… so we went back to the drawing board and had another long discussion about the pros and cons of various versioning schemes at GUADEC.

GTK+ BOF in Karlsruhe

The final, agreed-on plan was published on the GTK+ blog, and you can read it there.


Fast-forward to today, and we’ve made good progress on putting this plan into place.

GTK+ has been branched for 3.22, and all future GTK+ 3 releases will come from this branch. This is very similar to GTK+ 2, where we have the forever-stable 2.24 branch.  We plan to maintain the 3.22 branch for several years, so applications can rely on a stable GTK+ 3.

One activity that you can see in the branch currently is that we are deprecating APIs that will go away in GTK+ 4. Most deprecations have been in place for a while (some even from 3.0!),  but some functions have just been forgotten. Marking them as deprecated now will make it easier to port to GTK+ 4 in the future. Keep in mind that deprecations are an optional service – you don’t have to rush to act on them unless you want to port to the next version.

To avoid unnecessary heartburn and build breakage, we’ve switched jhbuild, GNOME continuous and the flatpak runtimes over to using the 3.22 branch before opening the master branch for new development, and did the necessary work to make the two branches parallel-installable.

With all these preparations in place, Benjamin and Timm went to work and did a big round of deprecation cleanup. Altogether, this removed some 80,000 lines of code. Next, we’ve merged Emmanuele’s GSK work. And there is a lot more work queued up, from modernizing the GDK layer, to redoing input handling, to building with meson.

The current git version of GTK+ calls itself 3.89, and we’re aiming to do a 3.90 release in spring, ideally keeping the usual 6 months cadence.

…and you

We hope that at least some of the core GNOME applications will switch to using 3.90 by next spring, since we need testing and validation. But… currently things are still a bit rough in master. The GSK port will need some more time to shake out rendering issues and make it as fast as it should be.

Therefore, we recommend that you stick with the 3.22 branch until we do a 3.89.1 release. By that time, the documentation should also have a 3 → 4 migration guide to help you with porting.

If you are eager to get ready for GTK+ 4 now, you can prepare your application by eliminating the deprecations that show up when you build against the latest 3.22 release.
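
For instance, something along these lines will surface the deprecation warnings for a single source file (the file name and flags are illustrative; adapt them to your actual build system):

cc -c myapp.c -o myapp.o \
    $(pkg-config --cflags gtk+-3.0) \
    -Wdeprecated-declarations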


This is an exciting time for GTK+ ! We will post regular updates as things are landing, but just following the weekly updates on the GTK+ blog should give you a good idea of what is going on.

FUDCon:Puno 2016

Posted by Athos Ribeiro on October 22, 2016 02:53 PM

FUDCon Puno foi a edição de 2016 da conferência de desenvolvedores e usuários do Project Fedora, realizada na cidade de Puno, Peru, entre os dias 13 e 15 de Outubro. Esta foi a primeira edição do evento da qual participei desde que passei a contribuir de forma mais ativa com o projeto. A participação no FUDCon é gratuita e o evento é aberto à todos.

About Puno

Puno is a small city on the shores of Lake Titicaca, 3,800 metres above sea level. The city's economy revolves around tourism, since people from all over the globe visit to see the floating islands of Uros and Lake Titicaca itself.

The venue

The Universidad Nacional del Altiplano Puno hosted the event, and most of the attendees were students of technology courses. The university's good location, and the fact that Puno is a small city, meant that all the Fedora Project contributors could get around on foot throughout the event.


Tonet, Aly, and several other people were simply outstanding on the organizational side: they dedicated 100% of their time to making sure the event ran as expected, and the result was a smooth event with no major unpleasant surprises. Beyond that, all the organizers were friendly and welcoming people, always willing to help keep the event running well and to learn more about the Fedora Project (some of the organizers were not part of the project).


The first day featured some talks and the BarCamp, where participants selected the activities that would take place over the remaining days of the event. The participation badges were not ready for the first day, and once we realized they would not be ready before the event ended, Wolnei, Itamar and I contacted people from the infrastructure and design teams (thanks, jflory and nb) to get the badges approved and deployed so they could be awarded on the other two days.

On the second day, the three rooms hosting the event's activities were quite full. My first presentation took place that day, where I talked a bit about my experience with COPR and how and why I decided to become a project contributor. Since most of the attendees were students living in Puno who were not fluent in English, and my Spanish was limited to “hola” and “cerveza”, I counted on the help of echevemaster, who translated my presentation from English to Spanish.

After that first presentation, I got to know and understand the event's audience better: there were not many Linux users among the students, which made the more technical talks too advanced for most of them. It doesn't make much sense to talk about specific virtualization tools or package build systems to people who haven't yet been introduced to a GNU/Linux distribution.

Given the circumstances, still on the second day, I talked about package managers in more general terms, so the students could understand the importance of tracking files and packages in a system, and then introduced them to RPM. This time, mayorga was my English-to-Spanish translator.

The last day of the event fell on a Saturday, which probably kept students away and made for a quieter final day, forcing the organizers to cut the time of all activities in half and hold them in a single room. At the end of the day I talked a bit about Free Software licenses (my original proposal was an introduction to Ansible and the project's infrastructure) with the help of Wolnei, who translated from Portuguese to Spanish.


Overall, the event was great for bringing the Latin American developer community together. I met, in person, people I had been interacting with for some time over IRC and Bugzilla. Throughout the event, ideas came up both for the LATAM community and for the project more broadly. The event motivated me to do more package reviews, and who knows, maybe become a sponsor in the packagers group in the near future.

Several attendees approached us, interested in learning more about the translation, documentation, infrastructure and packaging teams. I believe the event served to introduce the project to potential contributors in Peru.

Saturday night featured a lively karaoke session, where we had a lot of fun and I got to know people a little better, beyond the project and work (F for Friends). I had a great time singing with dgilmore, Brian (bex) and Wolnei. Kiara surprised everyone at the end of the night with a wonderful voice!

Sunday was the day to see the city and eat alpaca barbecue. Once again the organizers got it right, giving everyone a very fun day. We tried to play football during the barbecue, but the altitude got the better of us and we gave up after 10 minutes!

There was no retrospective of the event, which was a downside. We agreed to discuss the event at the group's meeting on 29 October, once everyone has returned to their countries and rested. The photos will eventually be made available on Tonet's Fedora People page.

My Custom Open Source Home Automation Project – Part 3, Roll Out

Posted by Christopher Smart on October 22, 2016 07:41 AM

In Part 1 I discussed motivation and research where I decided to build a custom, open source wired solution. Part 2 discussed the prototype and other experiments.

Because we had to fit in with the builder, I didn’t have enough time to finalise the smart system, so I needed a dumb mode. This Part 3 is about rolling out dumb mode in the Smart house!

Operation “dumb mode, Smart house” begins

We had a proven prototype, now we had to translate that into a house-wide implementation.

First we designed and mapped out the cabling.

Cabling Plans


  • Cat5e (sometimes multiple runs) for room Arduinos
  • Cat5e to windows for future curtain motors
  • Reed switch cables to light switch
  • Regular Cat6 data cabling too, of course!
  • Whatever else we thought we might need down the track

Time was tight (fitting in with the builder) but we got there (mostly).

  • Ran almost 2 km of cable in total
  • This was a LOT of work and took a LOT of time


Cabled Wall Plates

Cabled Wireless Access Point

Cable Run

Electrical cable

This is the electrician’s job.

  • Electrician ran each bank of lights on their own circuit
  • Multiple additional electrical circuits
    • HA on its own electrical circuit, UPS backed
    • Study/computers on own circuit, UPS backed
    • Various others like dryer, ironing board, entertainment unit, alarm, ceiling fans, ovens, etc
  • Can leave the house and turn everything off
    (except essentials)


The relays had to be reliable, but also available off the shelf, as I didn’t want something custom or hard to replace. Again, for devices that draw too much current for the relay, the relay will throw a contactor instead so that the device can still be controlled.

  • Went with Finder 39 series relays, specifically:
  • Very thin profile
  • Built in fuses
  • Common bus bars
  • Single Pole Double Throw (SPDT)

Finder Relays with Din Mount Module

These are triggered by 24V DC which switches the 240V AC for the circuit. The light switches are running 24V and when you press the button it completes the circuit, providing 24V input to the relay (which turns on the 240V and therefore the light).

There are newer relays now which accept a range of input voltages (e.g. 0-24V); I would probably use those instead if I were doing it again today, so that they could be more easily fired from an array of outputs (not just a 24V relay driver shield).

The fact that they are SPDT means that I can set the relay to be normally open (NO) in which case the relay is off by default, or normally closed (NC) in which case the relay is on by default. This means that if the smart system goes down and can’t provide voltage to the input side of the relay, certain banks of lights (such as the hallway, stairs, kitchen, bathroom and garage lights) will turn on (so that the house is safe while I fix it).

Bank of Relays

In the photo above you can see the Cat5e 24V lines from the light switch circuits coming into the grey terminal block at the bottom. They are then cabled to the input side of the relays. This means that we don’t touch any AC and I can easily change what’s providing the input to the relay (to a relay board on an Arduino, for example).

Rack (excuse the messy data cabling)

There are two racks, one upstairs and one downstairs, that provide the infrastructure for the HA and data networks.

PSU running at 24V

Each rack has a power supply unit (PSU) running at 24V which provides the power for the light switches in dumb mode. These are running in parallel to provide redundancy for the dumb network in case one dies.

You can see that there is very little voltage drop.

Relay Timers

The Finder relays also support timer modules, which is very handy for something that you want to automatically turn off after a certain (configurable) amount of time.

  • Heated towel racks are bell press switches
  • Uses a timer relay to auto turn off
  • Modes and delay configurable via dip switches on relay

UPS backed GPO Circuits

UPS in-lined GPO Circuits

Two GPO circuits are backed by UPS, which I can simply plug in-line and feed back to the circuit. These are the HA network as well as the computer network. If the power is cut to the house, my HA will still have power (for a while) and the Internet connection will remain up.

Clearly I’ll have no lights if the power is cut, but I could power some emergency LED lighting strips from the UPS lines – that hasn’t been done yet though.


The switches for dumb mode are also regular, off-the-shelf light switches.

  • Playing with light switches (yay DC!)
  • Cabling up the switches using standard Clipsal gear
  • Single Cat5e cable can power up to 4 switches
  • Support one, two, three and four way switches
  • Bedroom switches use switch with LED (can see where the switch is at night)

Bathroom Light Switch (Dumb Mode)

We have up to 4-way switches so you can turn a single light on or off from 4 locations. The entrance light is wired up this way, you can control it from:

  • Front door
  • Hallway near internal garage door
  • Bottom of the stairs
  • Top of the stairs

A single Cat5e cable can run up to 4 switches.

Cat5e Cabling for Light Switch

  • Blue and orange +ve
  • White-blue and white-orange -ve
  • Green switch 1
  • White-green switch 2
  • Brown switch 3
  • White-brown switch 4

Note that we’re using two wires together for +ve and -ve, which helps increase capacity and reduce voltage drop (following the Clipsal standard wiring of blue and orange pairs).

Later in Smart Mode, this Cat5e cable will be re-purposed as Ethernet for an Arduino or embedded Linux device.

Hallway Passive Infrared (PIR) Motion Sensors

I wanted the lights in the hallway to automatically turn on at night if someone was walking to the bathroom or what not.

  • Upstairs hallway uses two 24V PIRs in parallel
  • Either one turns on lights
  • Connected to the dumb mode network, so they fire the relay like everything else
  • Can be overridden by switch on the wall
  • Adjustable for sensitivity, light level and length of time

Hallway PIRs

Tunable settings for PIR

Tweaking the PIR means I have it only turning on the lights at night.

Dumb mode results

We have been using dumb mode for over a year now and it has never skipped a beat.

Now I just need to find the time to start working on the Smart Mode…

My Custom Open Source Home Automation Project – Part 2, Design and Prototype

Posted by Christopher Smart on October 22, 2016 07:35 AM

In Part 1 I discussed motivation and research where I decided to build a custom, open source wired solution. In this Part 2 I discuss the design and the prototype that proved the design.

Wired Design

Although there are options like 1-Wire, I decided that I wanted more flexibility at the light switches.

  • Inspired by Jon Oxer’s awesome SuperHouse.tv
  • Individual circuits for lights and some General Purpose Outlets (GPO)
  • Bank of relays control the circuits
  • Arduinos and/or embedded Linux devices control the relays

How would it work?

  • One Arduino or embedded Linux device per room
  • Run C-Bus Cat5e cable to light switches to power Arduino, provide access to HA network
  • Room Arduino takes buttons (lights, fans, etc) and sensors (temp, humidity, reed switch, PIR, etc) as inputs
  • Room Arduino sends network message to relay Arduino
  • Arduino with relay shield fires relay to enable/disable power to device (such as a light, fan, etc)

Basic design

Of course this doesn’t just control lights, but also towel racks, fans, etc. Running C-Bus cable means that I can more easily switch to their proprietary system if I fail (or perhaps sell the house).

A relay is fine for devices such as LED downlights which don’t draw much current, but for larger devices (such as ovens and airconditioning) I will use the relay to throw a contactor.

Network Messaging

For the network messaging I chose MQTT (as many others have).

  • Lightweight
  • Supports encryption
  • Uses publish/subscribe model with a broker
  • Very easy to set up and well supported by Arduino and Linux

The way it works would be for the relay driver to subscribe to topics from devices around the house. Those devices publish messages to the topic, such as when buttons are pressed or sensors are triggered. The relay driver parses the messages and reacts accordingly (turning a device on or off).
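
As a rough illustration, here is what the relay-driver side of that publish/subscribe flow can look like in Python with paho-mqtt. The broker hostname, topic layout and payload format below are my own assumptions for the sketch, not necessarily what this project uses.

# Minimal sketch of the relay-driver side using paho-mqtt (pip install paho-mqtt).
# Broker address, topics and payloads are illustrative assumptions.
import paho.mqtt.client as mqtt

BROKER = "ha-broker.local"   # assumed hostname of the MQTT broker
TOPIC = "house/+/switch"     # assumed topic scheme: house/<room>/switch

def on_connect(client, userdata, flags, rc):
    # (Re)subscribe to all switch topics whenever we connect.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Payload is assumed to look like "light1:on" or "light1:off".
    device, _, state = msg.payload.decode().partition(":")
    print(f"{msg.topic}: turn {device} {state}")
    # ...a real driver would fire the relay output mapped to `device` here...

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER)
client.loop_forever()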

Cabling Benefits

  • More secure than wireless
  • More future proof
  • DC only, no need to touch AC
  • Provides PoE for devices and motors
  • Can still use wireless (e.g. ZigBee) if I want to
  • Convert to proprietary system (C-Bus) if I fail
  • My brother is a certified cabler 🙂


  • Got some Freeduinos and relay boards (from Freetronics)
  • Hacked around with some ideas, was able to control the relays
  • Basic concept seems doable
  • More on that later..

However, I realised I wasn’t going to get this working in time. I needed a “dumb” mode that wouldn’t require any computers to turn on the lights at least.

Dumb Mode Prototype

So, the dumb mode looks like this.

  • Use the same Cat5e cabling so that Arduino devices can be installed later for smart mode
  • Use standard off the shelf Clipsal light switches
    • Support one, two, three and four way switching
  • Run 24 volts over the Cat5e
  • Light switch completes the circuit which feeds 24 volts into the relay
  • Relay fires the circuit and light turns on!

Dumb mode design

We created a demo board that supported both dumb mode and smart mode, which proved that the concept worked well.

HA Prototype Board

The board has:

  • Six LEDs representing the lights
  • Networking switch for the smart network
  • One Arduino as the input (representing the light switch)
  • One Arduino as the relay driver
  • One Raspberry Pi (running Fedora) as the MQTT broker
  • Several dumb mode multi-way light switches
  • Smart inputs such as:
    • Reed switch
    • Temperature and humidity sensor
    • Light sensor
    • PIR

The dumb lights work without input from the smart network.

In smart mode, the Arduinos and Pi are on the same HA network and connect to the broker running on the Pi.

The input Arduino publishes MQTT messages from inputs such as sensors and buttons. The relay Arduino is subscribed to those topics and responds accordingly (e.g. controlling the relay when appropriate).
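
The input side is the mirror image; the topics and payloads here are the same assumed conventions as in the earlier sketch, not the project's actual ones.

# Minimal sketch of an input node publishing button/sensor events with paho-mqtt.
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("ha-broker.local")   # assumed broker hostname (the Raspberry Pi)
client.loop_start()

# Pretend the kitchen light button was pressed.
client.publish("house/kitchen/switch", "light1:on")

# Sensors can publish on their own topics at intervals.
client.publish("house/kitchen/temperature", "21.5")

time.sleep(1)    # give the network loop a moment to flush
client.loop_stop()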

Dimming Lights

Also played with pulse width modulation (PWM) for LED downlights.

  • Most LEDs come with smart dimmable drivers (power packs) that use leading or trailing edge on AC
  • Wanted to control brightness via DC
  • Used an Arduino to program a small ATTiny for PWM
  • Worked, but only with non-smart driver
  • Got electrician to install manual dimmers for now where needed, such as family room


Given the smart power packs, I cannot dim on the DC side, unless I replace the power packs (which is expensive).

In future, I plan to put some leading/trailing edge dimmers inline on the AC side of my relays (which will need an electrician) which I can control from an Arduino or embedded Linux device via a terminal block. This should be more convenient than replacing the power packs in the ceiling space and running lines to control the ATTiny devices.
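
For anyone unfamiliar with PWM, the idea is just to switch the output on and off quickly and vary the duty cycle to control apparent brightness. Here is a hedged sketch in Python with RPi.GPIO on a Raspberry Pi; the actual build used an ATTiny programmed from an Arduino, and the pin and frequency below are assumptions.

# PWM brightness ramp using RPi.GPIO; pin 18 and 1 kHz are assumed values.
import time
import RPi.GPIO as GPIO

LED_PIN = 18
GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

pwm = GPIO.PWM(LED_PIN, 1000)   # 1 kHz carrier
pwm.start(0)
try:
    # Ramp the duty cycle up and back down to fade the LED in and out.
    for duty in list(range(0, 101, 5)) + list(range(100, -1, -5)):
        pwm.ChangeDutyCycle(duty)
        time.sleep(0.1)
finally:
    pwm.stop()
    GPIO.cleanup()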


Doors are complicated and have quite a few requirements.

  • Need to also work with physical key
  • Once in, door should be unlocked from inside
  • Need to be fire-able from an Arduino
  • Work with multiple smart inputs, e.g. RFID, pin pad
  • Played with wireless rolling code; an Arduino can fire the remote (Jon Oxer has done this)
  • Maybe pair this with deadlock and electric strike
  • Perhaps use electronic door closers

I’m not sure what route I will take here yet, but it’s been cabled up and electronic strike plates are in place.

At this point I had a successful prototype which was ready to be rolled out across the house. Stay tuned for Part 3!

My Custom Open Source Home Automation Project – Part 1, Motivation and Research

Posted by Christopher Smart on October 22, 2016 07:26 AM

In January 2016 I gave a presentation at the Canberra Linux Users Group about my journey developing my own Open Source home automation system. This is an adaptation of that presentation for my blog. Big thanks to my brother, Tim, for all his help with this project!

Comments and feedback welcome.

Why home automation?

  • It’s cool
  • Good way to learn something new
  • Leverage modern technology to make things easier in the home

At the same time, it’s kinda scary. There is a lack of standards and lack of decent security applied to most Internet of Things (IoT) solutions.

Motivation and opportunity

  • Building a new house
  • Chance to do things more easily at frame stage while there are no walls

Frame stage

Some things that I want to do with HA

  • Respond to the environment and people in the home
  • Alert me when there’s a problem (fridge left open, oven left on!)
  • Gather information about the home, e.g.
    • Temperature, humidity, CO2, light level
    • Open doors and windows and whether the house is locked
    • Electricity usage
  • Manage lighting automatically, switches, PIR, mood, sunset, etc
  • Control power circuits
  • Manage access to the house via pin pad, proximity card, voice activation, retina scans
  • Control gadgets, door bell/intercom, hot water, AC heating/cooling, exhaust fans, blinds and curtains, garage door
  • Automate security system
  • Integrate media around the house (movie starts, dim the lights!)
  • Water my garden, and more..

My requirements for HA

  • Open
  • Secure
  • Extensible
  • Prefer DC only, not AC
  • High Wife Acceptance Factor (important!)

There’s no existing open source IoT framework that I could simply install, sit back and enjoy. Where’s the fun in that, anyway?

Research time!

Three main options:

  • Wireless
  • Wired
  • Combination of both

Wireless Solutions

  • Dominated by the proprietary Z-Wave (although it has since become more open)
  • Open standards-based options also exist, like ZigBee and 6LoWPAN

Z-Wave modules

Wireless Pros

  • Lots of different gadgets available
  • Gadgets are pretty cheap and easy to find
  • Easy to get up and running
  • Widely supported by all kinds of systems

Wireless Cons

  • Wireless gadgets are pretty cheap and nasty
  • Most are not open
  • Often not updateable, potentially insecure
  • Connect to AC
  • Replacing or installing a unit requires an electrician
  • Often talk to the “cloud”

So yeah, I could whack those up around my house, install a bridge and move on with my life, but…

  • Not as much fun!
  • Don’t want to rely on wireless
  • Don’t want to rely on an electrician
  • Don’t really want to touch AC
  • Cheap gadgets that are never updated
  • Security vulnerabilities makes it high risk

Wired Solutions

  • Proprietary systems like Clipsal’s C-Bus
  • Open standards based systems like KNX
  • Custom hardware
  • Expensive
  • 🙁

Clipsal C-Bus light switch

Cabling Benefits

  • More secure than wireless
  • More future proof
  • DC only, no need to touch AC
  • Provides PoE for devices and motors
  • Can still use wireless (e.g. ZigBee) if I want to
  • Convert to proprietary system (C-Bus) if I fail
  • My brother is a certified cabler 🙂

Technology Choice Overview

So it comes down to this.

  • Z-Wave = OUT
  • ZigBee/6LoWPAN = MAYBE IN
  • C-Bus = OUT (unless I screw up)
  • KNX = OUT
  • Arduino, Raspberry Pi = IN

I went with a custom wired system, after all, it seems like a lot more fun…

Stay tuned for Part 2!

Fixing the IoT isn't going to be easy

Posted by Matthew Garrett on October 22, 2016 05:14 AM
A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there's no real amount of diversification that can fix this - malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There's no way anybody can compel a recall, and even if they could it probably wouldn't help. If I've paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That's probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

We're left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn't necessarily help that much - if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we're able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we'd need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn't take terribly strong skills to identify this at import time and block a shipment, so the "obvious" answer is to set up forces in customs who do a security analysis of each device. We'll ignore the fact that this would be a pretty huge set of people to keep up with the sheer quantity of crap being developed and skip straight to the explanation for why this wouldn't work.

Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form "BackdoorPacketCmdLine_Req" and then executed the rest of the payload as root? A portscan's not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match "IP camera" right now, so you're going to need 99 more of me and a year just to examine the cameras. And that's assuming nobody ships any new ones.
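
To make that concrete, here is a sketch of why a scan learns nothing: a UDP service like this stays silent unless it receives the magic bytes, which is indistinguishable from a closed port. The host, port and payload framing below are invented; only the trigger string is quoted from the device above.

# Probe a UDP port with the assumed "magic" trigger string.
import socket

HOST, PORT = "192.0.2.10", 39889               # placeholder camera IP and port
probe = b"BackdoorPacketCmdLine_Req" + b" id"  # trigger plus an assumed payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(probe, (HOST, PORT))
try:
    data, _ = sock.recvfrom(4096)
    print("service answered:", data)
except socket.timeout:
    # No reply: without the magic bytes this is what every port looks like.
    print("no reply - indistinguishable from a closed/filtered port")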

Even that's insufficient. Ok, with luck we've identified all the cases where the vendor has left an explicit backdoor in the code[2]. But these devices are still running software that's going to be full of bugs and which is almost certainly still vulnerable to at least half a dozen buffer overflows[3]. Who's going to audit that? All it takes is one attacker to find one flaw in one popular device line, and that's another botnet built.

If we can't stop the vulnerabilities getting into people's homes in the first place, can we at least fix them afterwards? From an economic perspective, demanding that vendors ship security updates whenever a vulnerability is discovered no matter how old the device is is just not going to work. Many of these vendors are small enough that it'd be more cost effective for them to simply fold the company and reopen under a new name than it would be to put the engineering work into fixing a decade old codebase. And how does this actually help? So far the attackers building these networks haven't been terribly competent. The first thing a competent attacker would do would be to silently disable the firmware update mechanism.

We can't easily fix the already broken devices, we can't easily stop more broken devices from being shipped and we can't easily guarantee that we can fix future devices that end up broken. The only solution I see working at all is to require ISPs to cut people off, and that's going to involve a great deal of pain. The harsh reality is that this is almost certainly just the tip of the iceberg, and things are going to get much worse before they get any better.

Right. I'm off to portscan another smart socket.

[1] UDP connection refused messages are typically ratelimited to one per second, so it'll take almost a day to do a full UDP portscan, and even then you have no idea what the service actually does.

[2] It's worth noting that this is usually leftover test or debug code, not an overtly malicious act. Vendors should have processes in place to ensure that this isn't left in release builds, but ah well.

[3] My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn't a good sign


IoT Can Never Be Fixed

Posted by Josh Bressers on October 22, 2016 02:32 AM
This title is a bit clickbaity, but it's true, just not for the reason you think. Keep reading to see why.

If you've ever been involved in keeping a software product updated, I mean from the development side of things, you know it's not a simple task. It's nearly impossible really. The biggest problem is that even after you've tested it to death and gone out of your way to ensure the update is as small as possible, things break. Something always breaks.

If you're using a typical computer, when something breaks, you sit down in front of it, type away on the keyboard, and you fix the problem. More often than not you just roll back the update and things go back to the way they used to be.

IoT is a totally different story. If you install an update and something goes wrong, you now have a very expensive paperweight. It's usually very difficult to fix IoT devices if something goes wrong, many of them are installed in less than ideal places and some may even be dangerous to get near the device.

This is why very few things do automatic updates. If you have automatic updates configured, things can just stop working one day. You'll probably have no idea it's coming; one day you wake up and your camera is bricked. Of course, it's just as likely things won't break until the moment it's something super important. We all know how Murphy's Law works out.

This doesn't even take into account the problems of secured updates, vendors going out of business, hardware going end of life, and devices that fail to update for some reason or other.

The law of truly large numbers

Let's assume there are 2 million of a given device out there, and that automatic updates are enabled. If we guess that 10% won't get updates for some reason or other, that means around 200,000 vulnerable devices miss the first round of updates. That's one product. With IoT, the law of truly large numbers kicks in. Crazy things will happen because of this.
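
The arithmetic is trivial but worth writing down (the two million devices and 10% miss rate are this post's assumptions; the product count is mine):

# Back-of-the-envelope numbers behind the "truly large numbers" argument.
devices_per_product = 2_000_000
update_miss_rate = 0.10

missed = devices_per_product * update_miss_rate
print(f"one product: {missed:,.0f} devices miss the update")    # 200,000

# Scale it across, say, 500 comparable products on the market:
products = 500
print(f"{products} products: {missed * products:,.0f} devices") # 100,000,000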

The law of truly large numbers tells us that if you have a large enough sample set, every crazy thing that can happen, will happen. Because of this law, the IoT can never be secured.

Now, all this considered, that's no reason to lose hope. It just means we have to take this into consideration. Today we don't build systems that can handle large numbers of crazy events, but once we take this into account we can start to design systems that are robust against these problems. The way we develop these systems and products will need a fundamental change. The way we do things today doesn't work in a large-numbers situation. It's not a matter of maybe fixing this: it has to be fixed, someone will fix it, and the rewards will be substantial.

Comment on Twitter

xfce4-terminal: lots of progress recently

Posted by Kevin Fenzi on October 22, 2016 01:57 AM

Most Linux desktops have their own terminal programs, and Xfce is no exception. There have been a number of releases of xfce4-terminal recently, so I thought I would share some of the changes with people who perhaps haven’t tried it lately.

The biggest change is that it’s been ported to gtk3 and vte291. This is great news, as it means it’s no longer using the ancient, completely unsupported version of vte. Unlimited scrollback has been added, along with Wayland support, tons of bug fixes, translation updates, and fixes for memory leaks.

More specifics in the NEWS file: https://git.xfce.org/apps/xfce4-terminal/tree/NEWS

Give it a try today and see if it handles all your terminal needs.

Friday, 21 oct, was the last day of Latinoware 2016 in the city of Foz do Iguaçu held at the Parque ...

Posted by Wolnei Tomazelli Junior on October 21, 2016 09:39 PM
Friday, 21 Oct, was the last day of Latinoware 2016 in the city of Foz do Iguaçu, held at the Parque Tecnologico Itaipu - PTI. Today was a typical spring day with sunshine and a temperature of 27 degrees Celsius (78 Fahrenheit).
Traditionally, the last day of any event in Brazil features the free distribution of stickers, limited to one per person of each type, while stocks last.
Fortunately, the vast majority of visitors to our stand that day were university students who use Fedora at their universities. They took the opportunity to discover that, through the Games Spin, they can have good fun in their free time with all kinds of games available in Fedora.
The main activity took place at 14:00 with the official event photo with all the participants, and at 18:00 came the closing ceremony.

#fedora #latinoware #linux  

LatinoWare 2016

Flatpak; the road to CI/CD for desktop applications?

Posted by Gerard Braad on October 21, 2016 04:00 PM

In this presentation I introduce Flatpak and how it changes the software distribution model for Linux. In short, it explains the downsides of using packages, how Flatpak solves them, and how to create your own applications and distribute them for use with Flatpak. This presentation was given at the GNOME 3.22 release party, organized by the Beijing GNOME User Group.


<iframe height="768" src="http://gbraad.gitlab.io/presentation-bjgug-flatpak/slides.html" width="1024"></iframe>



If you have any suggestion, please discuss below or send me an email.

Note: the original presentation can be found at GitLab: BJGUG Flatpak. It is based on an earlier presentation about Software Distribution for a new era.

Office Binary Document RC4 CryptoAPI Encryption

Posted by Caolán McNamara on October 21, 2016 10:35 AM
In LibreOffice we've long supported Microsoft Office's "Office Binary Document RC4 Encryption" for decrypting xls, doc and ppt. But somewhere along the line the Microsoft Office encryption scheme was replaced by a new one, "Office Binary Document RC4 CryptoAPI Encryption", which we didn't support. This is what the error dialog of...

"The encryption method used in this document is not supported. Only Microsoft Office 97/2000 compatible password encryption is supported."

...from LibreOffice is telling you when you open, for example, an encrypted xls saved by a contemporary Microsoft Excel version.

I got the newer scheme working this morning for xls, so from LibreOffice 5-3 onwards (I may backport to upstream 5-2 and Fedora 5-1) these variants can be successfully decrypted and viewed in LibreOffice.
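
For context, both schemes are built on the RC4 stream cipher; here is a bare-bones RC4 sketch in Python, purely for illustration. The real CryptoAPI scheme additionally derives a fresh RC4 key for each block of the document stream from a SHA-1 hash of the password and salt, which is not shown here.

# Plain RC4: key scheduling (KSA) followed by the keystream generator (PRGA).
def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                       # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:                          # PRGA: XOR data with keystream
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# RC4 is symmetric, so applying it twice with the same key round-trips.
ct = rc4(b"secret", b"attack at dawn")
assert rc4(b"secret", ct) == b"attack at dawn"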

Consequences of the HACK CAMP 2016 FEDORA + GNOME

Posted by Julita Inca Chiroque on October 21, 2016 08:54 AM

I have been doing install parties to promote FEDORA and the GNOME project for five years now. You can see more details in the posts about the FEDORA 17 Release Party, Linux Camp 2012, GNOME PERU 2013 and GNOME PERU 2014:

For the fifth edition, GNOME PERU 2015, I asked FEDORA to be a sponsor and they kindly accepted; the support was documented in a video and some pictures, as follows:

Fedora and GNOME work differently: in Perú we have many FEDORA ambassadors and only three members of the GNOME Foundation. That is why I have seen more FEDORA events than GNOME ones, and I was trying to reach more GNOME users on FEDORA.

Whenever I do a local or international workshop, or a class at a university, I spread the FEDORA + GNOME word. Some pictures of the DVDs I have given away for free:

I have also seen many students who now work with Linux-related technology, but there was not one real contributor to either the FEDORA or the GNOME community that I had brought to these projects. I have been involved in many Linux groups in Lima too, with no success in bringing in new real contributors. These efforts were not enough…

This year, 2016, I decided to focus directly on developers, and instead of chasing a large audience, I chose to keep an eye on quality.

I found a group in Lima called Hack Space that trains people to code, and this summer they gathered students from many parts of Peru. These were not just university students; they were extraordinary at self-training and also “leaders” in their communities. I decided to take them to a Hack Camp, and by coincidence HackSpace was also planning this activity! So we joined forces to teach coding on Linux through FEDORA and GNOME. This was possible thanks to Alvaro Concha and Martin Vuelta, and of course the sponsorship of the GNOME Foundation. We had 5 days:

After HACK CAMP 2016, I was able to take part in the CONEISC 2016 conference, because a HACK CAMP 2016 participant was part of the organization in his local community. I traveled to Pucallpa (in the Peruvian jungle) to spread the FEDORA + GNOME word (this time sponsored by neither GNOME nor FEDORA).

Because CONEISC 2016 brought together businesses and professionals related to computing and science, there was an opportunity to interact with other entrepreneurs interested in promoting Linux technologies, such as DevAcademy and Backtrack Academy.

I am arranging special editions related to FEDORA and GNOME. Both channels have achieved more than 2,762 views; DevAcademy has 6,750 subscribers and Backtrack Academy has reached 5,393.

  • Another talk, which I got because some participants at CONEISC 2016 enjoyed my workshop, will be presented at the SEISCO event to be held at UPN:

And finally, the most important outcome of all these efforts over more than five years is Martin Vuelta, who supported the organization of HACK CAMP 2016! He is a potential contributor to FEDORA, and because we both attended FUDCon LATAM 2016, we also saw an opportunity to build the GNOME PUNO initiative!😀

There are lots of plans to grow in my local community! Thanks FEDORA and GNOME!❤

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: fedora, GNOME, Julita Inca, Julita Inca Chiroque, Peru effort

Installroot in DNF-2.0

Posted by DNF on October 21, 2016 06:42 AM

I have seen several discussions about the proper behavior of installroot, and often what one user requested, several others declined. But all of them had one thing in common: they wanted a proper description of installroot behavior. Now I can proudly announce that installroot has entered a new era, where the behavior is slightly changed but documented in detail.

Basically what was changed

  • The config file and reposdir are searched for inside the installroot first. If they are not present, they are taken from the host system. But when a path is specified as a command line argument (--config= for the config file or --setopt=reposdir= for reposdir), that path always refers to the host, with no exceptions.
  • pluginconfpath is taken from the installroot.

If you think the reposdir behavior is not so new, you are completely right: it is the same as in YUM.

The documentation was also enhanced with the following examples:

dnf --installroot=<installroot> --releasever=<release> install system-release
    Permanently sets the releasever of the system within the <installroot> directory to the given <release>.

dnf --installroot=<installroot> --setopt=reposdir=<path> --config /path/dnf.conf upgrade
    Upgrades packages inside the installroot from the repository described by --setopt, using the configuration from /path/dnf.conf.

I am really happy to say that additional information can be found in the DNF documentation. Please have fun with DNF.

Getting started with Inkscape on Fedora

Posted by Fedora Magazine on October 21, 2016 04:30 AM

Inkscape is a popular, full-featured, free and open source vector graphics editor available in the official Fedora repositories. It’s specifically tailored for creating vector graphics in the SVG format. Inkscape is great for creating and manipulating pictures and illustrations. It’s also good for creating diagrams, and user interface mockups.


Windmill Landscape illustration created using Inkscape

The screenshots page on the official website has some great examples of what can be done with Inkscape. The majority of the featured images here on the Fedora Magazine are also created using Inkscape, including this recent featured image:


A recent featured image here on the Fedora Magazine that was created with Inkscape

Installing Inkscape on Fedora

Inkscape is available in the official Fedora repositories, so it’s super easy to install using the Software app in Fedora Workstation:


Alternatively, if you are comfortable with the command line, you can install using the following dnf command:

sudo dnf install inkscape

Dive into Inkscape (getting started)

When opening the app for the first time, you are greeted with a blank page, and a bunch of different toolbars. For beginners, the three most important of these toolbars are the Toolbar, the Tools Control Bar, and the Colour Palette:


The Toolbar provides all the basic tools for creating drawings, including tools such as:

  • The rectangle tool, for drawing rectangles and squares
  • The star / polygon (shapes) tool
  • The circle tool, for drawing ellipses and circles
  • The text tool, for adding labels and other text
  • The path tool, for creating or editing more complex or customized shapes
  • The select tool for selecting objects in your drawing

The Colour Palette provides a quick way to set the colour of the currently selected object. The Tools Control Bar provides all the settings for the currently selected tool in the Toolbar. Each time you select a new tool, the Tools Control Bar will update with the settings for that tool:

Drawing shapes

Next, let’s draw a star with Inkscape. First, choose the star tool from the Toolbar, and click and drag on the main drawing area.

You’ll probably notice your star looks a lot like a triangle. To change this, play with the Corners option in the Tools Control Bar and add a few more points. Finally, when you’re done, with the star still selected, choose a colour from the Palette to change the colour of your star:


Next, experiment with some of the other shapes tools in the Toolbar, such as the rectangle tool, the spiral tool and the circle tool. Also play around with some of the settings for each tool to create a bunch of unique shapes.

Selecting and moving objects in your drawing

Now you have a bunch of shapes, and can use the Select tool to move them around. To use the select tool, first select it from the toolbar, and then click on the shape you want to manipulate. Then click and drag the shape to where you want it to be.

When a shape is selected, you can also use the resize handles to scale the shape. Additionally, if you click on a shape that is selected, the resize handles change to rotate mode, allowing you to spin your shape:



Inkscape is an awesome piece of software that is packed with many more tools and features. In the next articles in this series, we will cover more of the features and options you can use to create awesome illustrations and documents.

keepalived: Simple HA

Posted by Kevin Fenzi on October 20, 2016 11:07 PM

We have been using keepalived in Fedora Infrastructure for a while now. It’s a pretty simple, easy-to-use way to do some basic HA. Keepalived can keep track of which machine is “master” for an IP address and quickly fail over and back when moving that IP address around. You can also run scripts on state change. Keepalived uses VRRP and handles updating ARP tables when IP addresses move around. It also supports weighting, so you can prefer one server or another to “normally” hold the master IP and run the scripts.

Right now we are running keepalived on our main koji server pair. We have a koji01 and koji02. Normally 01 is primary/master and has the application IP address on it, so all traffic goes to it. If for some reason it was turned off, the keepalived on 02 would see that, take the IP address, and run a script to become master. If 01 came back up, 02 would see that and transfer back to it. Right now, when the secondary server takes over, the scripts set up a bunch of cron jobs (garbage collection) and kojira (the process that regenerates build roots) on it.
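
As a rough sketch, that kind of master/backup pairing looks something like the keepalived.conf below; the interface name, router ID, address and script paths are made-up placeholders rather than our actual configuration.

# /etc/keepalived/keepalived.conf on the preferred master (sketch only)
vrrp_instance koji {
    state MASTER                 # the peer is configured with state BACKUP
    interface eth0               # assumed NIC carrying the VRRP traffic
    virtual_router_id 51
    priority 150                 # peer gets a lower priority, e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24            # the application IP that moves between hosts
    }
    notify_master "/usr/local/bin/become-master.sh"   # e.g. enable cron jobs/kojira
    notify_backup "/usr/local/bin/become-backup.sh"
}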

We are also using keepalived on some new paired PostgreSQL instances we are working on. More on that in a later blog post. If you need simple HA with an IP address and script(s), keepalived does a banner job.

Factory 2.0 and translation?

Posted by Jean-Baptiste Holcroft on October 20, 2016 10:00 PM

An interesting article about “Factory 2.0” was published on the community blog (I covered the talk on the same subject at Flock this summer in this article).

The problems this effort aims to tackle are the following:

  1. Repetitive human intervention makes the pipeline slow;
  2. Unnecessary serialization makes the pipeline slow;
  3. The pipeline imposes a rigid and inflexible cadence on products;
  4. The pipeline makes assumptions about the content being shipped;
  5. The distro is defined by packages, not “features” (Modularity);
  6. There’s no easy way to trace deps from upstream to product.

A significant share of the problems being tackled are ones translators run into daily:

  1. we ask for the translatable strings to be refreshed, ask for the translations to be merged into the upstream project, and then ask for them to be integrated into the distribution;
  2. to reach the user, every package build and validation step is mandatory even though the code is unchanged;
  3. having to produce new versions triggers many needless rules and checks;
  4. an RPM containing all the languages is required, as opposed to a simple po file;
  5. when we agree on a new term or spot a misuse, we have to make our corrections by resending every string in the package, rather than just the corrected sentence for the affected part of the software;
  6. many programs reuse strings from other programs: Fedora Media Writer uses strings from the website, dnf uses strings from rpm but also from dnf-plugin-system-upgrade, dnf-plugins-core and dnf-plugins-extras, while abrt also uses abrt-gnome, libreport, retrace-server and even strings from the kernel!

So I see a lot in common between the challenges the infrastructure team is trying to face and the ones we endure daily. They are, logically, focusing on their own pain points without addressing ours, which they do not experience day to day.

Translators are very rarely developers, and I doubt the problem will be solved without help. When will a team specialized in internationalization take on the problem?

Today, 20 oct, was the second day of Latinoware with a more pleasant temperature around 25 degrees due...

Posted by Wolnei Tomazelli Junior on October 20, 2016 09:40 PM
Today, 20 Oct, was the second day of Latinoware, with a more pleasant temperature of around 25 degrees due to the absence of sun.
Our stand was visited by a high school teacher interested in adopting the KDE-based Fedora Scientific Spin as a platform to manage and encourage computer use in the school's computer lab. We also had a pleasant conversation with another visitor about using Stoq, business management software for running a small business.
To my surprise and instant happiness, a Fedora user who visited our stand today reported having started to use our Linux distribution after attending my first official Fedora talk, back in 2008 at UDESC - State University of Santa Catarina in the city of São Bento do Sul, when I was only 16 years old.
For the rest of my afternoon I helped with a 64-bit Fedora Workstation installation on a notebook, demonstrated touch support in GNOME 3.22 and helped a future Paraguayan user decide on KDE as his Fedora desktop.

#fedora #latinoware #linux #FozIguacu  

LatinoWare 2016

FCAIC in the House

Posted by Fedora Magazine on October 20, 2016 06:48 PM

The who in the where?

On 3 October I officially[1] started my new role as the Fedora Community Action and Impact Coordinator[2] (abbreviated to FCAIC, pronounced “F-cake”).

The job is like many other roles called “Community Manager” or “Community Lead.” That means there is a focus on metrics and experiences. One role is to try to ensure smooth forward movement of the project towards its goals. Another is to serve as a source of information and motivation. A third is to act as a liaison between the project and significant downstream and sponsoring organizations.

In Fedora, this means I help the Fedora Project Leader. I try to be the yin to his yang, the zig to his zag, or the right hand to his right elbow. In all seriousness, it means that I work on a lot of the non-engineering focused areas of the Fedora Project. While Matthew has responsibility for the project as a whole, I try to think about users and contributors and the mechanics of keeping the project running smoothly.

Who are you?

Really? You don’t know? I was sure I was internet famous by now!

I’m Brian Exelbierd. The short version is that I’ve worked in IT in various roles and in various business management roles for the last 20 years. I’ve worked for the government, universities, and the private sector. My technical work has focused on Unix/Linux systems and open source software. In business, I’ve led teams and served in administrative and management roles.

I joined Red Hat in March 2013 and I’ve had the privilege of working in several areas. I started as a technical writer as it was one area of IT I hadn’t worked in. I moved on to work on content strategy before returning to my programming roots as a software engineer in the Developer Tools group. There I focused my efforts on container tools and Project Atomic. I’ve been craving a community-focused role and jumped at the chance to work full-time with the Fedora community when my predecessor left to work on Hillary Clinton’s presidential campaign.

I am super excited to be in this new role. You can find out more about me on my website. You’ll also find my blog and more examples of my sense of humor.


A good rule of thumb when accepting a new job is to listen first and act second. Unfortunately, a few priorities have made it so I need to do both immediately. To that end, I’ve picked four goals to start on while I’m listening.

Goal 1: Get to know the community

This is a never-ending goal that is highly tied to listening. While I’ve done professional work in several Fedora and Fedora-related areas, I’ve mostly contributed to the Fedora Project in Documentation. Therefore I’m going to be dropping in on meetings, reading logs and mailing lists, and generally working to learn what is going on globally.

To this end, I am attending the FUDCons in both LATAM and APAC. As an American living in Europe (Brno, Czech Republic), I have a great appreciation for the cultures and challenges of NA and EMEA. These FUDCons allow me to get a better understanding of APAC and LATAM[3].

If you’re going to be near me, let me know so we can meet and say “hello.” I’ll endeavor to keep my travel schedule on my website and in the Fedora vacation calendar so you can find me when I’m not at home.

Goal 2: Budget.Next

My predecessor, Remy DeCausemaker, began a process to change the way Fedora manages money. The groundwork for a highly transparent, Fedora-directed, locally managed process has been laid. My immediate goal can be summarized as three objectives:

  1. Catch up on the budget reporting and status for the current year.
  2. Create a workable system for financial reporting that makes maintaining our status easier.
  3. When it is time, help the various sub-projects prepare budget requests for the Council. I’ll also help the Council understand their options and communicate final allocations out.

This will be a slow process. If you’ve been handling financial information or money for the project and haven’t heard from me already, please contact me so I can make sure to find out what you know.

These goals won’t set direction or otherwise radically change processes. If I find something that could be improved or that is broken, expect to hear about new ideas on the Fedora Community Blog as I look for consensus.

Goal 3: FOSCo (and FAMSCo)

The Fedora Ambassador Steering Committee (FAmSCo) and some great folks have been working on creating a new body to increase coordination of our outreach efforts. In this case, outreach means our user-facing and user-focused messaging and components, and the same for our contributor-facing ones.

To that end, they began working on the Fedora Outreach Steering Committee (FOSCo). Originally envisioned as a successor organization for FAmSCo, it appears that after a lot of work, it has been discovered that FAmSCo still has a lot of useful work to do and shouldn’t be disbanded.

I’ve been working with several folks in FAmSCo on various proposals. I think we are getting closer and that we will be able to charge the new body soon. It also sounds like FAmSCo will be able to take the opportunity to focus on the Ambassadors program in greater depth, because some of its coordination duties will now shift to FOSCo.

Goal 4: Fedora Docs Publishing

I’ve been involved in the new docs publishing tool chain. You may have read some of my previous blog entries about the conversations we had at Flock this year. I’ve also written a bit about possible implementations.

I have a personal goal of us having a new publication solution by Fedora 26. I believe a lot of the docs community is on board with this objective as well.

I also have a dream that the new publication system can allow us to have an easier time publishing all kinds of other docs such as the Diversity Inbox and various sub-project materials. I am positive more information will be forthcoming soon.

  1. I’ve been doing some work since September trying to familiarize myself with parts of Fedora where I have contributed and queuing things up for my start.
  2. What a mouthful. I abbreviate it F-CAIC and pronounce it “F-Cake”. I often write it as F-🎂 because emoji thingies make me feel young. You’ll find random references to the position as FCL or Fedora Community Lead in various project documentation; if you’d help me out by correcting those as you see them, I’d be appreciative. The change is not really a change: this is the actual title that was approved by the Council, and I also feel that a well-established community like Fedora doesn’t need a community leader, but a coordinator would be very helpful.
  3. I’ve personally traveled to a number of APAC countries so I feel prepared to listen in Cambodia. Outside of one trip to Cancun, which kind of doesn’t count, I’ve never been to LATAM, so I have a much steeper learning curve there.

PyCon India 2016

Posted by farhaan on October 20, 2016 03:31 PM

Day 0

“This is awesome!” was my first reaction when I boarded my first flight to Delhi. I was having trouble finding proper accommodation; Kushal, Sayan and Chandan helped me a lot with that, and I finally got the honour of bunking with Sayan, Subho and Rtnpro, which I will never forget. So, I landed and went directly to the JNU convention center, where I met the whole Red Hat intern gang. It was fun to meet them all. I had proposed Pagure for the Dev Sprint and pulled in Vivek to do the same.

The dev sprint started and there was no sign of Vivek or Saptak; Saptak is a FOSSASIA contributor and Vivek contributes to Pagure with me. Finally it was my turn to talk about Pagure on stage, and the experience and the energy were beautiful. We got a lot of young, new contributors, and we tried to guide them and get each of them to send at least one PR. One of them was lucky enough to actually make a PR, and it got readily merged.

I met a lot of other contributors and mentors, and each and every project was simply amazing. I wish I could help all of them some day. We also met Paul, who writes code for PyCharm; we had a nice discussion about Vim vs. PyCharm.

Finally the day ended with Vivek, Sayan, Subho, Saptak and me going out to grab some dinner. I bunked with Sayan and Subho and we hacked all night. I was configuring my Weechat and trying out all the available plugins, and trust me, there are a lot of them.

Day 1

I was a session chair in one of the lecture rooms, and it was a crazy experience: from learning how to write firmware for a drone, to using generators to write multi-threaded programs, to working with Salt Stack. The food was really good, but the line for food was as “pythonic” as the code should be.

There were a lot of stalls put up, and I went to all of them and had a chat. My favorite was PyCharm, because Paul promised to teach me some neat tricks for using PyCharm.

The Red Hat and PyLadies booths were also there; both were very informative, raising awareness of certain social issues and of getting women into tech.

We had two keynotes that day, one by BG and the other by VanL, and trust me, both keynotes were so amazing that they make you look at technology from a different viewpoint altogether.

One of the amazing parts of such conferences is the Open Spaces and Lightning Talks. I attended a few open spaces and found them really enthralling. I was waiting for the famous staircase meeting of Dgplug. We met Kushal’s mentor, Sartaj, and he gave us deep insight into what we should contribute to open source, and why. He basically told us that even if one’s code is not used by anyone, one should still write code for the love of doing it.

After this we went for the Dgplug/volunteers dinner at BBQ Nation; it was an eventful evening😉, to be modest.

Day 2 

On the last day of the conference, I remember wondering how a programming language translates into a philosophy, and how that philosophy unites a diverse nation like India. The feeling was amazing, but I could sense the sadness. The sadness of parting from friends who meet once a year. I could now actually match all the IRC nicks with their faces. It just brings a lot more to the table.

At last we all returned to the humdrum of our normal lives with the promise to meet again. But I still wonder how a technology brings comradeship between people from all nooks and corners of life, how it connects a school teacher to a product engineer. This makes me feel that this is more than just a programming language; it is a unique medium that unites people and gives them the power to make things right.

With this thought fhackdroid signs out!

Happy Hacking!

What does Factory 2.0 mean for Modularity?

Posted by Fedora Community Blog on October 20, 2016 02:32 PM

This blog now has a drop-down category called Modularity. But many arteries of Modularity lead into a project called Factory 2.0; the two are, in fact, pretty much inseparable. In this post, we'll talk about the six problems Factory 2.0 sets out to solve. One of them is Modularity itself; the other five need to be solved before Modularity can really live.

The origins of Factory 2.0 go back a few years, to when Matthew Miller started the conversation at Flock. The first suggested names were “Fedora Rings”, “Envs and Stacks”, and “Alephs”.

What problems did Factory 2.0 want to solve?

#1 Repetitive human intervention makes the pipeline slow.
#2 Unnecessary serialization makes the pipeline slow.
#3 The pipeline imposes a rigid and inflexible cadence on products.
#4 The pipeline makes assumptions about the content being shipped.
#5 The distro is defined by packages, not “features” (Modularity).
#6 There’s no easy way to trace deps from upstream to product.


The great news is… if we had problems before, they're about to get a lot worse. Does the Lego analogy mean anything to you? That is how Modularity would look without Factory 2.0.

What Factory 2.0 is not

Factory 2.0 is not a single web application.

Factory 2.0 is not a rewrite of our entire pipeline.

Factory 2.0 is not a silver bullet.

Factory 2.0 is not a silver platter.

Factory 2.0 is not just Modularity.

Factory 2.0 is not going to be easy.

Does Modularity mean anything without Factory 2?

Does Factory 2 mean anything without Modularity?

Problem Number 1: Automating Throughput

Repetitive human intervention makes the pipeline slow. This one covers a lot of ground: rebuild automation, compose automation, and release automation.

Rebuilds and Composes

Builds: For this we'd like to build a workflow layer on top of koji called “the orchestrator” (or the build orchestrator). The concept was originally entangled with modularity-specific considerations, but we'd like it to be more general.
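To make the idea concrete, here is a minimal sketch of what such an orchestrator could look like: a message-bus consumer that notices finished koji builds and schedules rebuilds of dependent packages, instead of waiting for a human to request them. The message topic and field names reflect the buildsys messages as I understand them, and direct_dependents() is a hypothetical helper; treat this as an illustration of the shape of the thing, not a design.

```python
# Illustrative sketch of a build orchestrator as a fedmsg consumer.
# The message field names are assumptions, and direct_dependents() is a
# hypothetical helper that would need real dependency data behind it.
import fedmsg
import koji

KOJI_HUB = 'https://koji.fedoraproject.org/kojihub'
COMPLETE = 1  # koji's state code for a successfully finished build


def direct_dependents(package):
    """Hypothetical: return source packages that BuildRequire `package`."""
    raise NotImplementedError


def main():
    session = koji.ClientSession(KOJI_HUB)  # a real deployment would authenticate
    for name, endpoint, topic, msg in fedmsg.tail_messages():
        if not topic.endswith('buildsys.build.state.change'):
            continue
        body = msg['msg']
        if body.get('new') != COMPLETE:
            continue
        # A packager used to notice this build and file rebuilds by hand;
        # here the rebuilds are scheduled as soon as the message arrives.
        for dep in direct_dependents(body['name']):
            src = 'git://pkgs.fedoraproject.org/rpms/%s' % dep
            session.build(src, 'rawhide')


if __name__ == '__main__':
    main()
```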


Take pungi and break it out into an ad hoc process alongside the buildsystem.

In the best scenario, compose artifacts are built before we ask for them.


We can do two-week Fedora Atomic Host releases now. Hooray!


Can we reconcile that with the mainline compose/QA/release process? The problem is much more intense for Red Hat just due to volume. We have uncovered ground in Bodhi for automation. The karma system is a predecessor, but it relies on humans. Can we fast-track some components based on Taskotron results?

How can we specify an (automated) policy for setting different release cadences, without hard-coding it?
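One possible answer is to express cadence and gating rules as data that a generic release tool evaluates, rather than burying them in per-product scripts. The sketch below is purely illustrative: the product names, field names, and thresholds are all invented; it only shows how Taskotron results and karma could gate different cadences from a single policy table.

```python
# Invented, illustrative policy format: cadence and gating as data,
# evaluated by one generic function instead of per-product hard coding.
POLICY = {
    'fedora-atomic-host': {'cadence_days': 14, 'needs_karma': False},
    'fedora-server': {'cadence_days': 180, 'needs_karma': True},
}


def may_release(product, days_since_last, taskotron_passed, karma):
    """Decide whether a product may cut a release right now."""
    rules = POLICY[product]
    if days_since_last < rules['cadence_days']:
        return False
    if not taskotron_passed:
        return False
    return karma >= 2 if rules['needs_karma'] else True


# Atomic Host goes out every two weeks on passing automated tests alone,
# while the slower product also waits for human karma.
assert may_release('fedora-atomic-host', 14, True, karma=0)
assert not may_release('fedora-server', 200, True, karma=0)
```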

Problem Number 2: Pipeline Serialization

Unnecessary serialization makes the pipeline slow. This is less a problem for Fedora’s Infrastructure than it is for the Red Hat-internal PnT DevOps environment: things happen, unnecessarily, in serial. One big piece we (will) share here is the Openshift Build Service (OSBS) for building containers. We’re going to need to crack the autosigner.py nut to get around new problems (assuming we “go big” with containers).

Internally, we’re going to be using a special build key for this — which we’ll treat as semantically different from the gold key. Let’s consider doing the same in Fedora.

Problem Number 3: Flexible Cadence

The pipeline imposes a rigid and inflexible cadence on “products”.


Related to the previous point about automating releases. To a first approximation, “the pipeline is as fast as the pipeline is.”


Think about the different EOL discussions for the different Editions. Beyond that, a major goal of modularity is “independent lifecycles”. What does that mean in practice?

Let’s talk about pkgdb2 and its collections model.

Problem Number 4: Artifact Assumptions

The pipeline makes assumptions about the content being shipped. Remember we asked some Red Hat stakeholders what they wanted out of a next generation pipeline? There were some real gems in there. My favorite was: “I want to be able to build any content, in any format, without changing anything.”

This is fine


This one is an odd duck among the problem statements: qualitative, not quantitative. Do we have to do gymnastics every time we add a new format? Or can we make that easier over time?

Autocloud and Two-Week Atomic, OSBS, Flatpak, snaps, rkt containers, etc… We can do anything. But how easily can we do it? Which leads us to….

The pernicious hobgoblin of technical debt: Microservices (consolidate around responsibility!), reactive services, idempotent services, infrastructure automation…

Problem number 5: Modularity

All Roads Lead to Rome. The distro is defined by packages, not “features”. There are some specific pieces of modularity work (the module build service, BPO, etc…). Really, this is where we tie all the threads together. Each has a certain value on its own, but if we can't “do modularity” they won't have the same effect.

Building modules


See the Modularity Infrastructure page. Then, visit the dev instance of the build pipeline overview app.

Problem Number 6: Dependency Chain

There’s no easy way to trace deps from upstream to product (through all intermediaries).

We can model deps of RPMs today, kinda. We can model deps of Docker containers in OSBS.

Productmd manifests produced by pungi contain the deps of all our images. So that's great. But there's no easy way to traverse deps all the way from an upstream component to the end artifacts.
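For instance, here is roughly how those manifests can be read back with the productmd library. The compose path is just an example, and the exact nesting of the RPM metadata is from memory, so treat the details as assumptions rather than a recipe.

```python
# Rough sketch: reading RPM metadata back out of a pungi compose with
# productmd.  The path is an example; the nesting of rpms.json is from memory.
import productmd.compose

compose = productmd.compose.Compose(
    '/mnt/koji/compose/rawhide/latest-Fedora-Rawhide')

# rpms.json maps variant -> arch -> source nevra -> binary nevra -> metadata,
# which tells you what shipped, but not where it came from upstream.
for variant, arches in compose.rpms.rpms.items():
    for arch, sources in arches.items():
        for srpm, binaries in sources.items():
            print(variant, arch, srpm, '->', len(binaries), 'binary rpms')
```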

Let’s expand pdc-updater.
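If pdc-updater kept those relationships current in PDC, then answering “who is affected if this component changes?” could be a simple walk over PDC's REST API. Here is a best-effort sketch; the endpoint, query parameters, and field names are my assumptions about the release-component-relationships API, not a documented recipe.

```python
# Best-effort sketch of traversing the dependency chain via PDC's REST API.
# Endpoint, query parameters, and field names are assumptions.
import requests

PDC = 'https://pdc.fedoraproject.org/rest_api/v1'


def dependents(component, release, relationship='RPMBuildRequires'):
    """Yield components in `release` that declare `relationship` on `component`."""
    url = '%s/release-component-relationships/' % PDC
    params = {'to_component_name': component,
              'to_component_release': release,
              'type': relationship}
    while url:
        page = requests.get(url, params=params).json()
        for rel in page['results']:
            yield rel['from_component']['name']
        url = page.get('next')  # PDC paginates; the 'next' URL carries the query
        params = None


def affected_by(component, release):
    """Breadth-first walk: everything reachable from one upstream component."""
    seen, queue = set(), [component]
    while queue:
        for dep in dependents(queue.pop(0), release):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```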


And then we can use that data for great justice. 


There’s an opportunity to do something very cool with how we make the distro. Please tell us where we’re wrong. Hop in #fedora-modularity and #fedora-admin to join the party.

The so-called “Factory 2.0”

Presented at Flock 2016 by @ralphbean.

Slides available at http://threebean.org/presentations/factory2-flock16/

The post What does Factory 2.0 mean for Modularity? appeared first on Fedora Community Blog.

All systems go

Posted by Fedora Infrastructure Status on October 20, 2016 01:48 PM
Service 'Koschei Continuous Integration' now has status: good: Everything seems to be working.

varnish-5.0, varnish-modules-0.9.2 and hitch-1.4.1, packages for Fedora and EPEL

Posted by Ingvar Hagelund on October 20, 2016 10:34 AM

The Varnish Cache project recently released varnish-5.0, and Varnish Software released hitch-1.4.1. I have wrapped packages for Fedora and EPEL.

varnish-5.0 has configuration changes, so the updated package has been pushed to rawhide, but it will not replace the ones currently in EPEL or in Fedora stable. Those who need varnish-5.0 for EPEL may use my COPR repos at https://copr.fedorainfracloud.org/coprs/ingvar/varnish50/. They include the varnish-5.0 and matching varnish-modules packages, and are compatible with EPEL 5, 6, and 7.

hitch-1.4.1 is configuration-file compatible with earlier releases, so packages for Fedora and EPEL are available in their respective repos, or will be once they trickle down to stable.

As always, feedback is warmly welcome. Please report via Red Hat’s Bugzilla or, while the packages are cooking in testing, Fedora’s Package Update System.

Varnish Cache is a powerful and feature rich front side web cache. It is also very fast, and that is, fast as in powered by The Dark Side of the Force. On steroids. And it is Free Software.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps, to massive IPv4/IPv6 multi data center media hosting, and everything through container solutions, in-house, cloud, and data center, contact us at www.redpill-linpro.com.

Choosing smartcards, readers and hardware for the Outreachy project

Posted by Daniel Pocock on October 20, 2016 07:25 AM

One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image.

Interns, and anybody who decides to start using the project (it is already functional for command line users) need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice.

For X.509 use cases, such as VPN access, there is a range of choices. I recently obtained one of the SmartCard HSM cards; Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is its elliptic curve cryptography (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors to consider are most easily explained with a table:

|  | On disk | Smartcard reader without PIN-pad | Smartcard reader with PIN-pad |
| --- | --- | --- | --- |
| Software | Free/open | Mostly free/open | Proprietary firmware in reader |
| Key extraction | Possible | Not generally possible | Not generally possible |
| Passphrase compromise attack vectors | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Exploiting firmware bugs over USB (only sophisticated attackers) |
| Other factors | No hardware | Small, USB key form-factor | Largest form factor |

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:

  • Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
  • Even better if there is no wired networking either
  • Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
  • Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
  • No hard disks required
  • Having built-in SD card readers or the ability to add them easily

SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.

It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.
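Verifying that the copies still agree is easy to automate. Below is a minimal sketch, assuming the key material is stored as ordinary files on each mounted card; the paths in the usage comment are examples only.

```python
# Minimal sketch: confirm that several SD-card copies of the master key
# material are still identical.  Paths are examples only.
import hashlib
import sys


def digest(path):
    """SHA-256 of a file, read in chunks so large backups are fine."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()


def copies_match(paths):
    digests = {path: digest(path) for path in paths}
    for path, d in sorted(digests.items()):
        print(d, path)
    return len(set(digests.values())) == 1


if __name__ == '__main__':
    # e.g. check_copies.py /media/sd1/master.key /media/sd2/master.key
    sys.exit(0 if copies_match(sys.argv[1:]) else 1)
```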

For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Bodhi 2.3.0 beta

Posted by Randy Barlow on October 20, 2016 04:05 AM

Hello Planet Fedora! I'm pleased to announce that a Bodhi 2.3.0 beta has been deployed to Fedora's staging infrastructure.

You can read the draft release notes if you'd like to learn about what has changed since Bodhi 2.2.4. The beta is currently deployed to staging if you would like to help with testing. If you would like to try the new packages for yourself, you can grab the beta from my bodhi-pre-release Copr repository.


Thanks to all the Bodhi contributors for helping to create this release. It was truly a group effort, with contributions from bug reporters, testers and programmers alike.

If you do find an issue with Bodhi, please file an issue or a pull request.

Yesterday, Wednesday 19 oct, was the first day of LatinoWare thirteen edition hosted in the city of ...

Posted by Wolnei Tomazelli Junior on October 20, 2016 03:04 AM
Yesterday, Wednesday 19 October, was the first day of the thirteenth edition of LatinoWare, hosted in the city of Foz do Iguaçu in Paraná state, with 5,155 participants and a temperature of 36ºC. This is currently the biggest free software event in Brazil.
Early in the morning we took the LatinoWare bus to the Itaipu Technological Park. We set up the Fedora Project stand with a banner and six types of stickers as prizes for the winners of a traditional quiz; all the ambassadors at our stand improvised questions about Fedora.
Between rounds of our quick quiz, many people came by our stand to ask questions about using Fedora or how to contribute to some of our projects; today we pointed one more person toward starting to contribute to the Infrastructure team.

#fedora #latinoware #linux #partiu

LatinoWare 2016

FCAIC in the House

Posted by Brian "bex" Exelbierd on October 20, 2016 12:00 AM

The who in the where?

On 3 October I officially1 started my new role as the Fedora Community Action and Impact Coordinator2 (abbreviated to FCAIC, pronounced “F-cake”).

The job is like many other roles called “Community Manager” or “Community Lead.” That means there is a focus on metrics and experiences. One role is to try to ensure smooth forward movement of the project towards its goals. Another is to serve as a source of information and motivation. A third is to act as a liaison between the project and significant downstream and sponsoring organizations.

In Fedora, this means I help the Fedora Project Leader. I try to be the yin to his yang, the zig to his zag, or the right hand to his right elbow. In all seriousness, it means that I work on a lot of the non-engineering focused areas of the Fedora Project. While Matthew has responsibility for the project as a whole, I try to think about users and contributors and the mechanics of keeping the project running smoothly.

Read more over at the Fedora Magazine where this was originally posted.

  1. I've been doing some work since September, trying to familiarize myself with the parts of Fedora where I haven't contributed and queuing things up for my start.

  2. What a mouthful. I abbreviate it F-CAIC and pronounce it F-Cake. I often write it as F-🎂 because emoji thingies make me feel young. You'll find random references to the position as FCL or Fedora Community Lead in various project documentation. If you'd help me out by correcting those as you see them, I'd be appreciative. The change is not really a change: this is the actual title that was approved by the Council. I also feel that a well-established community like Fedora doesn't need a community leader, but a coordinator can be very helpful.

Attending FUDcon LATAM 2016

Posted by Julita Inca Chiroque on October 19, 2016 11:45 PM

I will share my experience of FUDcon 2016, held in Puno last week. There were three core days, and two more days to visit around.

Day 0: 

Martin & I departed from Lima to Puno around noon to find FEDORA guys as Chino, Athos, Wolnei and Brian. After walking around to reach our hotel, we found FEDORA posters on streets. Then we shared a traditional food called “Cheese Fries” and went together to the “hair comb mission” during the afternoon. Got altitude sickness at night😦

Day 1:

FEDORA contributors were introduced in the auditorium and then the BarCamp took place. I helped to count the votes in “Dennis Spanish”. We shared lunch at a fancy restaurant. Then I attended three talks according to my interests: Kiara's, since I do research with IoT; David's, because I am a university professor; and Chino Soliard's talk about SELinux. At night we shared a delicious “pizza de la casa” served by “Sylvia” 😀

Day 2:

During the morning I tried to cheer up the audience by giving away FEDORA stickers, pens and pins as prizes in the gaps between the presentations of Eduardo Mayorga (Fedora Documentation Project), Jose Reyes (How to start with FEDORA) and Martin Vuelta (Python, Arduino y FEDORA). For lunch I chose “Trucha Frita”, and during the afternoon I presented “Building GNOME with JHBUILD”. I introduced people to the GNOME project and encouraged them to install FEDORA 25. For dinner we shared “Chaufa” and bills from many countries. Thanks to Mayorga for guiding my steps to the FEDORA planet.

Day 3:

This was a pretty long day…

It started with me helping to translate Brian's talk, as Mayorga and Kiara also did. People were interested in clarifying the differences among FEDORA, CentOS and UBUNTU. Brian told us about the ATOMIC project and kindly explained the current status of FEDORA. (We are growing! At least by one percent 🙂)

During lunch, local people got in touch with me about starting the GNOME PUNO project; I found enthusiastic young people that I had not imagined I would find! I decided to give some minutes during my second talk, DNF (Dandified YUM), to presenting the GNOME customization of the login screen. After my talk the GNOME PUNO effort got two more members! We decided to document our attempts by registering our tasks on our blogs! 😀

For the closing event we received some presents from the organizers and we shared our passion for the FEDORA project with the locals. We also celebrated with a typical dance of Puno.

For dinner we shared “Pollo a la brasa”, and then I helped in the preparation of the barbecue for the next day!… The following pictures belong to Tonet and Abadh:

Day 4:

This was another long day, but a touristic one. We started by visiting Sillustani (a pre-Inca cemetery on the shores of Lake Umayo). Then we shared the barbecue with the group for lunch. In the afternoon we went to the Chucuito fertility temple (famous for its phalluses). Special thanks to Gonzalo from FEDORA Chile, who kindly agreed to share his photo:

At night we visited the ESCUELAB PUNO effort and shared our last dinner together 😦

Day 5:

Last day of the journey, and this is the picture of the survivors and our last breakfast at the CozyLucky hotel. Then we went to the port to visit the floating straw islands of the Uros. The goodbye was recorded in an interview led by Abadh, and we said goodbye to our new FEDORA friends! I renewed my vows to FEDORA and GNOME, as they gave me more than Linux knowledge; they gave me unforgettable and unique experiences in my life!

Thank you so much to the FEDORA community. My impressions:

  • Brian: humble and wise (Escuelab Puno), helpful and mischievous (.planet)
  • Dennis: greedy (three family-size pizzas and one entire chicken), sweet (Spanish talk)
  • Echevemaster: father of newcomers (teaching packaging to Bolivians)
  • Itamar: quiet, smart and funny (we need to translate his Spanish to Spanish)
  • Mayorga: a genius kid who doesn't much like pictures of himself
  • Martin: intelligent, smart, my support and my true friend
  • Wolnei: only happy with pizza
  • Tonet: funny, clever and hardworking guy!
  • Aly: a little funny, willing to learn and a good support for Tonet
  • Neider: a genuine crazy Linux guy
  • Chino Soliard: cheerful, talkative and very social guy
  • Jose: enthusiastic young Linux guy
  • Kiara: a Linux geek girl
  • Athos: spreading good energy to others
  • Antonio and Gonzalo: great cooks
  • Abadh: hard hard hard working Peruvian guy
  • David: kind, helpful, inspirational and sweet
  • Gonzalo Nina and Rigoberto: smart, practical and responsible guys
  • Yohan and Jose Laya: skillful and talented guys

and especially you, for taking the time to read my post :3

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: dnf, fedora, FEDORA LATAM, FUDCon, FUDcon 2016, gnome puno, install jhbuild, Julita Inca, Julita Inca Chiroque, Puno