Fedora People

All systems go

Posted by Fedora Infrastructure Status on October 28, 2016 12:44 AM
Service 'Package Updates Manager' now has status: good: Everything seems to be working.

Inkscape: Adding some colour

Posted by Fedora Magazine on October 28, 2016 12:34 AM

In our previous Inkscape article, we covered the absolute basics of getting started with Inkscape — installing it, and how to create basic shapes and manipulate them. We also covered changing the colour of Inkscape objects using the Palette. While the Palette is great for quickly changing the colour of your objects from a pre-defined list, most of the time you will need more control over the colours of your objects. This is where we use one of the most important dialogs in Inkscape — the Fill and Stroke dialog.

A quick note about the animations in this post: some of the colours in the animations appear banded. This is just an artifact of the way the animations are created. When you try this out in Inkscape you will see nice smooth gradients of colour.

Using the Fill and Stroke dialog

To open the Fill and Stroke dialog in Inkscape, choose Object > Fill and Stroke from the main menu. Once opened, the main three tabs of this dialog allow you to inspect and change the Fill colour, Stroke colour, and Stroke style of the currently selected object.


In Inkscape, the Fill is the main colour given to the body of an object. The Stroke is an optional outline of that object. The stroke also has additional styles — configurable in the Stroke style tab — allowing you to change the thickness of the stroke, create a dotted outline, or add rounded corners to your stroke. In this next animation, I change the fill colour of the star, then change the stroke colour and tweak the thickness of the stroke:


Adding and Editing a gradient

A gradient can also be the Fill (or the stroke) of an object. To quickly set a gradient fill from the Fill and Stroke dialog, first choose the Fill tab, then pick the linear gradient option:


To edit our gradient further, we need to use the specialised Gradient Tool. Choose the Gradient tool from the toolbar, and some additional gradient editing handles will appear on your selected shape. Moving the handles around will change the positioning of the gradient. If you click on an individual handle, you can also change the colour of that handle in the Fill and Stroke dialog. To add an additional stop in your gradient, double click on the line connecting the handles, and a new handle will appear.


That covers the basics of adding some more colour and gradients to your Inkscape drawings. The Fill and Stroke dialog also has many other options to explore, like pattern fills, different gradient styles, and many different stroke styles. Also check out the additional options in the Tools control bar for the Gradient Tool to see how you can tweak gradients in different ways too.

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on October 27, 2016 11:18 PM
Service 'Package Updates Manager' now has status: scheduled: Bodhi upgrade in progress

Bodhi 2.3.1 released

Posted by Bodhi on October 27, 2016 09:40 PM

Bodhi 2.3.1 fixes #1067,
such that edited updates now tag new builds into the pending_signing_tag
instead of the pending_testing_tag. This is needed for automatic signing
gating to work.

Bodhi 2.3.0 released

Posted by Bodhi on October 27, 2016 02:49 PM

Bodhi 2.3.0 is a feature and bug fix release.

Features

  • The package input field is now autofocused when creating new updates #876.
  • Releases now have a pending_signing_tag 3fe3e219.
  • fedmsg notifications are now sent during ostree compositions b972cad0.
  • Critical path updates will have autopush disabled if they receive negative karma b1f71006.
  • The e-mail templates reference dnf for Fedora and yum for Enterprise Linux 1c1f2ab7.
  • Updates are now obsoleted if they reach the unstable threshold while pending f033c74c.
  • Bodhi now gates Updates based on whether they are signed yet or not #1011.

Bug fixes

  • Candidate builds and bugs are no longer duplicated while searching #897.
  • The Bugzilla connection is only initialized when needed 950eee2c.
  • A sorting issue was fixed on the metrics page so the data is presented correctly 487acaaf.
  • The Copyright date in the footer of the web interface is updated 1447b6c7.
  • Bodhi will comment with the required time instead of the elapsed time on updates #1017.
  • Bodhi will only comment once to say that non-autopush updates have reached the threshold #1009.
  • /masher/ is now allowed in addition to /masher for GET requests cdb621ba.


Bodhi now depends on fedmsg-atomic-composer >= 2016.3, which addresses a few issues during mashing.

Development improvements

Bodhi 2.3.0 also has a few improvements to the development environment that make it easier to
contribute to Bodhi or improve Bodhi's automated tests:

  • Documentation was added to describe how to connect development Bodhi to staging Koji 7f3b5fa2.
  • An unused locked_date_for_update() method was removed b87a6395.
  • The development.ini.example base_address was changed to localhost so requests would be allowed 0fd5901d.
  • The setup.py file has more complete metadata, making it more suitable for submission to PyPI 5c201ac2.
  • The #bodhi and #fedora-apps channels are now documented in the readme file 52093069.
  • A new test has been added to enforce PEP-8 style and a few modules have been converted to conform bbafc9e6.

Release contributors

The following contributors authored patches for 2.3.0:

  • Josef Sukdol
  • Julio Faracco
  • Patrick Uiterwijk
  • Randy Barlow
  • Richard Fearn
  • Trishna Guha

Deckard and LibreOffice

Posted by Caolán McNamara on October 27, 2016 12:44 PM
LibreOffice reuses the same .ui format that GTK uses. This suggests that Deckard could be used to preview translations of those dialogs.

Testing this out shows (as above) that it can be made to work. A few problems though:

1. We have various placeholder widgets which don't work in Deckard because the widgets don't exist in GTK, so dialogs that use them can't be displayed; something falls over with e.g. "Invalid object type 'SvSimpleTableContainer'". I had hoped I'd get placeholders by default on failure.
2. Our .po translation entries for the dialog strings all have autogenerated msgctxt fields which don't correspond to the blank default of the .ui files, so the msgctxt fields have to be removed, then msguniq run to remove the resulting duplicates, and the result can then be run through msgfmt to create a .mo that works with Deckard to show web previews (sketched below).
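A rough sketch of that clean-up pipeline using the GNU gettext tools (the sed expression and the cui.po file name are illustrative assumptions, not the exact commands used):

# drop the autogenerated msgctxt lines, merge the resulting duplicate entries,
# then compile a .mo that Deckard can use for web previews
sed '/^msgctxt /d' cui.po | msguniq | msgfmt -o cui.mo -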

React.js is pretty cool

Posted by Aurélien Bompard on October 27, 2016 11:44 AM

These days I’ve been working on Fedora Hubs, a Python (Flask) application with a React.js frontend. I know Python quite well now, but this is the first time I’ve dabbled in the React.js framework. I must say I’m pretty impressed. It solves a lot of the issues I’ve had with dynamic web development over the last few years. And it manages to make writing Javascript almost enjoyable, which is no small feat!😉

I’m still wrestling with Webpack and ES6, but I’ll get there eventually. React is really a great way to build UIs. Plus some people are writing the Bootstrap components in React, so this is very promising.

Building and Booting Upstream Linux and U-Boot for Raspberry Pi 2/3 ARM Boards

Posted by Christopher Smart on October 27, 2016 10:30 AM

My home automation setup will make use of Arduinos and also embedded Linux devices. I’m currently looking into a few boards to see if any meet my criteria. Previously I looked at the Orange Pi One, now I’m looking at the Raspberry Pi 2 (which is compatible with the 3).

The most important factor for me is that the device must be supported in upstream Linux (preferably stable, but mainline will do) and U-Boot. I do not wish to use any old, crappy, vulnerable vendor trees!

The Raspberry Pi needs little introduction. It’s a small ARM device, created for education, that’s taken the world by storm and is used in lots of projects.

Raspberry Pi 2, powered by USB with 3v UART connected


The Raspberry Pi actually has native support for booting a kernel; you don’t have to use U-Boot. However, one of the neat things about U-Boot is that it can provide netboot capabilities, so that you can boot your device from images across the network (we’re just going to use it to boot a kernel and initramfs, however).

One of the other interesting things about the Raspberry Pi is that there are lots of ways to tweak the device using a config.txt file.

The Raspberry Pi 3 has a 64-bit CPU; however, it is probably best run in 32-bit mode (as a Raspberry Pi 2), as 64-bit userland is not particularly advanced in the ARM world yet.

Fedora 25 will finally support Raspberry Pi 2 and 3 (although not all peripherals will be supported right away).

Connecting UART

The UART on the Raspberry Pi uses the GND, TX and RX connections, which are on the GPIO pins (see above). Plug the corresponding leads of a 3.3V UART cable onto these pins, then plug the cable into a USB port on your machine.

Your device will probably be /dev/ttyUSB0, but you can check this with dmesg just after plugging it in.

Now we can simply use screen to connect to the UART, but you’ll have to be in the dialout group.

sudo gpasswd -a ${USER} dialout
newgrp dialout
screen /dev/ttyUSB0 115200

Note that you won’t see anything just yet without an SD card that has a working bootloader. We’ll get to that shortly!

Partition the SD card

First things first, get yourself an SD card.

The card needs to have an msdos partition table with a smallish boot partition (formatted FAT32). The binary U-Boot file will sit there, called kernel.img, along with some other bootloader files. You can use the rest of the card for the root file system (but we’ll boot an initramfs, so it’s not needed).

Assuming your card is at /dev/sdx (replace as necessary, check dmesg after plugging it in if you’re not sure).

sudo umount /dev/sdx* # makes sure it's not mounted
sudo parted -s /dev/sdx \
mklabel msdos \
mkpart primary fat32 1M 30M \
mkpart primary ext4 30M 100%

Now we can format the partitions (upstream U-Boot supports ext3 on the boot partition).
sudo mkfs.vfat /dev/sdx1
sudo mkfs.ext4 /dev/sdx2

Next, mount the boot partition to /mnt, this is where we will copy everything to.
sudo mount /dev/sdx1 /mnt

Leave your SD card plugged in, we will need to copy the bootloader to it soon!

Upstream U-Boot Bootloader

Install the arm build toolchain dependencies.

sudo dnf install gcc-arm-linux-gnu binutils-arm-linux-gnu

We need to clone the upstream U-Boot Git tree. Note that I’m checking out the release directly (-b v2016.09.01), but you could leave this off to get master, or change it to a different tag if you want.
cd "${HOME}"
git clone --depth 1 -b v2016.09.01 git://git.denx.de/u-boot.git
cd u-boot

There are default configs for both Raspberry Pi 2 and 3, so select the one you want.
# Run this for the Pi 2
CROSS_COMPILE=arm-linux-gnu- make rpi_2_defconfig
# Run this for the Pi 3
CROSS_COMPILE=arm-linux-gnu- make rpi_3_defconfig

Now, compile it.
CROSS_COMPILE=arm-linux-gnu- make -j$(nproc)

Now, copy the u-boot.bin file onto the SD card, and call it kernel.img (this is what the bootloader looks for).

sudo cp -iv u-boot.bin /mnt/kernel.img

Proprietary bootloader files

Sadly, the Raspberry Pi cannot boot entirely on open source software; we need to get the proprietary files from Broadcom and place them on the SD card as well.

Clone the Raspberry Pi Foundation’s GitHub repository.
cd "${HOME}"
git clone --depth 1 https://github.com/raspberrypi/firmware

Copy the minimum set of required files to the SD card.
sudo cp -iv firmware/boot/{bootcode.bin,fixup.dat,start.elf} /mnt/

Finally, unmount the SD card.
sync && sudo umount /mnt

OK, now our bootloader should be ready to go.

Testing our bootloader

Now we can remove the SD card from the computer and plug it into the powered off Raspberry Pi to see if our bootloader build was successful.

Switch back to your terminal that’s running screen and then power up the Pi. Note that the device will try to netboot by default, so you’ll need to hit the Enter key to interrupt it when U-Boot starts its autoboot countdown.

(Or you can just repeatedly hit the Enter key in the screen console while you turn the device on.)

Note that if you don’t see anything, swap the RX and TX pins on the UART and try again.

With any luck you will then get to a U-Boot prompt where we can check the build by running the version command. It should have the U-Boot version we checked out from Git and today’s build date!

Raspberry Pi running U-Boot


Hurrah! If that didn’t work for you, repeat the build and writing steps above. You must have a working bootloader before you can get a kernel to work.

If that worked, power off your device and re-insert the SD card into your computer and mount it at /mnt.

sudo umount /dev/sdx* # unmount everywhere first
sudo mount /dev/sdx1 /mnt

Creating an initramfs

Of course, a kernel won’t be much good without some userspace. Let’s use Fedora’s static busybox package to build a simple initramfs that we can boot on the Raspberry Pi.

I have a script that makes this easy, you can grab it from GitHub.

Ensure your SD card is plugged into your computer and mounted at /mnt, then we can copy the file on!

cd ${HOME}
git clone https://github.com/csmart/custom-initramfs.git
cd custom-initramfs
./create_initramfs.sh --arch arm --dir "${PWD}" --tty ttyAMA0

This will create an initramfs for us in your custom-initramfs directory, called initramfs-arm.cpio.gz. We’re not done yet, though; we need to convert this into the format supported by U-Boot (we’ll write it directly to the SD card).

gunzip initramfs-arm.cpio.gz
sudo mkimage -A arm -T ramdisk -C none -n uInitrd \
-d initramfs-arm.cpio /mnt/uInitrd

Now we have a simple initramfs ready to go.

Upstream Linux Kernel

Clone the mainline Linux tree (this will take a while). Note that I’m getting the latest tagged release by default (-b v4.9-rc1) but you could leave this off or change it to some other tag if you want.

cd ${HOME}
git clone --depth 1 -b v4.9-rc1 \
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux

Or, if you want to try linux-stable, clone this repo instead.
git clone --depth 1 -b v4.8.4 \
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git linux

Now go into the linux directory.
cd linux

Building the kernel

Now we are ready to build our kernel!

Load the default kernel config for the Raspberry Pi (bcm2835).
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make bcm2835_defconfig

If you want, you could modify the kernel config here, but it’s not necessary.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make menuconfig

Build the kernel image and device tree blob.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make -j$(nproc) zImage dtbs

Copy the kernel and the device tree file onto the boot partition (still mounted at /mnt).
sudo cp -iv arch/arm/boot/zImage /mnt/
sudo cp -iv arch/arm/boot/dts/bcm2836-rpi-2-b.dtb /mnt/

Bootloader config

Next we need to make a bootloader file, boot.cmd, which tells U-Boot what to load and boot (the kernel, device tree and initramfs).

The bootargs line says to output the console to serial and to boot from the ramdisk. Variables are used for the memory locations of the kernel, dtb and initramfs.

Note, if you want to boot from the second partition instead of an initramfs, change the root argument to root=/dev/mmcblk0p2 (or another partition as required).

cat > boot.cmd << EOF
fatload mmc 0 \${kernel_addr_r} zImage
fatload mmc 0 \${fdt_addr_r} bcm2836-rpi-2-b.dtb
fatload mmc 0 \${ramdisk_addr_r} uInitrd
setenv bootargs console=ttyAMA0,115200 earlyprintk root=/dev/root \
rootwait panic=10
bootz \${kernel_addr_r} \${ramdisk_addr_r} \${fdt_addr_r}
EOF

Compile the bootloader file and output it directly to the SD card at /mnt.
sudo mkimage -C none -A arm -T script -d boot.cmd /mnt/boot.scr

Now, unmount your SD card.

sudo umount /dev/sdx*

Testing it all

Insert it into the Raspberry Pi and turn it on! Hopefully you’ll see it booting the kernel on your screen terminal window.

You should be greeted by a login prompt. Log in with root (no password).

Login prompt


That’s it! You’ve built your own Linux system for the Raspberry Pi!

Networking

Log in as root and give the Ethernet device (eth0) an IP address on your network.

Now test it with a tool, like ping, and see if your network is working.

Here’s an example:
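Something like the following should work from the busybox shell (a sketch, assuming the busybox build includes the ip and ping applets; substitute an address and gateway from your own network):

ip link set eth0 up
ip addr add 192.168.1.50/24 dev eth0
ip route add default via 192.168.1.1
ping -c 3 192.168.1.1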

Networking on Raspberry Pi


Memory usage

There is clearly lots more you can do with this device…

Raspberry Pi Memory Usage



How to replace deprecated platform.dist()

Posted by Miroslav Suchý on October 27, 2016 09:10 AM

The Python function platform.dist() has been deprecated since Python 3.5, and it will be removed in Python 3.7. So it is about time to really replace it in your (and my) code.

Fortunately there is a new Python module, "distro". I packaged it for Fedora and EPEL, so it should be available in the main repositories now. It is available as python2-distro and python3-distro.

The replacement is quite easy. Instead of:

import platform

you just use:

import distro

and that's all.
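For example, code that used the old platform.dist() tuple can switch to the distro equivalents (a minimal sketch; the commented values are only what you would typically see on a Fedora box, not output taken from the post):

import distro

# same (id, version, codename) shape as the old platform.dist()
print(distro.linux_distribution(full_distribution_name=False))

# or query the individual pieces
print(distro.id())       # e.g. 'fedora'
print(distro.version())  # e.g. '24'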

Fedora at Ohio Linuxfest 2016

Posted by Fedora Community Blog on October 27, 2016 08:15 AM

Originally posted on Ben Williams’ blog.

This is the Ambassador Report for Ben & Cathy Williams for the Ohio Linuxfest 2016.

We arrived at our hotel around 1PM on Friday. After checking in we headed over to find the new site in the Hyatt Regency Hotel. The first thing we noticed was that the Columbus Convention Center is doing a major renovation, and one of those renovations was removing the escalators from the food court to the second floor. At first we thought this might be an issue for moving the event stuff in, but there was an elevator close by. There was also no signage for OLF in the food court area. After getting off the elevator on the second floor, there was a sign pointing around the corner to the Ohio Linuxfest registration table. This year Ohio Linuxfest charged $10 for general attendees (free to students with student ID). We checked in and got our badges (yes, insert your favorite Blazing Saddles joke here). We walked down to the Vendor Expo hall, which this year had a grand total of 28 exhibitors (see the website for the vendor list). The Expo was set up and ready for vendors to move in, but it was not open to the public on Friday.

With this news we checked out the early penguin tracks and the OLF Institute tracks, where Thomas Cameron was doing his “OpenStack for Mere Mortals” in the morning and “SELinux for Mere Mortals” in the afternoon. It was still early, so we went and grabbed some lunch and headed back to our hotel room for the evening.

Up at 6:30AM, we prepared and had breakfast at the Drury (our favorite place to stay while attending OLF), grabbed the event box, vertical banners and swag out of the trunk of the car, and headed back to the Hyatt Regency to set up for the expo opening at 9AM. We went to our spot and noticed no electric runs, as we had seen in the past when we were in the Columbus Convention Center. We continued setting up and getting everything ready, and when I asked Rob (the sponsor coordinator) about plugging into the electricity he said, “Thank you for asking, but this year that is not an issue, go for it.” We got the table set up and running easily before the 9:00AM opening. We had had advance notification from Sinclair Community College that they would like to pick up media for their classes, so as I was finishing the touches on the booth, Cathy was assembling a box of media for Sinclair Community College.

After the Expo opened we had the normal stream of activity between the sessions, with people asking questions like:

  1. What is Linux?
  2. So how is Fedora different than ?
  3. I am having issues with Nvidia drivers?

As the day went on, Cathy and I got into the flow of handling the traffic: she took the job of being the pleasant lady at the booth, and I did my normal technical guy duties, answering questions as they came up, showing demos of the spins on the event laptop, and answering questions about the XO (OLPC) on the table.

We had several people comment that they were glad to see that we were giving out media, and it appeared we were the only group handing out media at this show. As the day was winding down we had different groups wanting media for their schools and local LUGs, so we were able to fill those requests very easily. During the lulls I ran up to registration, and they told me the registration for this year was around 600 attendees.

We came to Ohio Linuxfest with close to 600 pieces of media and left the show with 34.

We always look forward to attending Ohio Linuxfest and I hope next year it will not be competing against an Ohio State Home Football Game and what appeared to be several other events in town.

Image courtesy Alex Wong – originally posted to Unsplash as Untitled.

The post Fedora at Ohio Linuxfest 2016 appeared first on Fedora Community Blog.

What just happened?

Posted by Adam Williamson on October 27, 2016 08:08 AM

4pm: “Well, guess it’s time to write the F25 Final blocker status mail.”

4:10pm: “Yeesh, I guess I’d better figure out which of the three iSCSI blocker bugs is actually still valid, and maybe take a quick look at what the problem is.”

1:06am: “Well, I think I’m done fixing iSCSI now. But I seem to have sprouted four new openQA action items. Blocker status mail? What blocker status mail?”

ibus-anthy 1.5.9

Posted by Takao Fujiwara on October 27, 2016 04:12 AM

ibus-anthy 1.5.9 is released and it’s now available in Fedora 25 and 24.

# dnf update ibus-anthy

Its Emoji dictionary has been updated for Emoji 3.0 and Unicode 9.0.

ibus-anthy Emoji 3.0

I think the translation of an Emoji annotation is not always unique. E.g. “bicycle” can be “バイク”, “原付”, or “スクーター”. So probably we need to think about plural forms?

This version also fixes the Tab key so that it enables the prediction function.

ASCII - art of sense.

Posted by mythcat on October 26, 2016 05:53 PM
This is a part of my ASCII database:

♫♪ |̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅| ♫♪                         Boombox
__̴ı̴̴̡̡̡ ̡͌l̡̡̡ ̡͌l̡*̡̡ ̴̡ı̴̴̡ ̡̡͡|̲̲̲͡͡͡ ̲▫̲͡ ̲̲̲͡͡π̲̲͡͡ ̲̲͡▫̲̲͡͡ ̲|̡̡̡ ̡ ̴̡ı̴̡̡ ̡͌l̡̡̡̡.___          House on the beach
♫♪♬                                       Music
c[_]                                         Cup
¯\_(ツ)_/¯                               Smile
︻╦╤─                                   Weapon
▇ ▅ █ ▅ ▇ ▂ ▃ ▁           Equalizer
▁ ▅ ▃ ▅ ▅ ▄ ▅ ▇           Equalizer 2
[̲̅$̲̅(̲̅ιοο̲̅)̲̅$̲̅]                                 $100
████████████ 99%     Loading

LinuxCon EU 2016

Posted by bee2502 on October 26, 2016 05:17 PM

LinuxCon EU 2016 took place from Oct 4-6 in Berlin, Germany. LinuxCon is one of the biggest FOSS conferences, where developers, sysadmins, architects and all levels of technical talent gather together under one roof for three days. Since I am currently living in Berlin, there was no way I could miss this conference – even though the tickets for attending the full conference were around 1000 euros and way out of my league as a student researcher. Thankfully, I was awarded the Minority scholarship by the Linux Foundation to attend the conference (including the talks and workshops) – and also the Women in Open Source Lunch and some other evening events! I was also a part of the Fedora ‘crew’ at LinuxCon, helping out with the Fedora booth!

Fedora at LinuxCon


Our Fedora booth ‘crew’ consisted mainly of three members – Jiri, Zach and me. Jiri and Zach are Fedora Ambassadors for the EMEA region. But we were not the only booth with some Fedora content. Red Hat had a booth on the other side of the room. People could see Fedora and meet Fedora contributors there too. We were also sometimes joined by the Red Hat ‘gang’ involved in Fedora – mostly Adam (who currently works on Fedora Modularity, but is also one of the lead developers and designers of the awesome Fedora Developer Portal) and Brian Exelbierd (Bex), the Fedora Community Action and Impact Co-ordinator (yes, it’s F-CAIC or F-CAKE depending on how much you like desserts😛 ). You can learn more about the Fedora impact at LinuxCon + ContainerCon EU 2016 by reading Jiri’s report on LinuxCon EU 2016 on the Fedora Community Blog here.

Impact of LinuxCon on Fedora

We were able to create a special Fedora badge for LinuxCon EU attendees to gauge the impact of the event, but out of the 16 times it was awarded, there was only one new contributor, ganto. Jiri has talked about the reasons why this happened in his post on the Fedora Community Blog.

That being said, there were many people interested in using and/or contributing to Fedora, and we were able to point them in the right direction. If you are reading this post and are interested in contributing to the Fedora Project but don’t know how to start, or are confused about which team to join, check out the Fedora website for new contributors. If you still have any queries, you can contact one of us (or definitely me).

There were a few common trouble points for us at the Fedora booth at LinuxCon:

  • There were quite a few people who felt the Fedora logo looked similar to Facebook’s.
  • Some people couldn’t tell that it was the Fedora booth, as the Fedora logo was at the bottom of the banners. (I think some people were hesitant to approach the booth because of this too.)
  • People were interested in installing Fedora but couldn’t take the Workstation DVDs as their computers had no CD drives. I am not sure how many of those actually went home and downloaded Fedora.
  • Many wondered why Fedora had a booth separate from Red Hat (the Fedora booth was organized by the community, not Red Hat) and, if so, why it was so far away (the Fedora and Red Hat booths were at opposite ends of the hall).

All of us had discussions during LinuxCon about how to improve the visibility of Fedora at such events, and quite a few interesting points came up, including creating swag for interested contributors. Adam even worked on a design for a new Fedora banner with increased visibility. There were also quite a few technical discussions about ongoing developments in Fedora, modularity in Fedora and how we are working on it, and so on. We also discussed amongst ourselves different issues affecting the Fedora community, including diversity and inclusion.

We ended the conference by going out for dinner at an Indian restaurant with the Red Hat ‘gang’, where I ended up having ‘Vindaloo’ for the first time – even though I am an Indian!😛

My Takeaways as a Fedora contributor

On a personal note, I feel that conferences are not only a great way to help onboard new users and/or contributors, but also a great learning opportunity for existing contributors: you learn about different parts of the project, meet different community members, raise questions about a variety of issues related to the project, learn why some things are done in a certain way, and sometimes even come away with a suggestion or idea which could change the system for the better. In the case of projects like Fedora, where most of the contributions are virtual, it definitely ‘humanizes’ the project. For me, I have begun to understand that the barriers to entry in FOSS are not unnecessarily high, and that I don’t have to be an expert in the area I contribute to, nor do I need loads of free time – all I need is an avid interest and the rest will be taken care of!

LinuxCon EU talks and events


I didn’t know anything about containers before attending LinuxCon + ContainerCon, but after attending some elementary talks on containers, I can definitely say I know the basics and can deploy a Kubernetes module😛

Apart from container-related talks, some other talks I found pretty interesting were:

‘Gender-diversity analysis of technical contributions’ by dizquierdo from Bitergia. The talk combined my two interests – metrics and diversity – and really helps make sense of the diversity issue in FOSS by supporting it with numbers. I wanted to meet Daniel after the talk to discuss the diversity metrics, the work Bitergia is doing and its similarity with my work involving community metrics in Fedora (on contributor engagement, the impact of attending events on contributors and the community, improving contributor retention rates, and diversity-related metrics), but somehow between being at the Fedora booth and attending talks, I could never find him. So @dizquierdo, if you read this – however unlikely that is – I am a fan of Bitergia and would love to discuss more about the metrics Bitergia compiles and analyses🙂 (Slides for the talk are here.)
The talk on ‘FOSS Involvement of Google’ was also very interesting. I always knew Google had some open source projects and even used them, but I never knew that Google actually contributed to the Linux kernel. (Slides for the talk are here.)
Other talks which I found interesting were ‘Outreachy Linux Kernel Internship Report’ by Julia Lawall and other past Outreachy interns, ‘Corporate Trends in Open Source Engagement’ by Nithya Ruff, ‘IF YOU BUILD IT, THEY WON’T COME’ about non-technical aspects of FOSS projects by Ruth Suehle, and ‘Graphite@Scale: How to store a million metrics per second’ by Vladimir Smirnov of Booking.com. I couldn’t attend all of these talks, but the slides were informative and I hope the videos are uploaded soon.
You can find the slides for all talks here.

Networking is tough! Or maybe, it’s all about curiosity


As with all conferences, LinuxCon is a great opportunity for networking. However, I am a dummy at ‘networking’, since I recently graduated from my college in India, where networking was not in anybody’s vocabulary. While I did visit booths, collect swag and talk about job opportunities, I wasn’t able to forge any contacts. We had a networking event in the evening at Charlottenburg Palace, and I remember everybody talking and laughing with glasses of wine while I wondered about the right question to start a conversation. While I did manage to talk with different people by the end of the evening, I realised I forged contacts when I was actually interested and curious about the product and wasn’t just asking someone for jobs. I was able to talk to people working in different FOSS organizations, learn about their experiences, the products they were working on and how they used data analytics and Machine Learning – and sometimes walk away with job opportunities – all because I was curious and interested in their organization, and not just looking for a job! It was tougher to initiate personal contact at booths, though, since booths had multiple people at times and personal contacts were hard to make. However, I was able to learn a lot about the ongoing projects at organizations, and about the opportunities and work environment, at the booths. I was especially fascinated by a Unity game made by a Business Administration intern at HP during his summer internship – it just shows the diversity of opportunities you have at such workplaces. I also took part in some raffle contests, but never won anything! (Zach won a game though – like Legos, but not as interesting – is all I could understand about it😛 )

Research Opportunities and Scholarships

Since I am interested in pursuing a PhD, I was interested in exploring research collaborations with FOSS projects and organizations – via a research project with a mentor in the organization, collecting data from the organization, scholarships or support for FOSS-related research work, or joint PhD programs between a research institution and the FOSS organization – but couldn’t find many opportunities in this area. Afaik, Red Hat Brno sometimes has students working on their thesis with them, but it is generally OS-development related and not data analytics. Ping me if you know something!


Sponsors of conferences like LinuxCon spend a lot of money on the booths, the swag and goodies at the booth (giving away T-shirts at booths was common, and some even had USB sticks and power banks), and much more, including the expenses of the employees who represent the organization at the event. The expenses are huge, but do organizations get returns for what they spend? The rational answer is that they do, otherwise why else would they come back? But isn’t there a better way of evaluating the impact at these events? Bex and I had a long discussion about the different costs an organization like Red Hat incurs for being a part of LinuxCon, but I would love to know more about how they measure the impact they have at a particular event.

Women in Open Source

Diversity and Inclusion in FOSS is one of the topics very close to my heart and I was extremely excited to be given an opportunity to be a part of the Women in Open Source Lunch at LinuxCon. Since I am also a part of Diversity Team at Fedora and involved with FOSSWave (an outreach initiative for new FOSS contributors in India), I felt this would be a great learning opportunity where I could apply the learnings directly towards the benefit of the FOSS community.

Lunch and Discussion, Meeting Role models


Over lunch, girls and women from different phases of their careers and involved with different FOSS projects discussed different issues related to diversity and inclusion in FOSS communities. The group of 50 or so women was divided among 8-10 tables, with each table discussing a particular issue (question). At the end of the lunch, a moderator from each table spoke briefly about the ideas or solutions they had come up with regarding their issue. The questions we discussed ranged from ‘What diversity or inclusivity programs do you see working and how can FOSS communities adopt those? Programs from software, proprietary companies, FOSS companies etc.’ (which was my table) and ‘How can women support other women in the community?’ to ‘What does safety mean in online communities? What can we do to ensure communities are safe?’

I found this format especially helpful and efficient for conducting such diversity-related panel/Q&A sessions at conferences. I have compiled a list of the topics we discussed and the solutions other women came up with here, which you can edit with your own comments on the issues.

My key takeaways from the discussion were:

  1. Start young – start breaking down the stereotypes in high school. The root of the problem starts in high schools and it is easier to tackle at the earliest stage. Also, when doing this, involve not just students but also parents and teachers, as they make up the environment and community which shapes an individual’s perceptions!
  2. Start small! Speak up! – If you see something wrong, don’t ignore it or let it slide, even if it is not technically harmful. Small disruptions lead to big changes. Share your success stories, mentor someone from your community – every small bit counts.
  3. Provide role models and mentors – Many successful programs involve mentorship. It helps ease the transition, and it feels good to have a support system and to know people who have done it before; you don’t feel alone.
  4. We need a dedicated and active support channel for diversity and inclusion related issues across projects and communities.

During the lunch, I was also able to meet a lot of my role models from FOSS communities and learnt about the experiences of some awesome women in open source, like Nithya Ruff of Western Digital, Julia Lawall of the Linux kernel, and many past Outreachy interns as well as FOSS contributors.

Along with the awesome lunch, goodies by SanDisk (I desperately needed a pen drive, thank you!) and a beautiful bird’s-eye view of Berlin from the 13th floor of the hotel, the event also had some fun activities like a raffle at the end – and surprisingly, I got lucky and won a Berlin Bear! Yaay!

Diversity Survey, Talking to different organizations and people

There are multiple ongoing efforts across different FOSS projects and organizations to improve diversity and promote workplace inclusion for minority groups in their respective communities. However, these efforts are not consolidated and do not look towards analyzing the impact of those strategies – what is working and what isn’t! With this in mind, I have started a Diversity and Inclusion study (more details here) to get an overview of diversity and inclusion practices across different FOSS projects, communities and organizations, to learn from their successes and failures, and to share the results with other FOSS organizations and projects so that the same mistakes don’t get repeated again and again! The study is still in its nascent stage and I am working on its shortcomings to develop it further; however, I have talked to different people and organizations (representatives at booths) to learn about their experiences and/or how their organization is working towards this. I learnt quite a few interesting things, and if you would love to be a part of this study, please ping me so we can discuss more.

To sum it up, LinuxCon was an awesome learning experience, full of fun – I wish I could have attended more talks and talked to more people (especially those from Bitergia and Fitbit) – but I gained knowledge about a lot of new stuff like containers, learnt about new FOSS projects and organizations, helped onboard some new contributors to Fedora, brainstormed about some community issues, and gained a fresh perspective on diversity and inclusion in FOSS – all while collecting some awesome swag and a Berlin bear along the way!

Dirty Cow: Privilege Escalation Exploit, Linux Kernel

Posted by Corey ' Linuxmodder' Sheldon on October 26, 2016 02:06 PM

Okay, so you have likely heard about this if you, like me, use Linux daily in your college, professional or hobbyist life, but what the heck is it really?

To paraphrase from the initial disclosure docs:

the privilege-escalation vulnerability potentially allows any installed application, or malicious code smuggled onto a box, to gain root-level access and completely hijack the device.

The programming bug gets its name from the copy-on-write mechanism in the Linux kernel; the implementation is so broken, programs can set up a race condition to tamper with what should be a read-only root-owned executable mapped into memory

So what exactly does all that mean? It means your web-facing servers and even Android devices have a big-time issue with multitasking, in a sense. This bug allows for what is called a ‘race condition’, which as you may have guessed makes for a first-one-in-wins scenario. The bad part is that this allows the kernel to be tricked into mapping a new ‘page’ (a term for a unit of memory allocation) without fully un-allocating or ‘unlocking’ the previous one. This in turn allows a bad memory page to end up in a root-owned (the almighty full system admin) mapping, which is bad news. The mechanism that is overwritten or bypassed is called Copy-On-Write (hence the COW part of the name), and since the race condition is executed by triggering dirty paging in an effort to gain privileged access, it’s been dubbed Dirty CoW. If you feel so inclined to read the much more technical details, feel free to read up on CVE-2016-5195.

Filed under: 0Day /Vulns, Community, Current Events, Current Events -- non Technical, F24, Fedora, Fedora 23, PSAs, Redhat, Security, Testing Tagged: DirtyCoW, Exploits, Fedora, Open Source, Sysadmin

Dual-GPU integration in GNOME

Posted by Bastien Nocera on October 26, 2016 01:37 PM
Thanks to the work of Hans de Goede and many others, dual-GPU (aka NVidia Optimus or AMD Hybrid Graphics) support works better than ever in Fedora 25.

On my side, I picked up some work I originally did for Fedora 24, but ended up being blocked by hardware support. This brings better integration into GNOME.

The Details Settings panel now shows which video cards you have in your (most likely) laptop.

dual-GPU Graphics

The second feature is what Blender users and 3D video game players have been waiting for: a contextual menu item to launch the application on the more powerful GPU in your machine.

Mooo Powaa!

This demonstration uses a slightly modified GtkGLArea example, which shows which of the GPUs is used to render the application in the title bar.

on the integrated GPU

on the discrete GPU

Behind the curtain

Behind those 2 features, we have a simple D-Bus service, which runs automatically on boot, and stays running to offer a single property (HasDualGpu) that system components can use to detect what UI to present. This requires the "switcheroo" driver to work on the machine in question.
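For instance, a session component could read that property from the command line (a sketch; the net.hadess.SwitcherooControl bus name and object path are assumptions based on the switcheroo-control service, not details given in the post):

# query the system D-Bus service for the HasDualGpu property
busctl get-property net.hadess.SwitcherooControl \
    /net/hadess/SwitcherooControl \
    net.hadess.SwitcherooControl HasDualGpu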

Because of the way applications are launched on the discrete GPU, we cannot currently support D-Bus activated applications, but GPU-heavy D-Bus-integrated applications are few and far between right now.

Future plans

There's plenty more to do in this area, to polish the integration. We might want applications to tell us whether they'd prefer being run on the integrated or discrete GPU, as live switching between renderers is still something that's out of the question on Linux.

Wayland dual-GPU support, as well as support for the proprietary NVidia drivers are also things that will be worked on, probably by my colleagues though, as the graphics stack really isn't my field.

And if the hardware becomes more widely available, we'll most certainly want to support hardware with hotpluggable graphics support (whether gaming laptop "power-ups" or workstation docks).


All the patches necessary to make this work are now available in GNOME git (targeted at GNOME 3.24), and backports are integrated in Fedora 25, due to be released shortly.

F24 Updated ISOs available. (Kernel with Dirty Cow Patched)

Posted by Corey ' Linuxmodder' Sheldon on October 26, 2016 12:31 PM

It is with great pleasure that I announce that the community-run respin team has yet another updated ISO round. This round carries the 4.7.9-200 kernel along with over 800 MB of updates (on average; some desktop environments more, some less) since the Gold release back in June.

Torrents will be available at the same link as usual alongside the .iso files.

You have heard about this nasty privilege escalation bug called ‘Dirty Cow’; well, rest assured the infected farm and farmer have been found and the vaccine has been applied to the kernel in these updates. More info on ‘Dirty Cow’ is in my blog post on it here.

Below are the contents of both CHECKSUM512-20161023 and HASHSUM512-20161023 (the latter contains the torrent hashes):

cat CHECKSUM512-20161023

(Clearsigned with 0xF59276298D2264944)


Hash: SHA1

8d1c8b9637b1ccc16233ae740e6e0137485574a6f02ab05e66e5a6fb8d5c18a6671395e14341e2cb45f902cd20a4ef987bd83265b68932b8c2183ff2b5194e5e F24-source-20161023.iso
aee5e894dc6b34e207aaa0f23f7a4fd6d16577846d5f7ab3568a234f9e0b2bea1ae814a2291852cb2dbba3930b046ce31cffbcadaf5bff72208a36176eabbecd F24-x86_64-CINN-20161023.iso
6875a43e59a899e4520260e19fb28bb7ade59565b46fc6ad4f22ca8da01c57822ca7ed9373f795377dfaf750d28b6df6c1c083da5d5a0628f8d26553fb744ea0 F24-x86_64-KDE-20161023.iso
197ea70be8337f97f60e2558188e82f53eae0208d3166a7356267dc515e0b7c6204e4b5aae1ae4b44150ff59881a7c2b3d8c998e9dd715872d8c2fe1fe0485c3 F24-x86_64-LXDE-20161023.iso
7f4b48998cb716042a899089f1b292aa77ce5fca44c8d69ceb25f8769da5d9bbe2b29cef728630ae643ad4fc290c64adf270debb228454e620509f039294849c F24-x86_64-MATE-20161023.iso
ffb77a60e5895d4521c58efcbb41bb6afcca7a9a2b3320929cecb9422d00caa1cb7e5f23b04bae9644bf6ea0dae1ab2c9674f4f7ba243acd6250fd5e90221c1d F24-x86_64-WORK-20161023.iso
4ba2915a0ba51b870e7a51eb8d658464bac99f6d998f9f06c26ab8642fedd0212c2ab47c256c7fb5be060494dd6c8eb3279a0d84ca8516db199e45e612d2e502 F24-x86_64-XFCE-20161023.iso


cat HASHSUM512-20161023

(Clearsigned with 0xF59276298D2264944)

Hash: SHA1

c45662c568ecb116fd18e8f2fae4dedb43a17bdd F24-x86_64-CINN-20161023.iso
379d63a42b3e218cc0aefebc176eabe4c508c622 F24-x86_64-KDE-20161023.iso
89cd48b48d4b1163ac0b81fa639b40ae11fc36ae F24-x86_64-LXDE-20161023.iso
9dfb8df60faa611178d430d897ea365f3df4bd00 F24-x86_64-MATE-20161023.iso
240f78a935d54ef5c92491ee14334b14ad4d5951 F24-x86_64-WORK-20161023.iso
4af163e1162642e8d2a2632878106b94c58ac22a F24-x86_64-XFCE-20161023.iso
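
To check a downloaded ISO against these values (a sketch; run it in the directory holding the downloaded files, and note that the clearsign headers will make sha512sum print a formatting warning):

gpg --verify CHECKSUM512-20161023   # verify the clearsign signature
sha512sum -c CHECKSUM512-20161023   # ISOs you did not download are reported as missing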


Filed under: 0Day /Vulns, Community, F24, Fedora, Fedora 24 Torrents, PSAs, Volunteer Tagged: Community, Exploits, Fedora, Open Source, Sysadmin, Torrents

Distributing spotify as a flatpak

Posted by Alexander Larsson on October 26, 2016 10:52 AM

One of the features in the recent flatpak release is described as:

Applications can now list a set of URIs that will be downloaded with the application

This seems a bit weird. Downloading an application already means you’re loading data from a URI. What is the use case for this?

Normally all the data that is needed for your application will be bundled with it and this feature is not needed. However, in some cases applications are freely downloadable, but not redistributable. This only affects non-free software, but Flatpak wants to support both free and non-free software for pragmatic reasons.

Common examples of this are Spotify and Skype. I hope that they eventually will be available as native flatpaks, but in order to bootstrap the flatpak ecosystem we want to make it possible to distribute these right now.

So, how does this work? Let’s take Spotify as an example. It is available as a binary Debian package. I’ve created a wrapper application for it which contains all the dependencies it needs. It also specifies the URL of the Debian package and its sha256 checksum, plus a script to unpack it.

When the user downloads the application it also downloads the deb, and then (in a sandbox) it runs the unpack script which extracts it and puts the files in the right place. Only then is the installation considered done, and from thereon it is used read-only.

I put up a build of the spotify app on S3, so to install spotify all you need to do is:

flatpak install --from https://s3.amazonaws.com/alexlarsson/spotify-repo/spotify.flatpakref

(Note: This requires flatpak 0.6.13 and the remote with the freedesktop runtime configured)

Here is an example of installing spotify from scratch:

Fedora at LinuxCon Europe 2016

Posted by Fedora Community Blog on October 26, 2016 08:15 AM

The Linux Foundation, the organizer of the conference, says the following about LinuxCon:

“It is the place to learn from the best and the brightest, delivering content from the leading maintainers, developers and project leads in the Linux community and from around the world.”

The Fedora community has been at all European editions since 2011 and this is a report from the last one, which took place on Oct 4-6.

Arriving to LinuxCon Europe

This year‘s LinuxCon Europe was co-located with another conference organized by the Linux Foundation, ContainerCon, and held in the city of Berlin. I took a 7-hour train ride from Brno and Zacharias Mitzelos a flight from Greece. The third representative of the Fedora Project, Bhagyashree Uday Padalkar, happens to live in Berlin as a member of a research project. I met Zacharias in the hotel in the evening before the conference. I brought swag and he took care of roll-up banners.

The conference took place in the conference area of InterContinental Hotel. Our hotel was close, but we still had to get up really early because the Linux Foundation required us to have the booth ready before 7.30am.

The booth

The booth location was not great. We ended up in the corner on the other side of the exhibition area from the entrance. But I cannot complain, as we got for free what companies had to pay a lot of money for. And I‘d like to thank the Linux Foundation that it supports the Fedora Project this way every year.

Our booth consisted of one table, two roll-up banners, two chairs, and a TV which was rented by Red Hat for us. We used it for the “Fedora Modularity” demo, which stirred some attention. Adam Šámalík of Red Hat who works on modularity in Fedora helped us explain to visitors what it’s about.

But we were not the only booth with some Fedora content. Red Hat had a booth on the other side of the room. People could see Fedora and meet Fedora contributors there too. Peter Robinson showcased Fedora Workstation running on a Raspberry Pi and there was also a little cluster running OpenShift on the top of Fedora Atomic.

You could also see Fedora at the booth of Intel.


The audience of LinuxCon is different from other Linux events. It’s definitely more business-oriented. The sponsor prospectus says the distribution of visitors is:

  • Developers: 60%
  • SysAdmin/DevOps: 18%
  • IT / Dev managers: 12%
  • Business / Vendor / Community: 7%
  • Press / Analyst: 3%

So the largest group is developers, who I assume given the nature of the event, develop on Linux. It’s definitely the kind of audience Fedora should be interested in. And it’s still a different audience than at Linux events for enthusiasts, where we traditionally have a strong presence.

At LinuxCon Europe, you can still meet people who have never heard of Fedora and get reactions such as, “Oh, so Fedora is a variant of Linux, right?”

Evaluating impact

We created a badge for visiting the Fedora booth at LinuxCon Europe to have one way to measure an impact. To be honest, this didn’t go that well. Out of ~1300 visitors, only 16 claimed the badge. The majority of these people were Fedora contributors visiting LinuxCon Europe. Not that no one else was interested, but the process to get the first badge is sort of cumbersome. People need to be connected to the Internet and WiFi often sucks at conferences. Moreover the FAS account creation often fails and it’s not mobile friendly. Some people gave up and copied the URL to finish it at home. But I checked and no one did.

Nevertheless, we had great conversations with several developers who had been Fedora users for some time and were interested in becoming contributors. So we usually went with them through different roles in Fedora to find what matched their interests and skills. We also gave these people “Proud Fedora User” t-shirts, which they proudly wore the next day, improving our brand visibility even more.

Looking ahead

I’d like to thank the Fedora Project for covering our travel and lodging expenses and making it possible for us to represent Fedora at LinuxCon Europe. I would also like to thank Red Hat for providing us with a TV, which made our booth definitely more attractive and for promoting Fedora at their booth as well.

In my opinion, LinuxCon Europe is great for building a brand recognition among technical staff of businesses interested in Linux. This year was the first time when we met a higher number of developers who had already been Fedora users and were interested in becoming Fedora contributors. Next time, we should be more ready for them. We discussed that with Brian Exelbierd and one of the ideas was to create cards explaining different contributor roles in Fedora and a set of steps to get involved. That could be useful not only for LinuxCon, but for many other conferences.

The post Fedora at LinuxCon Europe 2016 appeared first on Fedora Community Blog.

Fedora-powered computer lab at our university

Posted by Fedora Magazine on October 26, 2016 08:00 AM

At the University of Novi Sad in Serbia, Faculty of Sciences, Department of Mathematics and Informatics, we teach our students a lot of things. From an introduction to programming to machine learning, all the courses make them think like great developers and software engineers. The pace is fast and there are many students, so we must have a setup we can rely on. We decided to switch our computer lab to Fedora.

Previous setup

Our previous solution was keeping our development software in Windows virtual machines installed on Ubuntu Linux. This seemed like a good idea at the time. However, there were a couple of drawbacks. Firstly, there were serious performance losses because of running virtual machines. Performance and speed of the operating system was impacted because of this. Also, sometimes virtual machines ran concurrently in another user’s session. This led to serious slowdowns. We were losing precious time on booting the machines and then booting the virtual machines. Lastly, we realized that most of our software was Linux-compatible. Virtual machines weren’t necessary. We had to find a better solution.

Enter Fedora!

Computer lab in Serbia powered by Fedora

Picture of a computer lab running Fedora Workstation by default

We thought about replacing the virtual machines with a “bare bones” installation for a while. We decided to go for Fedora for several reasons.

Cutting edge of development

In our courses, we use many different development tools. Therefore, it is crucial that we always use the latest and greatest development tools available. In Fedora, we found 95% of the tools we needed in the official software repositories! For a few tools, we had to do a manual installation. This was easy in Fedora because you have almost all development tools available out of the box.

What we realized in this process was that we used a lot of free and open source software and tools. Having all that software always up to date was always going to be a lot of work – but not with Fedora.

Hardware compatibility

The second reason for choosing Fedora in our computer lab was hardware compatibility. The computers in the lab are new. In the past, there were some problems with older kernel versions. In Fedora, we knew that we would always have a recent kernel. As we expected, everything worked out of the box without any issues.

We decided that we would go for the Workstation edition of Fedora with GNOME desktop environment. Students found it easy, intuitive, and fast to navigate through the operating system. It was important for us that students have an easy environment where they could focus on the tasks at hand and the course itself rather than a complicated or slow user interface.

Powered by freedom

Lastly, in our department, we value free and open source software greatly. By utilizing such software, students are able to use it freely even when they graduate and start working. In the process, they also learn about Fedora and free and open source software in general.

Switching the computer lab

We took one of the computers and fully set it up manually. That included preparing all the needed scripts and software, setting up remote access, and other important components. We also made one user account per course so students could easily store their files.

After that one computer was ready, we used a great, free and open source tool called CloneZilla. CloneZilla let us make a hard drive image for restoration. The image size was around 11GB. We used some fast USB 3.0 flash drives to restore the disk image to the remaining computers. We managed to fully provision and set up twenty-four computers in one hour and fifteen minutes, with just a couple of flash drives.

Future work

All the computers in our computer lab are now exclusively using Fedora (with no virtual machines). The remaining work is to set up some administration scripts for remotely installing software, turning computers on and off, and so forth.

We would like to thank all Fedora maintainers, packagers, and other contributors. We hope our work encourages other schools and universities to make a switch similar to ours. We happily confirm that Fedora works great for us, and we can also vouch that it would work great for you!

Science Hack Day, Belgaum

Posted by Siddhesh Poyarekar on October 26, 2016 07:36 AM

We almost did not go, and then we almost cancelled. It was a good thing though that we ended up going because this ended up being one of our more memorable weekends out and definitely the most memorable tech event I have been to.

It started quite early with Kushal telling me that Praveen Patil was organizing a Science Hack Day with Hong Phuc’s help and that it might be an interesting place to come to. He mentioned that there were many interesting people coming in and that Nisha and I would have a good time. I wasn’t very keen though because of my usual reluctance to get out and meet people. This was especially an issue for me with Cauldron and Connect happening back to back in September, draining most of my ‘extrovert energy’. So we were definitely not going.

That is until Praveen pinged me and asked me if I could come.

That was when I posed the question to Nisha, asking if she wanted to do something there. She was interested (she is usually much more enthusiastic about these things than I am anyway) and decided to propose a hack based on an idea that she had already had. She was also fresh from Pycon Delhi where she enjoyed meeting some very interesting people and she was hoping for a similar experience in Belgaum. She proposed a hack to replace a proprietary microcontroller board in one of Ira’s toys with a Raspberry Pi to do some interesting things on pressing many of its buttons, like reading from a list of TODO items and playing songs from the internet. A couple of days before we were to drive down to Belgaum though, we had some issues which led to us almost cancelling the trip. Thankfully we were able to resolve that and off we went to Belgaum.

Poyarekar ladies watching the inauguration

The first impression of the event was the resort where it was hosted. The Sankalp Bhumi Resort at Belgaum was outside the city and was suitably isolated to give us a calm location. It felt like we were on holiday and that helped me relax a bit. The first day started with an informal inauguration ceremony with all of the mentors (including Nisha) giving a brief description of what they were attempting during the weekend. I found out then that there were workshops for school students going on at the same time, teaching them a variety of science hacks like making toys out of paper and straws, soldering and so on. It seemed like it would be total chaos with kids running around all over the place, but it was anything but that. The workshops seemed very well managed and more importantly, almost every child there was the quintessential wide-eyed curious student marvelling at all of the ‘magic’ they were learning.

An organic map of the venue that Arun Ganesh and his team created by mapping the area using OSM.

The hacks themselves were quite interesting, with ideas ranging from using weather sensors on various boards to various solar applications like a sun tracking solar panel, solar lamps, motion detectors, etc. My plan to remain aloof during the conference and just relax with Ira was foiled, and I was promptly sucked into the engaging ideas. The fact that we had a bit of firefighting to do on the first morning (we forgot the password to the Pi and had to hunt for a microSD adapter to reset it) also helped me get more involved and appreciate the very interesting people that I found myself with.

The wall of people between me and the biomass burner

There were so many high points during the event that I am pretty sure I'll miss quite a few. The most memorable one was the lightning talk that Prof. Prabhu gave on a biomass burner his team had developed that could completely and cleanly burn a variety of bio-fuels, especially compacted dry organic rubbish. Then there was the spontaneous moment on Sunday when Arun Ganesh came up with a microscope with a broken mirror and wondered if we could add an LED under it with a firm pivot of some sort to provide light. It was a pretty simple hack, but we thoroughly enjoyed it, burning a couple of LEDs in the process and hunting for parts in everybody's toolkits.

Oh, and did I mention that Praveen did a Laser show to demonstrate some physics and mathematics concepts?

The hacked microscope

After a wonderful two days, it was finally time to go, and we did not depart without getting an assurance from Praveen that we would do this again next year. Like I said, this was the most memorable event I have been to, and more importantly, it is an event that I would like to take my daughter to every year to show her the wonders of science from an early age, to let her interact with some very interesting people (they were her 'other friends' over the weekend) and to expand her horizons beyond what she will learn at school.

Science Hack Day India 2016

Posted by Kushal Das on October 26, 2016 07:05 AM

A few months back Praveen called to tell me about a new event he was organizing along with FOSSASIA: Science Hack Day, India. I never even registered for the event, as Praveen told me that he had just added my name and Anwesha's. Sadly, as Py had been sick for the last few weeks, Anwesha could not join us at the event. On the 20th Hong Phuc came down to Pune, and in the evening we had the PyLadies meetup in the Red Hat office.

On the 21st, early in the morning, we started our journey. Sayan, Praveen Kumar, and Pooja joined us in my car. This was my longest drive to date (I bought the car around a year back). As everyone had suggested, the road in Karnataka was smooth, and I am now waiting for my next chance to drive on it. After reaching Belgaum we decided to follow Google Maps, which turned out to be a very bad decision, as the maps took us to a dead end with a blue gate. Later we found that many locals had also followed Google Maps and reached the same dead end.

The location of the event was Sankalp Bhumi, a very well maintained resort, full of greenery and nature. We stayed in the room just beside the lake. Later at night Saptak joined us; Siddhesh, Nisha and Ira also arrived later in the evening.

Day 1

We had a quick inauguration event where all the mentors talked about the projects they would be working on, and then we moved to our hack area. The main hall slowly filled with school kids who had a build-your-own-solar-light workshop (led by Jithin). Pooja also joined the workshop to help the kids with soldering.

I managed to grab the largest table in the hack area. Around 9 people joined me; among them we had college students, college professors, and someone who came in saying she was from a background other than computers. I asked her to try this Python thing, and by the end of the day she was totally hooked on learning. I later found out that her daughter was also participating in the kids' section. Before lunch we went through the basics of Python as a programming language. All of the participants had Windows laptops, so it was fun to learn various small things about Windows, but we managed to get going well.

Later we started working on MicroPython. We went ahead step by step: first turning on an LED, later moving to DHT11 sensors for temperature and humidity measurements. By late afternoon all of us had managed to write code to read the measurements from the sensors. I had some trouble with the old firmware I was using, but flashing the latest nightly firmware fixed the issue related to MQTT. I kept one of the boards running for the whole night, and Sayan wrote the client to gather the information from the Mosquitto server.
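
To give a flavour of the collection side, here is a minimal sketch of watching such readings arrive on a Mosquitto broker; the broker host and topic names are placeholders, not the ones actually used at the event.

# Subscribe to every topic under sensors/ and print topic plus payload as messages arrive
$ mosquitto_sub -h broker.example.org -t 'sensors/#' -v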

In the evening we also had lightning talks, and I gave a talk there about the dgplug summer training. The last talk of the evening was from Prof. Prabhu, and during that talk I saw someone start what looked like a powerful gas stove outside the hut. I was then totally surprised to learn that the fire was not from gas: using water and some innovative design, his team had managed to make a small stove that burns any biomass with around 35% efficiency, with a blue flame and no smoke. This was super cool.

After dinner, there was a special live show of laser lights and sound put together by Praveen. Teachers are an important part of our lives, and when we see someone like Praveen taking the learning experience to another level while working in a small town in India, it fills us with pride. By the way, if you are wondering, he uses Python for most of his experiments :)

Day 2

I moved to the hack area early in the morning and got the setup ready. My team joined me after breakfast. They decided to keep one of the boards out in the sun beside the lake and compare the temperature difference between the two devices. I also met two high school teachers from a village near the Maharashtra/Karnataka border. They invited us to run more workshops in their school. They had also brought a few rockets, which we launched from the venue :)

In the afternoon Sayan and Saptak worked on the web frontend for our application; the following image shows the temperature and humidity values from the previous night. The humidity during the night was 70%, but during the day it was around 30%. The temperature stayed between 20-30°C.

Beside our table Nisha was working on her Cookie project. Near the dining area, Arun and his group created an amazing map of the resort on the ground using various organic materials available at the location. That project won the best science hack of the event. You can find various other details in the etherpad.

The impact of the event

We saw school kids crying because they did not want to leave the event. Every participant was full of energy. We had people working on ideas of all kinds, and they came from all sorts of backgrounds. Siddhesh described this event as the best event in India he has ever been to. Belgaum as a city joined in to make this event a success: we found local businesses supporting it through sponsorship, and local newspapers covered the event. The owner of the venue also helped in various ways. By the end of day 2, every one of us was asking when we could come back for next year's event. I should specially thank the organising team (Hitesh, Rahul, and all of the volunteers) for making this event such a success. I also want to thank Hong Phuc Dang and FOSSASIA for all the help.

FOSDEM 2017 Real-Time Communications Call for Participation

Posted by Daniel Pocock on October 26, 2016 06:39 AM

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 4 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017. XMPP Summit web site - please join the mailing list for details.

We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 3 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet (http://planet.freertc.org), admin contact: planet@freertc.org
  • XMPP: Planet Jabber (http://planet.jabber.org), admin contact: ralphm@ik.nu
  • SIP: Planet SIP (http://planet.sip5060.net), admin contact: planet@sip5060.net
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), admin contact: planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.


For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

CUDA 8, cuDNN, Nvidia drivers and GNOME Software metadata

Posted by Simone Caronni on October 25, 2016 09:38 PM

GNOME software integration

The Nvidia driver repository has been updated with AppStream metadata. From Fedora 25 onward, you will be able to search for Nvidia, CUDA, GeForce or Quadro to make the driver, control panel and other programs appear in the Gnome Software window.

As far as I know, this should be enabled by default on Fedora 25.


Thanks to Richard Hughes for helping out with the metadata.

I am looking for proper 16:10 aspect ratio screenshots of both NSight and the Visual Profiler running on Fedora, so if you want to contribute just drop me an email or open an issue on the CUDA package on GitHub.

Changes to the Nvidia driver packaging

The Nvidia driver can now be installed without nvidia-settings (the control panel utility), as requested by Red Hat, in preparation for the GNOME Software integration. This means the dependencies have been reversed: installing nvidia-settings pulls in the driver, so to get both the driver and the control panel you just install nvidia-settings:

dnf/yum -y install nvidia-settings kernel-devel

The libglvnd package has been updated to the latest snapshot and now features all the changes that have been introduced by Adam Jackson for the Mesa GLVND integration in Fedora 25. This means that while installing you will be prompted to install/upgrade smaller packages that contain a subset of the libglvnd libraries; this includes EGL support for the recently released beta driver version 375.10. For anything lower than 375.10 (so Fedora 23-24 and CentOS/RHEL 6/7 at the moment of writing this), Nvidia's last official note on EGL is:

“libEGL.so.1, while not a proper GLVND library, depends upon the GLVND infrastructure for proper functionality. Therefore, any driver package which aims to support NVIDIA EGL must provide the GLVND libraries […]”

So for now, in Fedora 23, 24 and CentOS/RHEL 6/7:

$ rpm -q --requires nvidia-driver-libs.x86_64 | grep libglvnd
libglvnd-gles(x86-64) >= 0.1.1
libglvnd-glx(x86-64) >= 0.1.1
libglvnd-opengl(x86-64) >= 0.1.1
$ rpm -q --conflicts nvidia-driver-libs.x86_64 | grep libglvnd
libglvnd-egl(x86-64) >= 0.1.1

And for Fedora 25:

$ rpm -q --requires nvidia-driver-libs.x86_64 | grep libglvnd
libglvnd-egl(x86-64) >= 0.2
libglvnd-gles(x86-64) >= 0.2
libglvnd-glx(x86-64) >= 0.2
libglvnd-opengl(x86-64) >= 0.2

Not a big deal. This accommodates the ongoing modularization in Mesa but still preserves the original EGL libraries from Nvidia. The upgrade should be transparent and you should not notice any difference except some smaller packages being installed.

Vulkan is now part of Fedora, so on supported Fedora releases, the Vulkan loader and libraries can be installed and you do not need to do anything to enable support in the drivers. CentOS and Red Hat Enterprise Linux do not have Vulkan yet. I'm not sure if it's worth installing it by default along with the drivers, though.

Let’s assume you have a freshly installed Fedora 25 system with a recent Nvidia GPU and you want to:

  • Install the driver for gaming
  • Play Vulkan enabled games
  • Want to be comfortable with the control panel
  • Play 32 bit games on a 64 bit system
  • Play 32 bit Vulkan games on a 64 bit system
$ sudo dnf install nvidia-settings kernel-devel dkms-nvidia vulkan.i686 nvidia-driver-libs.i686
Last metadata expiration check: 0:33:49 ago on Mon Oct 24 14:14:30 2016.
Dependencies resolved.
 Package            Arch   Version                             Repository       Size
 dkms-nvidia        x86_64 2:375.10-1.fc25                     fedora-nvidia   6.4 M
 libglvnd           i686   1:0.2.999-4.20161025git28867bb.fc25 fedora-nvidia   103 k
 libglvnd           x86_64 1:0.2.999-4.20161025git28867bb.fc25 fedora-nvidia   105 k
 libglvnd-egl       i686   1:0.2.999-4.20161025git28867bb.fc25 fedora-nvidia    44 k
 libglvnd-egl       x86_64 1:0.2.999-4.20161025git28867bb.fc25 fedora-nvidia    42 k
 libglvnd-gles      i686   1:0.2.999-4.20161025git28867bb.fc25 fedora-nvidia    29 k
 libglvnd-gles      x86_64 1:0.2.999-4.20161025git28867bb.fc25 fedora-nvidia    28 k
 libglvnd-glx       i686   1:0.2.999-4.20161025git28867bb.fc25 fedora-nvidia   114 k
 libglvnd-glx       x86_64 1:0.2.999-4.20161025git28867bb.fc25 fedora-nvidia   110 k
 libglvnd-opengl    i686   1:0.2.999-4.20161025git28867bb.fc25 fedora-nvidia    39 k
 libglvnd-opengl    x86_64 1:0.2.999-4.20161025git28867bb.fc25 fedora-nvidia    38 k
 libva-vdpau-driver x86_64 0.7.4-14.fc24                       fedora           61 k
 libvdpau           i686   1.1.1-3.fc24                        fedora           35 k
 nvidia-driver      x86_64 2:375.10-3.fc25                     fedora-nvidia   3.1 M
 nvidia-driver-NVML x86_64 2:375.10-3.fc25                     fedora-nvidia   397 k
 nvidia-driver-libs i686   2:375.10-3.fc25                     fedora-nvidia    15 M
 nvidia-driver-libs x86_64 2:375.10-3.fc25                     fedora-nvidia    14 M
 nvidia-libXNVCtrl  x86_64 2:375.10-1.fc25                     fedora-nvidia    26 k
 nvidia-settings    x86_64 2:375.10-1.fc25                     fedora-nvidia   935 k
 vulkan             i686                     updates-testing 1.5 M
 vulkan-filesystem  noarch                     updates-testing 8.0 k
Transaction Summary
Install  21 Packages
Total download size: 42 M
Installed size: 178 M
Is this ok [y/N]:

Note that explicitly requiring kernel-devel is still needed, as otherwise the kernel-debug-devel package is pulled in automatically in place of the normal non-debug package. There is a bug open against dnf/libsolv for this.

Changes to CUDA packaging

The CUDA packages hosted on the Nvidia repository are split into multiple subpackages, based on the library. For each library, you have the corresponding devel subpackage with the headers, the unversioned library symlink and the static library. Here, they were previously divided into one libs, one big extra-libs, one static and one devel subpackage for everything. Since I'm planning to enable CUDA/NVCUVID encoding/decoding in FFmpeg (I'm actually waiting for the dynamic loader patches to land in the 3.2 branch before enabling that), there should be a way to install just what is required by those functions and not the whole CUDA toolkit set of libraries.

So now, all the libraries are split into subpackages, much like in the original Nvidia CUDA repository. This allows you to install and build software relying on specific components without the need to install all the CUDA toolkit just to satisfy a library dependency. With the new packaging organization, the original cuda-devel and cuda-extra-libs will pull in all the specific subpackages giving you the same situation you are accustomed to. Also, for the same reason, static libraries have been included in each respective devel subpackage.
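
For example, a project that only needs the runtime compiler and the cuBLAS library could pull in just those pieces rather than the whole toolkit (the package selection here is illustrative; pick whatever your software actually links against):

$ sudo dnf install cuda-nvrtc cuda-cublas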

Example, just with the basic tools:

$ sudo dnf install cuda
Last metadata expiration check: 0:00:20 ago on Sun Oct 23 13:11:01 2016.
Dependencies resolved.
 Package           Arch         Version               Repository           Size
 cuda              x86_64       1:8.0.44-4.fc24       fedora-nvidia        95 M
 cuda-cufft        x86_64       1:8.0.44-4.fc24       fedora-nvidia        97 M
 cuda-curand       x86_64       1:8.0.44-4.fc24       fedora-nvidia        38 M
 cuda-libs         x86_64       1:8.0.44-4.fc24       fedora-nvidia       6.4 M
Transaction Summary
Install  4 Packages
Total size: 236 M
Installed size: 469 M
Is this ok [y/N]:

The basic tools along with all the libraries (note that the NVML headers are included):

$ sudo dnf install cuda-devel
Last metadata expiration check: 0:10:00 ago on Sun Oct 23 13:11:01 2016.
Dependencies resolved.
 Package                 Arch       Version             Repository         Size
 cuda                    x86_64     1:8.0.44-4.fc24     fedora-nvidia      95 M
 cuda-cublas             x86_64     1:8.0.44-4.fc24     fedora-nvidia      21 M
 cuda-cublas-devel       x86_64     1:8.0.44-4.fc24     fedora-nvidia      38 M
 cuda-cudart             x86_64     1:8.0.44-4.fc24     fedora-nvidia     131 k
 cuda-cudart-devel       x86_64     1:8.0.44-4.fc24     fedora-nvidia     659 k
 cuda-cufft              x86_64     1:8.0.44-4.fc24     fedora-nvidia      97 M
 cuda-cufft-devel        x86_64     1:8.0.44-4.fc24     fedora-nvidia      73 M
 cuda-cupti              x86_64     1:8.0.44-4.fc24     fedora-nvidia     1.2 M
 cuda-cupti-devel        x86_64     1:8.0.44-4.fc24     fedora-nvidia     213 k
 cuda-curand             x86_64     1:8.0.44-4.fc24     fedora-nvidia      38 M
 cuda-curand-devel       x86_64     1:8.0.44-4.fc24     fedora-nvidia      60 M
 cuda-cusolver           x86_64     1:8.0.44-4.fc24     fedora-nvidia      23 M
 cuda-cusolver-devel     x86_64     1:8.0.44-4.fc24     fedora-nvidia     4.1 M
 cuda-cusparse           x86_64     1:8.0.44-4.fc24     fedora-nvidia      23 M
 cuda-cusparse-devel     x86_64     1:8.0.44-4.fc24     fedora-nvidia      23 M
 cuda-devel              x86_64     1:8.0.44-4.fc24     fedora-nvidia     1.6 M
 cuda-libs               x86_64     1:8.0.44-4.fc24     fedora-nvidia     6.4 M
 cuda-npp                x86_64     1:8.0.44-4.fc24     fedora-nvidia      91 M
 cuda-npp-devel          x86_64     1:8.0.44-4.fc24     fedora-nvidia      47 M
 cuda-nvgraph            x86_64     1:8.0.44-4.fc24     fedora-nvidia     4.6 M
 cuda-nvgraph-devel      x86_64     1:8.0.44-4.fc24     fedora-nvidia      12 k
 cuda-nvml-devel         x86_64     1:8.0.44-4.fc24     fedora-nvidia      41 k
 cuda-nvrtc              x86_64     1:8.0.44-4.fc24     fedora-nvidia     6.6 M
 cuda-nvrtc-devel        x86_64     1:8.0.44-4.fc24     fedora-nvidia      16 k
Transaction Summary
Install  24 Packages
Total size: 655 M
Installed size: 1.4 G
Is this ok [y/N]:

The nvidia-driver-NVML-devel package, which included the NVML header (for libnvidia-ml.so), has now been made obsolete by the new headers that are part of CUDA 8. So the cuda-nvml-devel package will take care of that; again, this is the same as in the Nvidia repository. Everything that was requiring the NVML header now refers to that package instead of the previous one. I will leave it like that for a few releases and then remove the Obsoletes/Provides tags from the various SPEC files.

The header is also required for building the latest nvidia-settings from the 375.10 source; this has been taken into account by making the CUDA package buildable on i686 while generating only the cuda-nvml-devel subpackage there.

Extra stuff

In addition to the libraries bundled in the CUDA toolkit, the cuDNN library for deep neural networks is also included in the repository.

As usual, you are welcome to open bugs / request stuff / comment on the GitHub repositories.

#CCNA 5.0 Level 1 Chapter 2

Posted by Alvaro Castillo on October 25, 2016 07:38 PM
Good afternoon,

Here are my notes for the CCNA 5.0 course (in Spanish), level 1, continuing with chapter 2, so you can enjoy them and go straight to the important things. This chapter tends to have many examples, but above all it contains many commands.

This is the summary of this chapter:
  • Explain what kind of OS most Cisco devices run
  • How to configure that OS, including the device name, message of the day...
  • Explain how devices communicate over the network
  • Get a table with the different hierarchical access modes within IOS
  • A set of key combinations to make working in CLI environments easier
To download it, you have access to two git repositories called #ptlabs. I keep a CHANGELOG file that records all changes to the repository; that is, if there is any correction of an erratum, extra information, etc., it gets recorded there.

There is also a CHECKSUM file so you can verify the integrity of the download and know for certain whether it came down corrupted.
Remember that if you want to practise with the Packet Tracer software and the program does not start, you can use my script, which works on openSUSE Leap, Debian, Fedora and Gentoo.

New flatpak command line

Posted by Alexander Larsson on October 25, 2016 05:27 PM

Today I released version 0.6.13 of flatpak which has a lot of nice new features. One that I’d like to talk a bit about is the new command line argument format.

The flatpak command line was always a bit low-level and hard to use. Partly this was because of a lack of focus on it, and partly due to the fact that the expected UI for flatpak for most people would be a graphical user interface like gnome-software. However, with this new release this changed, and flatpak is now much nicer to use from the command line.

So, what is new?

Flatpakrepo files

Before you can really use flatpak you have to configure it so it can find the applications and runtimes. This means setting up one or more remotes. Historically you did this by manually specifying all the options for the remote as arguments to the flatpak remote-add command. To make this easier we added a file format (.flatpakrepo) to describe a remote, and made it easy to use it.
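
For reference, a .flatpakrepo file is a small ini-style description of the remote. This is only a rough sketch of its shape with made-up values (real files usually also carry a GPGKey entry and other metadata):

$ cat > example.flatpakrepo << 'EOF'
[Flatpak Repo]
Title=Example repository
Url=https://example.org/repo/
EOF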

The new canonical example to configure the gnome stable repositories is:

$ flatpak remote-add --from gnome https://sdk.gnome.org/gnome.flatpakrepo
$ flatpak remote-add --from gnome-apps https://sdk.gnome.org/gnome-apps.flatpakrepo

Alternatively you can just click on the above links and they should open in gnome-software (if you have new enough versions installed).

Multiple arguments to install/update/uninstall

Another weakness of the command line has been that commands like install, uninstall and update only accepted one application name argument (and optionally a branch name). This made it hard to install multiple apps in one go, and the separate branch name made it hard to cut-and-paste from the output of e.g. flatpak list.

Instead of the separate branch name, all the commands now take multiple "partial refs" as arguments. These are partial versions of the OSTree ref format that flatpak uses internally. So, for an internal reference like app/org.gnome.gedit/x86_64/stable, one can now specify one of these:

org.gnome.gedit
org.gnome.gedit//stable
app/org.gnome.gedit/x86_64/stable

And flatpak will automatically fill in the missing part in a natural way, and give a detailed error if you need to specify more details:

$ flatpak install gnome org.gnome.Platform
error: Multiple branches available for org.gnome.Platform, you must specify one of: org.gnome.Platform//3.20, org.gnome.Platform//3.22, org.gnome.Platform//3.16, org.gnome.Platform//3.18
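
Picking one of the listed branches in the partial ref resolves the ambiguity:

$ flatpak install gnome org.gnome.Platform//3.22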

Automatic dependencies

The other problem with the CLI has been that it is not aware of runtime dependencies. To run an app you generally had to know what runtime it used and install that yourself. The idea here was that the commandline should be simple, non-interactive and safe. If you instead use the graphical frontend it will install dependencies interactively so you can control where things get installed from.

However, this just made the CLI a pain to use, and you could easily end up in situations where things didn’t work. For instance, if you updated gedit from 3.20 to 3.22 it suddenly depended on a new runtime version, and if you weren’t aware of this it probably just stopped working.

Of course, we still can’t just install dependencies from wherever we find them, because that would be a security issue (any configured remote can supply a runtime for any applications). So, the solution here is for flatpak to become interactive:

$ flatpak update org.gnome.gedit
Looking for updates...
Required runtime for org.gnome.gedit/x86_64/stable (org.gnome.Platform/x86_64/3.22) is not installed, searching...
Found in remote gnome, do you want to install it? [y/n]: y
Installing: org.gnome.Platform/x86_64/3.22 from gnome
Installing: org.gnome.Platform.Locale/x86_64/3.22 from gnome
Updating: org.gnome.gedit/x86_64/stable from gnome-apps
Updating: org.gnome.gedit.Locale/x86_64/stable from gnome-apps

If you have remotes you never want to install dependencies from, you can add them with --no-use-for-deps, and they will not be used. Flatpakrepo files for app-only repositories should set NoDeps=true.

Note that this is not a package-system-like dependency solver that can solve sudoku. It is still a very simple two-way split.

Flatpakref files

The primary way Flatpak is meant to be used is that you configure a few remotes that have most of the applications that you use, then you install from these either on the command line or via a graphical installer. However, sometimes it is nice to have a single link you can put on a website to install your application. Flatpak now supports that via .flatpakref files. These are very similar to flatpakrepo files, in that they describe a repository, but they additionally name a particular application in that repository which will be installed.

Such files can be installed by just clicking on them in your web-browser (which will open them for installation in gnome-software) or on the command line:

flatpak install --from https://sdk.gnome.org/gedit.flatpakref

This will try to install the required runtime, so you first need to add the remote with the runtimes.

Launching runtimes

During development and testing it is common to launch commands in a sandbox to experiment with how the runtime works. This was always possible by running a custom command in an application that uses the runtime, like so:

$ flatpak run --command=sh org.gnome.gedit
sh-4.3$ ls /app
bin lib libexec manifest.json share

You can even specify a custom runtime with --runtime. However, there really should be no need to have an application installed to do this, so the new version allows you to directly run a runtime:

$ flatpak run org.gnome.Platform//3.22
sh-4.3$ ls /app

Software Freedom Kosova 2016

Posted by bee2502 on October 25, 2016 05:01 PM

Software Freedom Kosova (SFK) 2016 took place in Prishtina from October 21-23, 2016. We were able to push a special Fedora badge for SFK to be awarded to attendees who visited the Fedora booth. The badge was awarded 14 times: 12 went to existing contributors, while 2 new contributors were onboarded at the event! Yay – we look forward to seeing you in the community, nafieshehu and marianab.

I was supposed to be speaking on 'What is Machine Learning and how FOSS Organizations use it?' at SFK. However, there were problems with my visa and the Easyjet crew wouldn't let me board my flight to Prishtina. The SFK organizers (most of them contributors from FLOSSK) were really understanding and helpful, and I was able to conduct my talk remotely in the end. I am especially thankful to Jona Azizaj and Ardian Haxha.

The talk was about:

  • What Machine Learning is, and its applications
  • Basic Machine Learning algorithms, with examples
  • Case studies of FOSS organizations: how organizations like Fedora, Wikimedia and Mozilla use Machine Learning
  • Resources for anyone interested in Machine Learning

You can find the preliminary version of the slides for the talk here.

The talk aimed to introduce Machine Learning to the audience, and I would love to interact with anyone interested in learning more about Machine Learning or contributing to FOSS organizations in this area.

Jona and I were also going to conduct a workshop on Introduction to Non Technical Contributions in Free and Open Source Software. It is a common misconception that you need to know programming to contribute to FLOSS organizations and that contributors only work on technical tasks. This workshop aimed to bust these myths by introducing participants to non-coding tasks in FLOSS organizations, mainly Fedora. We (Jona and I) were going to talk about diverse non-technical contribution opportunities in FOSS using Fedora as an example: Marketing, Translation, Design, writing articles for Fedora Magazine, or contributing to Community Operations. Jona went on to conduct the workshop with other Fedora contributors at SFK like Giannis K., Anxhela H, Elio Qoshi and others.

The slides for the workshop are here. You can view them better after you download them.

Even though I couldn't attend SFK, I could feel that it was a huge success all the way from Berlin!

I hope to learn from the presentations of the talks I wanted to attend but couldn't – especially 'Hacking the tenders data: the quest for public spending patterns' by Victor Netu and 'Using Open Source Technology for Social Change' by Blinera Merta. I had also planned to visit FLOSSK, Prishtina Hackerspace and Girls Coding Kosova – some awesome FOSS communities in Prishtina – to learn about their ongoing projects and their journeys (Prishtina Hackerspace was founded by crowdsourcing the funds!), and I hoped to apply those learnings in India – but better luck next time (maybe during the next SFK).

Shihemi se shpejti, Prishtina!



Microsoft Catalog Files and Digital Signatures decoded

Posted by Andreas Schneider on October 25, 2016 03:17 PM

TL;DR: Parse and print .cat files: parsemscat


Günther Deschner and I are looking into the new Microsoft Printing Protocol [MS-PAR]. Printing always means you have to deal with drivers. Microsoft package-aware v3 print drivers and v4 print drivers contain Microsoft Catalog files.

A Catalog file (.cat) is a digitally-signed file. To be more precise, it is a PKCS7 certificate with embedded data. Before I started to look into the problem of understanding them, I searched the web to see if someone had already decoded them. I found a post by Richard Hughes: Building a better catalog file. Richard described some of the things we had already discovered and some new details. It looks like he gave up when it came down to understanding the embedded data and writing an ASN.1 description for it. I spent the last two weeks decoding the myth of Catalog files and created a tool for parsing them and printing what they contain in human-readable form.
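
If you just want to convince yourself that a .cat file really is a DER-encoded PKCS7 blob with extra payload, openssl can dump its raw ASN.1 structure before you reach for dedicated tooling (the file name here is a placeholder):

$ openssl asn1parse -inform DER -in driver.cat | head -n 40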


The embedded data in the PKCS7 signature of a Microsoft Catalog is a Certificate Trust List (CTL). Nikos Mavrogiannopoulos taught me ASN.1 and helped to create an ASN.1 description for the CTL. With this description I was able to start parsing Catalog files.



CatalogNameValue ::= SEQUENCE {
    name       BMPString, -- UCS2-BE
    flags      INTEGER,
    value      OCTET STRING -- UCS2-LE
}




The PKCS7 part of the .cat file is the signature for the CTL. Nikos implemented support in GnuTLS for extracting the embedded raw data from the PKCS7 signature. It is also possible to verify the signature using GnuTLS now!
The CTL includes members and attributes. A member holds information about a file name included in the driver package, OS attributes, and often a hash of the file's content, either SHA1 or SHA256. I've written abstracted functions, so it is possible to create a library as well as a simple command line tool called dumpmscat.

Here is an example of the output:

  CHECKSUM: E5221540DC4B974F54DB4E390BFF4132399C8037

  FILE: sambap1000.inf, FLAGS=0x10010001
  OSATTR: 2:6.0,2:6.1,2:6.4, FLAGS=0x10010001
  MAC: SHA1, DIGEST: E5221540DC4B974F54DB4E39BFF4132399C8037

In addition, the CTL normally has a list of attributes. These attributes typically hold OS flags, version information and hardware IDs.

  NAME=OS, FLAGS=0x10010001, VALUE=VistaX86,7X86,10X86
  NAME=HWID1, FLAGS=0x10010001, VALUE=usb\\vid_0ff0&pid_ff00&mi_01

Currently the project only has a command line tool called dumpmscat, and it can only print the CTL for now. I plan to add options to verify the signature, dump only parts, etc. When this is done I will create a library so it can easily be consumed by other software. If someone is interested and wants to contribute, something like signtool.exe would be nice to have.

F24-20161023 updated Lives released

Posted by Ben Williams on October 25, 2016 03:14 PM
It is with great pleasure that we announce that the community-run respin team has produced yet another round of updated ISOs. This round carries the 4.7.9-200 kernel along with over 755 MB of updates on average (some desktop environments more, some less) since the Gold release back in June. This kernel has the patch for the Dirty COW vulnerability.
As always, the ISO and torrent links can be found at http://tinyurl.com/Live-respins2

Containerization and Deployment of Application on Atomic Host using Ansible Playbook

Posted by Trishna Guha on October 25, 2016 10:52 AM

This article describes how to build a Docker image and deploy a containerized application on an Atomic host (or any remote host) using an Ansible playbook.

Building a Docker image for an application and running a container, or a cluster of containers, is nothing new. The idea here is to automate the whole process, and this is where Ansible playbooks come into play.

Note that you can use a Cloud or Workstation based image to execute the following tasks. Here I am issuing the commands on Fedora Workstation.

Let's see how to automate the containerization and deployment process for a simple Flask application.

We are going to deploy the container on a Fedora Atomic host.

First, let's create a simple Flask Hello-World application.

This is the Directory structure of the entire Application:

├── ansible
│   ├── ansible.cfg
│   ├── inventory
│   └── main.yml
├── Dockerfile
└── flask-helloworld
    ├── hello_world.py
    ├── static
    │   └── style.css
    └── templates
        ├── index.html
        └── master.html


hello_world.py:

from flask import Flask, render_template

APP = Flask(__name__)

# Route for the application root; renders the index template.
@APP.route('/')
def index():
    return render_template('index.html')

if __name__ == '__main__':
    APP.run(debug=True, host='')


static/style.css:

body {
  background: #F8A434;
  font-family: 'Lato', sans-serif;
  color: #FDFCFB;
  text-align: center;
  position: relative;
  bottom: 35px;
  top: 65px;
}

.description {
  position: relative;
  top: 55px;
  font-size: 50px;
  letter-spacing: 1.5px;
  line-height: 1.3em;
  margin: -2px 0 45px;
}


templates/master.html:

<!doctype html>
<html>
  <head>
    {% block head %}
    <title>{% block title %}{% endblock %}</title>
    {% endblock %}
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" crossorigin="anonymous">
    <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.6.3/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-T8Gy5hrqNKT+hzMclPo118YTQO6cYprQmhrYwIiQ/3axmI1hQomh7Ud2hPOy8SP1" crossorigin="anonymous">
    <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
    <link href='http://fonts.googleapis.com/css?family=Lato:400,700' rel='stylesheet' type='text/css'>
  </head>
  <body>
    <div id="container">
      {% block content %}
      {% endblock %}
    </div>
  </body>
</html>


templates/index.html:

{% extends "master.html" %}

{% block title %}Welcome to Flask App{% endblock %}

{% block content %}
<div class="description">
  Hello World
</div>
{% endblock %}

Let’s write the Dockerfile.

FROM fedora
MAINTAINER Trishna Guha <tguha@redhat.com>

RUN dnf -y update && dnf -y install python-flask python-jinja2 && dnf clean all
RUN mkdir -p /app

COPY files/ /app/

# Run from /app so the CMD below can find hello_world.py.
WORKDIR /app

ENTRYPOINT ["python"]
CMD ["hello_world.py"]

Now we will work on the Ansible playbook for our application, which handles the automation part:

Create inventory file:

[atomic]
IP_ADDRESS_OF_HOST ansible_ssh_private_key_file=<'PRIVATE_KEY_FILE'>

Replace IP_ADDRESS_OF_HOST with the IP address of the atomic/remote host and ‘PRIVATE_KEY_FILE’ with your private key file.

Create ansible.cfg file:

[defaults]
inventory = inventory
remote_user = USER

Replace USER with the user of your remote host.

Create main.yml file:

- name: Deploy Flask App
  hosts: atomic
  become: yes

  vars:
    src_dir: [Source Directory]
    dest_dir: [Destination Directory]

  tasks:
    - name: Create Destination Directory
      file:
        path: "{{ dest_dir }}/files"
        state: directory
        recurse: yes

    - name: Copy Dockerfile to host
      copy:
        src: "{{ src_dir }}/Dockerfile"
        dest: "{{ dest_dir }}"

    - name: Copy Application to host
      copy:
        src: "{{ src_dir }}/flask-helloworld/"
        dest: "{{ dest_dir }}/files/"

    - name: Make sure that the current directory is {{ dest_dir }}
      command: cd {{ dest_dir }}

    - name: Build Docker Image
      command: docker build --rm -t fedora/flask-app:test -f "{{ dest_dir }}/Dockerfile" "{{ dest_dir }}"

    - name: Run Docker Container
      command: docker run -d --name helloworld -p 5000:5000 fedora/flask-app:test

Replace [Source Directory] in the src_dir field in main.yml with the /path/to/src_dir on your current host.

Replace [Destination Directory] in the dest_dir field in main.yml with the /path/to/dest_dir on your remote Atomic host.

Now simply run $ ansible-playbook main.yml :). To verify that the application is running, issue $ curl http://localhost:5000 on your Atomic/remote host.
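
As a quick sanity check on the Atomic host, you can also confirm that the container came up before poking the application itself (a minimal sketch):

# List the running container started by the playbook, then hit the Flask app
$ docker ps --filter name=helloworld
$ curl http://localhost:5000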

You can also manage your containers running on remote host using Cockpit. Check this article to know how to use Cockpit to manage your containers: https://fedoramagazine.org/deploy-containers-atomic-host-ansible-cockpit



Here is the repository of the above example:  https://github.com/trishnaguha/fedora-cloud-ansible/tree/master/examples/flask-helloworld

A future post will be about ansible-container, where I will describe how we can build a Docker image and orchestrate containers without writing any Dockerfile 🙂.

ownCloud on your Fedora

Posted by Daniel Lara on October 25, 2016 10:27 AM
For this tip we are using Fedora 24 Server.

Let's start the installation with Apache:

 # dnf install httpd -y

Enable it at boot:

# systemctl enable httpd.service

Now let's set up the MariaDB database:

# dnf install mariadb mariadb-server -y

Now enable MariaDB at system startup:

# systemctl enable mariadb.service

Start the service:

# systemctl start mariadb.service

Set the MariaDB root password:

# mysqladmin -u root password 'fedora'

Let's create the ownCloud database:

# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE owncloud;
MariaDB [(none)]> GRANT ALL ON owncloud.* to 'owncloud'@'localhost' IDENTIFIED BY 'fedora';
MariaDB [(none)]> exit

Now install ownCloud:

# dnf install owncloud -y

Start Apache:

# systemctl start httpd.service

After that, just restart and you are ready to access it.


Reference guide:

Episode 10 - The super botnet that nobody can stop

Posted by Open Source Security Podcast on October 24, 2016 09:25 PM
Kurt and Josh discuss Dirty COW, the big IoT DDoS, and Josh can't pronounce Mirai or Dyn.

Download Episode

Show Notes

Fedora Join meeting 24 October 2016 - Summary

Posted by Ankur Sinha "FranciscoD" on October 24, 2016 08:26 PM

We had another fortnightly Fedora Join SIG meeting today. We had decided on the specific goal of the SIG the last time we'd met. In short, we'd established that the SIG would work on setting up and maintaining channels where newbies can communicate with the community before they've begun contributing. We'd leave the tooling to CommOps, who are working on this already. Following up from then, this week we got together to see what tasks we should get on with. The links are here:

We began with a discussion of where the SIG needs to be mentioned for us to gain visibility. Of course, this discussion kept returning to tooling - wcidff, wiki pages, the website, and so on. We quickly agreed that we need to first make the SIG more visible within the community so that when we begin to funnel newbies to the channels, we have enough folks to help them out. Only then would it make sense to include our presence on our public facing pages.

So, for the next two weeks, we're working on:

  • find a way to get some metrics on the activity of the IRC channel - to see how many of us are there, how many newbies come in, what the most active times are and so on and so forth.

This is partially solved already. I remembered that the IRC support SIG used to have a weekly stats HTML page back in the day. nirik (who knows pretty much everything!) quickly told me that a package called pisg does this, and I've started playing with it already. An example stats page is here: https://ankursinha.fedorapeople.org/fedora-join-stats/201610.html. There are other alternatives too - superseriousstats, for example. I'll tinker with these a bit more to see if we can get the stats we need; neither can be tied in with FAS, for example. The other tasks are about spreading the word to make the community better aware of the channels:

  • announce the SIG on the community blog
  • announce the SIG on the announce mailing list

We'll get these out in the next few days. Since you're reading this already, please do hang out in the channels that we've set up. We've already got some newbies trickling in to the IRC channel and sometimes we miss their questions if we're not actively monitoring the channel:

If you do run into people that need help getting started, please send them over to one of our channels too.

The next stage of course would be to publicise these channels outside the community, but that's a few weeks away at the moment. That's it for this meeting. Have a good week ahead!

The perils of long development cycles

Posted by Tomasz Torcz on October 24, 2016 07:37 PM

As of today, the latest version of systemd is v231, released in July 2016. This is the version that will be in Fedora 25 (to be GA in three weeks). That's quite a long time between releases for systemd – we used to have a new version every two weeks.

During the hackfest at systemd.conf 2016, I tried to tackle three issues biting me in Fedora 24 (v229, released in February this year) and F25. The outcome was… unexpected.

The first one was a minor issue in the networking part of the systemd suite. Data received via the LLDP protocol had its last letter cut off. I started searching through the networkd code, looking for this off-by-one error, but it looked different: the output I was seeing didn't match the code. Guess what? LLDP presentation was reworked shortly after v229. The rework fixed the issue, but Fedora 24 never received the backport. Which is fair, as the issue was cosmetic and not really important.

So I moved on to another cosmetic issue which was annoying me. In the list-timers display, multi-byte characters in day names were causing columns to be misaligned. I had discussed the issue as it appeared in Fedora 25 (v231) with Zbyszek during the conference, and I was ready to fix it. Again, I wasn't able to find the code, because it had been reworked after v231 and the problem was corrected. I just wasted some time digging through the codebase.

The last issue, which I didn't have time to chase, was more serious. Sometimes output from services stopped appearing in the journal. It turned out to be a genuine bug (#4408). Zbyszek fixed it last week; it will be in v232. This is a serious flaw, and the fix should be backported to v231 in Fedora 25 and other stable versions.

In summary, long development periods increase the need to do backports, escalating the workload on maintainers. Delays cause contributors using stable versions to lose time looking through evolved code, which may no longer be buggy. Prolonged delays between stable releases make the distributions ship old code. Even Fedora, which tries to be first.

Security Score Card For Fedora Infrastructure

Posted by Kevin Fenzi on October 24, 2016 05:41 PM

Josh is asking folks to send him their security score card via twitter. Since I’ve been trying to blog more and like pontificating, I thought I would respond here in a blog post. 😉

There are 4 parts to the scorecard:

  1. Number of staff
  2. Number of “systems”
  3. Lines of code
  4. Number of security people

For Fedora Infrastructure, some of these are pretty hard to answer, but here are some attempts:

  1. Fedora Infrastructure is an open organization. People who show up and start doing things are granted more and more permissions based on their merit. Sometimes people drift away to other things, sometimes new people show up. There are some people employed by Fedora's primary sponsor, Red Hat, specifically to work on Fedora. Those account for 3.5 sysadmins, 5 application developers, 2 release engineers, and 2 design folks. Specific areas will have potentially lots more community folks working on them. So, answer: 13-130?
  2. This one is easier to quantify. We have (almost) everything in ansible, so right now our ansible inventory + some misc non ansible hosts is around 616 hosts.
  3. This is another one that's difficult. We have a lot of applications (see https://apps.fedoraproject.org/). Some of them are just upstream projects we have instances of (mediawiki, askbot, etc). Others are things where we are the primary developers (fedocal, pagure, etc). It would be a fun project to look at all these and count up the lines of code (one rough approach is sketched after this list). Answer: dunno. ;(
  4. If this means full time security people working only on security issues, then 0. We do have an excellent security officer in Patrick, who is super smart and good at auditing and looking for issues before they bite us, but he's not doing that full time. Others on the sysadmin teams do security updates, monitor lists/errata, and watch logs for out-of-the-ordinary behavior, but that's also not full time. So, answer: 0 or 1 or 3?
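
For the "lines of code" question above, one rough approach would be to run cloc over each application's checkout; a minimal sketch, assuming cloc is installed from the Fedora repositories and using a placeholder checkout path:

$ sudo dnf install cloc
$ cd ~/src/some-fedora-app && cloc .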

So, from this I think it would be nice to have a better idea of our applications (not so much lines of code, but just keeping track of things better and knowing who knows each application). It would be awesome to get some full time security folks, but I am not sure that will be in the cards.

I’d like to thank Josh for bringing up the discussion… it’s an interesting one for sure.

Spark on Kubernetes at Spark Summit EU

Posted by Will Benton on October 24, 2016 02:54 PM

I’ll be speaking about Spark on Kubernetes at Spark Summit EU this week. The main thesis of my talk is that the old way of running Spark in a dedicated cluster that is shared between applications makes sense when analytics is a separate workload. However, analytics is no longer a separate workload — instead, analytics is now an essential part of long-running data-driven applications. This realization motivated my team to switch from a shared Spark cluster to multiple logical clusters that are co-scheduled with the applications that depend on them.

I’m glad for the opportunity to get together with the Spark community and present on some of the cool work my team has done lately. Here are some links you can visit to learn more about our work and other topics related to running Spark on Kubernetes and OpenShift:

From There to Here (But Not Back Again)

Posted by Red Hat Security on October 24, 2016 01:30 PM

Red Hat Product Security recently celebrated our 15th anniversary this summer and while I cannot claim to have been with Red Hat for that long (although I’m coming up on 8 years myself), I’ve watched the changes from the “0day” of the Red Hat Security Response Team to today. In fact, our SRT was the basis for the security team that Mandrakesoft started back in the day.

In 1999, I started working for Mandrakesoft, primarily as a packager/maintainer. The offer came, I suspect, because of the amount of time I spent volunteering to maintain packages in the distribution. I also was writing articles for TechRepublic at the time, so I also ended up being responsible for some areas of documentation, contributing to the manual we shipped with every boxed set we sold (remember when you bought these things off the shelf?).

Way back then, when security flaws were few and far between (well, the discovery of these flaws, not the existence of them, as we’ve found much to our chagrin over the years), there was one individual at Mandrakesoft who would apply fixes and release them. The advisory process was ad-hoc at best, and as we started to get more volume it was taking his time away from kernel hacking and so they turned to me to help. Having no idea that this was a pivotal turning point and would set the tone and direction of the next 16 years of my life, I accepted. The first security advisory I released for Linux-Mandrake was an update to BitchX in July of 2000. So in effect, while Red Hat Product Security celebrated 15 years of existence this summer, I celebrated my 16th anniversary of “product security” in open source.

When I look back over those 16 years, things have changed tremendously. When I started the security “team” at Mandrakesoft (which, for the majority of the 8 years I spent there, was a one-man operation!) I really had no idea what the future would hold. It blows my mind how far we as an industry have come and how far I as an individual have come as well. Today it amazes me how I handled all of the security updates for all of our supported products (multiple versions of Mandriva Linux, the Single Network Firewall, Multi-Network Firewall, the Corporate Server, and so on). While there was infrastructure to build the distributions, there was none for building or testing security updates. As a result, I had a multi-machine setup (pre-VM days!) with a number of chroots for building and others for testing. I had to do all of the discovery, the patching, backporting, building, testing, and the release. In fact, I wrote the tooling to push advisories, send mail announcements, build packages across multiple chroots, and more. The entire security update “stack” was written by me and ran in my basement.

During this whole time I looked to Red Hat for leadership and guidance. As you might imagine, we had to play a little bit of catchup many times and when it came to patches and information, it was Red Hat that we looked at primarily (I’m not at all ashamed to state that quite often we would pull patches from a Red Hat update to tweak and apply to our own packages!). In fact, I remember the first time I talked with Mark Cox back in 2004 when we, along with representatives of SUSE and Debian, responded to the claims that Linux was less secure than Windows. While we had often worked well together through cross-vendor lists like vendor-sec and coordinated on embargoed issues and so on, this was the first real public stand by open source security teams against some mud that was being hurled against not just our products, but open source security as a whole. This was one of those defining moments that made me scary-proud to be involved in the open source ecosystem. We set aside competition to stand united against something that deeply impacted us all.

In 2009 I left Mandriva to work for Red Hat as part of the Security Response Team (what we were called back then). Moving from a struggling small company to a much larger company was a very interesting change for me. Probably the biggest change and surprise was that Red Hat had the developers do the actual patching and building of packages they normally maintained and were experienced with. We had a dedicated QA team to test this stuff! We had a rigorous errata process that automated as much as possible and enforced certain expectations and conditions of both errata and associated packages. I was actually able to focus on the security side of things and not the “release chain” and all parts associated with it, plus there was a team of people to work with when investigating security issues.

Back at Mandriva, the only standard we focused on was the usage of CVE. Coming to Red Hat introduced me to the many standards that we not only used and promoted, but also helped shape. You can see this in CVE, and now DWF, OpenSCAP and OVAL, CVRF, the list goes on. Not only are we working to make, and keep, our products secure for our customers, but we apply our expertise to projects and standards that benefit others as these standards help to shape other product security or incident response teams, whether they work on open source or not.

Finally (as an aside and a “fun fact”) when I first started working at Mandrakesoft with open source and Linux, I got a tattoo of Tux on my calf. A decade later, I got a tattoo of Shadowman on the other calf. I’m really lucky to work on things with cool logos, however I’ve so far resisted getting a tattoo of the heartbleed logo!

I sit and think about that initial question that I was asked 16 years ago: “Would you want to handle the security updates?”. I had no idea it would send me to work with the people, places, and companies that I have. No question there were challenges, and more than a few times I’m sure the internet was literally on fire, but it has been rewarding and satisfying. And I consider myself fortunate that I get to work every day with some of the smartest, most creative, and most passionate people in open source!

valgrind 3.12.0 and Valgrind@Fosdem

Posted by Mark J. Wielaard on October 24, 2016 01:13 PM

Valgrind 3.12.0 was just released with lots of exciting improvements. See the release notes for all the details. It is already packaged for Fedora 25.

Valgrind will also have a developer room at Fosdem on Saturday 4 February 2017 in Brussels, Belgium. Please join us, regardless of whether you are a Valgrind core hacker, Valgrind tool hacker, Valgrind user, Valgrind packager or hacker on a project that integrates, extends or complements Valgrind.

Please see the Call for Participation for more information on how to propose a talk or discussion topic.

Switchable / Hybrid Graphics support in Fedora 25

Posted by Hans de Goede on October 24, 2016 12:56 PM
Recently I've been working on improving hybrid graphics support for the upcoming Fedora 25 release. Although Fedora 25 Workstation will use Wayland by default for its GNOME 3 desktop, my work has been on hybrid gfx support under X11 (Xorg), as GNOME 3 on Wayland does not yet support hybrid gfx.

So no Wayland yet, but there are still a lot of noticeable hybrid gfx improvements, and users of laptops with hybrid gfx using the open-source drivers should have a much smoother user experience than before. Here is an (incomplete) list of generic improvements:

  • Fix the discrete GPU not suspending after using an external monitor, which halved laptop battery life

  • xrandr --listproviders no longer shows 3 devices instead of 2 on laptops with 2 GPUs

  • Hardware cursor support when both GPUs have active video outputs; previously X would fall back to software cursor rendering in this case, which would typically lead to a flickering, or entirely invisible, cursor at the top of the screen

Besides this, a lot of work has been done on fixing hybrid gfx issues in the modesetting driver. This is important since in Fedora we use the modesetting driver on Skylake and newer Intel integrated gfx, as well as on Maxwell and newer Nvidia discrete GPUs. The following issues have been fixed in the modesetting driver (and thus on laptops with a Skylake CPU and/or a Maxwell or newer GPU):

  • Hide the HW cursor on init; this fixes 2 cursors showing on some setups

  • Make the modesetting driver support DRI_PRIME=1 render offloading when the secondary GPU is using the modesetting driver (see the sketch after this list)

  • Fix misrendering (tiled vs linear) when using DRI_PRIME=1 render offloading and the primary GPU is using the modesetting driver

  • Fix GL apps running at 1 fps when shown on a video-output of the secondary GPU and the primary GPU is using the modesetting driver

  • Fix secondary GPU video output partial screen updates (part of the screen showing a previous frame) when the discrete GPU is the secondary GPU and the primary GPU is using the modesetting driver

  • Fix secondary GPU video output updates lagging (or sometimes the last frame simply not being shown at all because no further rendering is happening) when the discrete GPU is the secondary GPU and the primary GPU is using the modesetting driver
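
To make the render offloading part of this more concrete, here is a minimal sketch of how it looks from a terminal; glxgears and glxinfo are only stand-ins for whatever GL application you care about and assume the glx-utils package is installed:

# List the providers X knows about; a hybrid gfx laptop should show exactly 2
xrandr --listproviders

# Run a single application on the discrete GPU via render offloading
DRI_PRIME=1 glxgears -info

# Anything started without DRI_PRIME keeps using the integrated GPU
glxinfo | grep "OpenGL renderer"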

Note that this coming Wednesday (October 26th) we're having a Fedora Better Switchable Graphics Test Day; if you have a laptop with hybrid gfx, please join us to help further improve hybrid gfx support.

Join the Fedora 25 test day for the Cloud and Atomic images

Posted by Charles-Antoine Couret on October 24, 2016 12:16 PM

Today, Monday October 24, is a day dedicated to a specific set of tests: the Fedora Cloud and Atomic images. During the development cycle, the quality assurance team dedicates a few days to particular components or new features in order to surface as many problems as possible around them.

The team also provides a list of specific tests to carry out. You just have to follow them, compare your result with the expected result, and report it.

What is it?

The cloud images are Fedora installation images dedicated to the Cloud. Like Workstation, which is the standard edition, and Server for servers, Cloud is one of the Fedora products designed to address specific use cases and offer a consistent user experience around them.

What makes the cloud images special is that they are lightweight, so that they can be instantiated several times on the same machine via virtual machines or a similar solution.

Today's tests cover the following (a rough command sketch follows this list):

  • The system booting correctly, with SSH access open;
  • Updating the system atomically;
  • Rolling back after an atomic update;
  • Launching applications via Docker;
  • Managing Docker's disk space.
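
For readers who have never used an Atomic host, here is a rough, non-authoritative sketch of the kind of commands these test cases exercise; the exact steps are on the test page linked below:

# Atomic host update and rollback
sudo rpm-ostree upgrade            # pull and apply a system update atomically
sudo systemctl reboot              # boot into the new deployment
sudo rpm-ostree rollback           # return to the previous deployment

# Basic check that Docker can run containers
sudo docker run --rm hello-world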

How to participate?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you find a bug, it must be reported on Bugzilla. If you don't know how, feel free to consult the corresponding documentation.

Also, although a single day is dedicated to these tests, it is still perfectly possible to run them a few days later! The results will remain largely relevant.

Lack of availability

Posted by Paul Mellors [MooDoo] on October 24, 2016 10:21 AM

A few people have been asking where I’ve been online the last few weeks/months and whether I’m OK.

Short answer – Yes I’m fine🙂

Longer [only slightly] answer – Yes I’m fine, but my laptop broke. A couple of weeks ago my laptop started powering off for no reason; it wouldn’t stay up for more than 2 minutes at a time. To be honest it’s an old laptop, so apart from all the standard checks/taking apart/cleaning/brief investigation, it’s not worth spending any more time on it. As it was my workhorse, I don’t have a replacement yet. I’ve been loaned a laptop, which I’m grateful for, but I really need to replace my own. For the time being, then, I’ll not be online much in the evenings and weekends. Please bear with me. Guess I’m on the lookout for a cheap laptop replacement that runs Linux.

This has been a #scheduledpost announcement.

What is the GRUB2 boot loader?

Posted by Fedora Magazine on October 24, 2016 08:00 AM

There are various things that make up an operating system, and one of the most critical parts of using any of them is powering on the machine. During this process, the computer executes a small program stored in read-only memory (ROM), which in turn locates and starts another small program from disk. That second program is known by many names, but is most often called a boot loader. In almost every Linux distribution, including Fedora, GRUB2 (or GRand Unified Bootloader 2) is the default boot loader. Even though it is a critical piece of the operating system, many people aren’t aware of the boot loader, all that goes into it, or how it can be customized.

Every computer operating system needs a kernel and a boot loader to load and boot the operating system. In a Linux system, the kernel and the initial ramdisk (or initrd) play the major roles in loading the operating system from a disk drive into memory (RAM). GRUB2 can boot a wide range of operating systems, including Windows, virtually every Linux distribution, and many Unix-like operating systems such as macOS.

Why GRUB2?

Windows 10, Fedora 24, CentOS 7.2, and macOS 10.11.5 El Capitan operating systems in GRUB2 menu

There are many different types of firmware that initialize system hardware during the startup process, including options such as Open-Efi and legacy BIOS / UEFI, and GRUB2 supports them. This broad range of compatibility with various firmware ensures that GRUB2 can be used on almost any system. It works on some of the oldest machines still running as well as some of the newest hardware on the market.

It also has the ability to boot from many common file system formats, such as HFS+ (macOS), NTFS (often Windows), ext3/4 (often Linux), XFS, and more. It also supports both the MBR (Master Boot Record) and GPT (GUID Partition Table) partitioning schemes.

GRUB2 security

The design of GRUB2 is security-oriented and flexible for a variety of needs. It has two well-known security and privacy features to help protect your system. Normally, anyone sitting at the machine when it starts can use the boot menu to change boot entries or kernel options without logging in. GRUB2 allows you to set a password that must be entered before those settings can be changed. This helps keep your system safe and secure from someone who may have physical access to your machine, for example by preventing them from booting the system with modified kernel options.
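
One way to do this on Fedora is with the grub2-setpassword helper. A minimal sketch, assuming a BIOS install where the password hash is written to /boot/grub2/user.cfg:

# Set a GRUB2 superuser password; after this, editing boot entries
# at the GRUB2 menu requires that password
sudo grub2-setpassword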

Additionally, GRUB2 supports Linux Unified Key Setup, or LUKS. When installing an operating system for the first time or formatting a hard drive, there is an extra security option to encrypt the file system. LUKS is the mechanism used to add that encryption layer to the disk, including the root file system. Before you reach the login screen of Fedora or another Linux distribution, you must enter an encryption passphrase to unlock your system. GRUB2 integrates with LUKS and supports booting a system with an encrypted file system.
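
In the default Fedora layout, /boot itself is left unencrypted and the initramfs prompts for the LUKS passphrase, but if /boot lives on the encrypted volume you can let GRUB2 unlock it instead. A rough sketch, assuming a BIOS install so the configuration is written to /boot/grub2/grub.cfg:

# in /etc/default/grub
GRUB_ENABLE_CRYPTODISK=y

# then regenerate the GRUB2 configuration
sudo grub2-mkconfig -o /boot/grub2/grub.cfg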

In the world of security, not every threat may come from the Internet or a remote hacker. Sometimes the biggest security breach is at the physical layer with who has access to a system. Together, these two security features allow you to keep your system locked down and secure in the event you lose access to your machine.


Example of a customized GRUB2 menu using a custom wallpaper

GRUB2 has powerful settings for security and initializing the operating system, but it also has more features to customize the user experience. It supports background wallpapers, different fonts, and more to personalize your system. The Grub Customizer tool allows you to make these customizations quickly and easily.

To install the tool, open up a command line and enter the following command. You are prompted to install the package.

$ sudo dnf install grub-customizer

After installing, you will be able to open the application on your desktop. When opening it, you will enter your password so the application can make changes to the GRUB2 configuration file. This tool can make advanced changes to your system configuration, but for this guide, we will focus on the Appearance settings tab at the top.

This tab will present you with a variety of options to theme and customize your system. You can select a background image, change the font family, and alter the font colors. The center part of the screen will update with a live preview of what it will look like while you make changes. When you’re done making changes, hit the Save button in the upper-left corner and reboot to see your changes take effect.

Note that if you use a custom font, you MUST use a font size that will fit in your system’s resolution. If the font size is too large for the resolution of your system, GRUB2 will enter a crash loop and you will have to boot from live media (e.g. a USB or CD) to fix this.

Using Grub Customizer to add a background image and custom font to the GRUB2 menu

Fudcon 2016, days of shared experiences…

Posted by Bernardo C. Hermitaño Atencio on October 24, 2016 05:53 AM

After a little more than a week, it is time to look back at what the Fedora Users and Developers Conference – Fudcon Puno 2016 was: a very well attended event, with many talks and workshops held on October 13, 14, and 15. In the following lines I give a brief summary of my contribution to the event.

On the morning of Thursday the 13th came the opening, with the participation of the authorities and students of the Universidad Nacional del Altiplano. Students from other universities and institutes, computing and systems professionals, users, developers, and Fedora ambassadors gathered to take part in FUDCON, with great enthusiasm for sharing and gaining experiences that would benefit everyone.

In the afternoon it was my turn to give a talk called “Experiencias con Fedora en Educación Superior Tecnológica” (“Experiences with Fedora in Higher Technological Education”), where I highlighted the influence of Fedora and its community as the starting point that helped generate ideas and projects which are now solving difficult problems, with some improvements already visible at the Instituto Superior Tecnológico Público Manuel Seoane Corrales in San Juan de Lurigancho – Lima.

On Friday the 14th my session was in the morning, a talk called “Primeros pasos en el terminal de Linux” (“First steps in the Linux terminal”). It was very well attended, and after it finished the participants showed great interest in continuing with the topic. This is further evidence, already well known in Peru, of how outdated and distant these technologies are in the classroom; the curricula for computing and systems students in Peruvian higher education urgently need to be reworked.

For the rest of the day I was able to watch several very important talks, with quite interesting contributions from the speakers that motivated me to keep experimenting…

On the 15th, several UNAP students on their way to becoming part of the community, the ambassadors Lorddemon and Barto, and I, with the guidance of Echevemaster, decided to spend the whole day in the packaging workshop, where we looked for applications that are not yet part of Fedora's package collection so we could study them and try to package them, something I still have to finish.

On the afternoon of the 15th came the closing of the event, with very moving words from the organizers Tonnet and Aly, and from the speakers and participants. After making new friends and gaining new knowledge and new ideas, new challenges arrive that motivate us to keep contributing and to stay engaged with the community and its activities.

Fedora 25 supplemental wallpapers

Posted by Fedora Magazine on October 24, 2016 03:54 AM

Each release, the Fedora Design team works with the community on a set of 16 additional wallpapers. Users can install and use these to supplement the standard wallpaper. The Fedora Design team encourages submissions from the whole community. Contributors then use the Nuancier app to vote on the top 16 to include.

Voting has closed on the extra wallpapers for Fedora 25. Voters chose from among 113 submissions. A total of 262 Fedora contributors voted. They chose the following 16 backgrounds to include in Fedora 25:

  • Drops On Pebble by skamath
  • Winter in Bohemia by Helena Bartosova
  • Matterhorn by Marc Bieler
  • CherryBlossom by aikidouke
  • Milky Way over the Mangart by Ales Krivec
  • Zen by hhlp
  • Raindrop by zdenek
  • Manhattan by laurent1979
  • Grytviken by Andrew England
  • Forest Path by Barbora Hrončoková
  • The Bois de Vincennes by atiana Castro Junqueira
  • Poetry by laurent1979
  • color by komcy
  • Chihuly by davidsandilands
  • Soft Blue by cprofitt
  • Caion-Do-Xingo-Brazil by Claudia Regina

We congratulate all the winners. And we look forward to many high-quality submissions for Fedora 26!

Everything you know about security is wrong

Posted by Josh Bressers on October 23, 2016 10:21 PM
If I asked everyone to tell me what security is, what they do about it, and why they do it, I wouldn't get two answers that were the same. I probably wouldn't even get two that are similar. Why is this? After recording Episode 9 of the Open Source Security Podcast I co-host, I started thinking a lot about measuring. It came up in the podcast in the context of bug bounties, which get exactly what they measure. But do they measure the right things? I don't know the answer, nor does it really matter. It's just important to keep this in mind, as in any system you will get exactly what you measure.

Why do we do the things we do?
I've asked this question before, and I often get answers from people. Some are well thought out, reasonable answers. Some are overly simplistic. Some are just plain silly. All of them are wrong. I'm going to go so far as to say we don't know why we do what we do in most instances. Sure, there might be compliance with a bunch of rules that everyone knows don't really increase security. Some of us fix security bugs so the bad guys don't exploit them (even though very few actually get exploited). Some of us harden systems using rules that probably don't stop a motivated attacker.

Are we protecting data? Are we protecting the systems? Are we protecting people? Maybe we're protecting the business. Sure, that one sounds good.

Measuring a negative
There's a reason this is so hard and weird, though. It's only sort of our fault; it's what we try to measure. We are trying to measure something not happening. You cannot measure how many times an event didn't happen. It's also impossible to prove a negative.

Do you know how many car accidents you didn't get in last week? How about how many times you weren't horribly maimed in an industrial accident? How many times did you not get mugged? These questions don't even make sense, no sane person would even try to measure those things. This is basically our current security metrics.

The way we look at security today is all about the negatives. The goal is to not be hacked. The goal is to not have security bugs. Those aren't goals, those are outcomes.

What's our positive?
In order to measure something, it has to be true. We can't prove a negative; we have to prove something to measure it. So what's the "positive" we need to look for and measure? This isn't easy. I've been in this industry for a long time and I've done a lot of thinking about this. I'm not sure I'm right in my list below, but getting others to think about this is more important than being right.

As security people, we need to think about risk. Our job isn't to stop bad things; it's to understand and control risk. We cannot stop bad things from happening; the best we can hope for is to minimize the damage from bad things. Right about now is where many would start talking about the NIST framework. I'm not going to. NIST is neat, but it's too big for my liking; we need something simple. I'm going to suggest you build a security score card and track it over time. The historical trends will be very important.

Security Score Card
I'm not saying this is totally correct; it's just an idea I have floating in my mind, and you're welcome to declare it insane. Here's what I'm suggesting you track.

1) Number of staff
2) Number of "systems"
3) Lines of code
4) Number of security people

That's it.

Here's why, though. Let's think about measuring positives. We can't measure what isn't happening, but we can measure what we have and what is happening. If you work for a healthy company, 1-3 will be increasing. What does your #4 look like? I bet in many organizations it's flat and grossly understaffed. Good staff will help deal with security problems. If you have a good leader and solid staff, a lot of security problems get dealt with. Things like the NIST framework are what happens when you have competent staff who aren't horribly overworked; you can't force a framework on a broken organization, it just breaks it worse. Every organization is different; there is no one framework or policy that will work. The only way we tackle this stuff is by having competent, motivated staff.

The other really important thing this does is make you answer the questions. I bet a lot of organizations can't answer 2 and 3. #1 is usually pretty easy (just ask ldap), #2 is much harder, and #3 may be impossible for some. These look like easy things to measure, and just like quantum physics, by measuring them we will change them, probably for the better.

If you have 2000 employees, 200 systems, 4 million lines of code, and 2 security people, that's clearly a disaster waiting to happen. If you have 20, there may be hope. I have no idea what the proper ratios should be, if you're willing to share ratios with me I'd love to start collecting data. As I said, I don't have scientific proof behind this, it's just something I suspect is true.
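
As a trivial illustration of the kind of ratios I mean, here is a throwaway shell sketch using the hypothetical numbers above; the numbers are the point, not the script:

# Hypothetical scorecard snapshot; plug in your own numbers
STAFF=2000 SYSTEMS=200 LOC=4000000 SECURITY=2
echo "staff per security person:         $(( STAFF / SECURITY ))"
echo "systems per security person:       $(( SYSTEMS / SECURITY ))"
echo "lines of code per security person: $(( LOC / SECURITY ))"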

I should probably add one more thing. What we measure not only needs to be true, it needs to be simple.

Send me your scorecard via Twitter

Tangerine 0.23

Posted by Petr Šabata on October 23, 2016 09:51 PM

Just a quick update. I’ve finally found some time to answer an old RFE and extend the Tangerine testrequires hook with some basic support for Test::Needs.

Releasing Tangerine 0.23. This should land in Fedora and EL within the next two weeks, as usual.

Another Fedora cycle, another painless Fedora upgrade

Posted by Kevin Fenzi on October 23, 2016 07:06 PM

As we near the release of Fedora 25, I have, as always, upgraded my main servers to the new release before it comes out, so I can share any problems or issues I hit and possibly get them fixed before the official release.

As with the last few cycles, there were almost no problems. The one annoying item I hit was a configuration change in squid. In the past squid allowed you to have an ‘http_port NNNN intercept’ line and no regular http_port defined; however, the Fedora 25 version ( squid-4.0.11-1.fc25 ) fails to start with a cryptic “mimeLoadIcon: cannot parse internal URL: http://myhost:0/squid-internal-static/icons/…” error. It took me a while to find out that I now also needed to add a plain ‘http_port NNNN’ line.
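
For anyone hitting the same thing, here is a minimal sketch of what the relevant squid.conf lines end up looking like; the port numbers are placeholders for whatever you already use:

http_port 3128              # plain forward-proxy port, now required
http_port 3129 intercept    # the intercept-only setup that used to be enough on its own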

Otherwise everything went just fine. Many thanks to our excellent QA and upgrade developers.


Posted by Richard W.M. Jones on October 23, 2016 11:27 AM

I was reading about JWZ’s awesome portrait serial terminal and wondering what a serial terminal would look like today if we could implement it using modern technology.

You could get a flat screen display and mount it in portrait mode. Most have VESA attachments, so it’s a matter of finding a nice portrait VESA stand.

To implement the terminal you could fix a Raspberry Pi or similar to the back of the screen. Could it be powered by the same PSU as the screen? Perhaps if the screen had a USB port.

For the keyboard you’d use a nice USB keyboard.

Of course there’s no reason these days to use an actual serial line, nor to limit ourselves to just a text display. Use wifi to link to the host computer. Use software to emulate an old orange DEC terminal, and X11 to display remote graphics.

Building and Booting Upstream Linux and U-Boot for Orange Pi One ARM Board (with Ethernet)

Posted by Christopher Smart on October 23, 2016 04:16 AM

My home automation setup will make use of Arduinos and also embedded Linux devices. I’m currently looking into a few boards to see if any meet my criteria.

The most important factor for me is that the device must be supported in upstream Linux (preferably stable, but mainline will do) and U-Boot. I do not wish to use any old, crappy, vulnerable vendor trees!

The Orange Pi One is a small, cheap ARM board based on the AllWinner H3 (sun8iw7p1) SOC with a quad-core Cortex-A7 ARM CPU and 512MB RAM. It has no wifi, but does have an onboard 10/100 Ethernet provided by the SOC (Linux patches incoming). It has no NAND flash (not supported upstream yet anyway), but does support SD. There is lots of information available at http://linux-sunxi.org.

Orange Pi One

Note that while Fedora 25 does not yet support this board specifically, it does support both the Orange Pi PC (which is effectively a more full-featured version of this device) and the Orange Pi Lite (which is the same but swaps Ethernet for WiFi). Using either of those configurations should at least boot on the Orange Pi One.

Connecting UART

The UART on the Pi One uses the GND, TX and RX connections which are next to the Ethernet jack. Plug the corresponding cables from a 3.3V UART cable onto these pins and then into a USB port on your machine.

Orange Pi One UART Pin Connections

UART Pin Connections (RX yellow, TX orange, GND black)

Your device will probably be /dev/ttyUSB0, but you can check this with dmesg just after plugging it in.
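
For example, something along these lines will usually show the device name the kernel just assigned (the exact output varies by adapter):

dmesg | grep -i ttyUSB | tail -n 3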

Now we can simply use screen to connect to the UART, but you’ll have to be in the dialout group.

sudo gpasswd -a ${USER} dialout
newgrp dialout
screen /dev/ttyUSB0 115200

Note that you won’t see anything just yet without an SD card that has a working bootloader. We’ll get to that shortly!

Partition the SD card

First things first, get yourself an SD card.

While U-Boot itself is embedded in the card and doesn't need a partition, one will be required later to hold the boot files.

U-Boot needs the card to have an msdos partition table with a small boot partition (ext file systems are now supported) that starts at 1MB. You can use the rest of the card for the root file system (but we'll boot an initramfs, so it's not needed).

Assuming your card is at /dev/sdx (replace as necessary, check dmesg after plugging it in if you’re not sure).

sudo umount /dev/sdx* # makes sure it's not mounted
sudo parted -s /dev/sdx \
mklabel msdos mkpart \
primary ext3 1M 10M \
mkpart primary ext4 10M 100%

Now we can format the partitions (upstream U-Boot supports ext3 on the boot partition).
sudo mkfs.ext3 /dev/sdx1
sudo mkfs.ext4 /dev/sdx2

Leave your SD card plugged in, we will need to write the bootloader to it soon!

Upstream U-Boot Bootloader

Install the arm build toolchain dependencies.

sudo dnf install gcc-arm-linux-gnu binutils-arm-linux-gnu

We need to clone upstream U-Boot Git tree. Note that I’m checking out the release directly (-b v2016.09.01) but you could leave this off to get master, or change it to a different tag if you want.
cd ${HOME}
git clone --depth 1 -b v2016.09.01 git://git.denx.de/u-boot.git
cd u-boot

There is a defconfig already for this board, so simply make this and build the bootloader binary.
CROSS_COMPILE=arm-linux-gnu- make orangepi_one_defconfig
CROSS_COMPILE=arm-linux-gnu- make -j$(nproc)

Write the bootloader to the SD card (replace /dev/sdx, like before).
sudo dd if=u-boot-sunxi-with-spl.bin of=/dev/sdx bs=1024 seek=8

Wait until your device has stopped writing (if you have an LED you can see this) or run the sync command before ejecting.

Testing our bootloader

Now we can remove the SD card and plug it into the powered off Orange Pi One to see if our bootloader build was successful.

Switch back to your terminal that’s running screen and then power up the Orange Pi One. Note that the device will try to netboot by default, so you’ll need to hit the Enter key when you see the autoboot countdown prompt appear.

(Or you can just repeatedly hit the Enter key in the screen console while you turn the device on.)

Note that if you don’t see anything, swap the RX and TX pins on the UART and try again.

With any luck you will then get to a U-Boot prompt where we can check the build by running the version command. It should have the U-Boot version we checked out from Git and today’s build date!

U-Boot version

Hurrah! If that didn’t work for you, repeat the build and writing steps above. You must have a working bootloader before you can get a kernel to work.

If that worked, power off your device and re-insert the SD card into your computer and mount it at /mnt.

sudo umount /dev/sdx* # unmount everywhere first
sudo mount /dev/sdx1 /mnt

Creating an initramfs

Of course, a kernel won’t be much good without some userspace. Let’s use Fedora’s static busybox package to build a simple initramfs that we can boot on the Orange Pi One.

I have a script that makes this easy, you can grab it from GitHub.

Ensure your SD card is plugged into your computer and mounted at /mnt, then we can copy the file on!

cd ${HOME}
git clone https://github.com/csmart/custom-initramfs.git
cd custom-initramfs
./create_initramfs.sh --arch arm --dir "${PWD}" --tty ttyS0

This will create an initramfs for us in your custom-initramfs directory, called initramfs-arm.cpio.gz. We're not done yet, though; we need to convert this to the format supported by U-Boot (we'll write it directly to the SD card).

gunzip initramfs-arm.cpio.gz
sudo mkimage -A arm -T ramdisk -C none -n uInitrd \
-d initramfs-arm.cpio /mnt/uInitrd

Now we have a simple initramfs ready to go.

Upstream Linux Kernel

The Ethernet driver has been submitted to the arm-linux mailing list (it’s up to its 4th iteration) and will hopefully land in 4.10 (it’s too late for 4.9 with RC1 already out).

Clone the mainline Linux tree (this will take a while). Note that I’m getting the latest tagged release by default (-b v4.9-rc1) but you could leave this off or change it to some other tag if you want.

cd ${HOME}
git clone --depth 1 -b v4.9-rc1 \
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux
Or, if you want to try linux-stable, clone this repo instead.
git clone --depth 1 -b v4.8.4 \
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git linux

Now go into the linux directory.
cd linux

Patching in EMAC support for the SOC

If you don’t need the onboard Ethernet, you can skip this step.

We can get the patches from the Linux kernel’s Patchwork instance, just make sure you’re in the directory for your Linux Git repository.

Note that these will probably only apply cleanly on top of mainline v4.9 Linux tree, not stable v4.8.

# [v4,01/10] ethernet: add sun8i-emac driver
wget https://patchwork.kernel.org/patch/9365783/raw/ \
-O sun8i-emac-patch-1.patch
# [v4,04/10] ARM: dts: sun8i-h3: Add dt node for the syscon
wget https://patchwork.kernel.org/patch/9365773/raw/ \
-O sun8i-emac-patch-4.patch
# [v4,05/10] ARM: dts: sun8i-h3: add sun8i-emac ethernet driver
wget https://patchwork.kernel.org/patch/9365757/raw/ \
-O sun8i-emac-patch-5.patch
# [v4,07/10] ARM: dts: sun8i: Enable sun8i-emac on the Orange PI One
wget https://patchwork.kernel.org/patch/9365767/raw/ \
-O sun8i-emac-patch-7.patch
# [v4,09/10] ARM: sunxi: Enable sun8i-emac driver on sunxi_defconfig
wget https://patchwork.kernel.org/patch/9365779/raw/ \
-O sun8i-emac-patch-9.patch

We will apply these patches (you could also use git apply, or grab the mbox if you want and use git am).

for patch in 1 4 5 7 9 ; do
    patch -p1 < sun8i-emac-patch-${patch}.patch
done

Hopefully that will apply cleanly.

Building the kernel

Now we are ready to build our kernel!

Load the default kernel config for the sunxi boards.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make sunxi_defconfig

If you want, you could modify the kernel config here, for example remove support for other AllWinner SOCs.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make menuconfig

Build the kernel image and device tree blob.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make -j$(nproc) zImage dtbs

Mount the boot partition and copy on the kernel and device tree file.
sudo cp arch/arm/boot/zImage /mnt/
sudo cp arch/arm/boot/dts/sun8i-h3-orangepi-one.dtb /mnt/

Bootloader config

Next we need to make a bootloader file, boot.cmd, which tells U-Boot what to load and boot (the kernel, device tree and initramfs).

The bootargs line says to output the console to serial and to boot from the ramdisk. Variables are used for the memory locations of the kernel, dtb and initramfs.

Note, if you want to boot from the second partition instead of an initramfs, change root argument to root=/dev/mmcblk0p2 (or other partition as required).

cat > boot.cmd << EOF
ext2load mmc 0 \${kernel_addr_r} zImage
ext2load mmc 0 \${fdt_addr_r} sun8i-h3-orangepi-one.dtb
ext2load mmc 0 \${ramdisk_addr_r} uInitrd
setenv bootargs console=ttyS0,115200 earlyprintk root=/dev/root \
rootwait panic=10
bootz \${kernel_addr_r} \${ramdisk_addr_r} \${fdt_addr_r}
EOF

Compile the bootloader file and output it directly to the SD card at /mnt.
sudo mkimage -C none -A arm -T script -d boot.cmd /mnt/boot.scr

Now, unmount your SD card.

sudo umount /dev/sdx*

Testing it all

Insert it into the Orange Pi One and turn it on! Hopefully you’ll see it booting the kernel on your screen terminal window.

You should be greeted by a login prompt. Log in with root (no password).

Login prompt

That’s it! You’ve built your own Linux system for the Orange Pi One!


Log in as root and give the Ethernet device (eth0) an IP address on your network.

Now test it with a tool, like ping, and see if your network is working.
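
A minimal sketch of doing that by hand, assuming the busybox build includes the ip applet and that 192.168.1.0/24 with a router at 192.168.1.1 matches your network (adjust the addresses to suit):

ip link set eth0 up
ip addr add 192.168.1.200/24 dev eth0
ip route add default via 192.168.1.1
ping -c 3 192.168.1.1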

Here’s an example:

Networking on Orange Pi One

Memory usage

There is clearly lots more you can do with this device…

Memory usage


FUDCon Puno Days 2 and 3

Posted by Tonet Jallo on October 22, 2016 11:36 PM

Hello again. Today, almost a week after FUDCon ended, I'm sitting down to write about what happened during this great experience, starting with…

Day 2

This day started off better than the first, since the barcamp schedule was already defined and the attendees had absorbed the idea of being at an unconference. The logic for this FUDCon was simple: make it an event out of the ordinary, with no stiff formality up front and no wall between the audience and the speakers. Here are some photos of the sessions scheduled for the second day:


It is worth highlighting that roughly ten potential contributors were identified, and from this day on they were being advised by their respective mentors. Three groups were formed: one of packagers guided by Eduardo Echeverria, one of designers guided by Junior Wolnei, and one of translators/writers guided by Eduardo Mayorga.


As the day wrapped up, the whole Fedora Latam group got together to talk about the event, and the conversation was very enjoyable.

Day 3

This was RELATIVELY the worst day of FUDCon, and I stress RELATIVELY, because everything started at the same time as usual, with the single problem that there were no attendees: literally between five and ten people until 11:00am. It's actually normal for something like this to happen at a multi-day event whose last day is a Saturday. This wouldn't have been such a big problem, since the time could have been used to run a Fedora Activity Day, which might even have been more productive than just having talks, but that's not what happened. After 11:00am more people arrived. There weren't many, but I believe they were the ones who really wanted to take something away from this event, people genuinely interested in free software or genuinely interested in Fedora. In the end the event finished with about fifty attendees, and that was great. We, the organizers, never worried about attendance; WE NEVER CARED ABOUT HAVING A BIG CROWD. We don't believe the success of an event rests on the number of attendees; as the saying goes, what you want is QUALITY, not QUANTITY, so I think that explains it all.


Coming back to that word RELATIVELY: this FUDCon was not focused on gathering a lot of people. Before FUDCon, many free software events had already been held, from my first participation in 2012 with the FLISOL events, which only covered generic free software topics, through to the Fedora Weekends, which focused on sparking interest in the community and finding contributors specifically for Fedora. And well, we did it. There aren't many, but there they are; we need to follow up with them and give them plenty of support so they can develop their skills to contribute to the community, and so they can also help build a much more solid local community.

This is a very broad summary of what FUDCon was for me. Honestly, I never thought this event could come to Puno; as far as I've been told, it is the first FUDCon held in a city with NO AIRPORT. I don't know whether it was a good or a bad FUDCon, but it really changed my life as an organizer, and I'm sure it did the same for everyone who had a hand in this event, which could not have turned out the way it did without the grain of sand each person contributed.

To wrap up, soon I will write an article with conclusions from FUDCon by way of feedback (yes, I promised you @bexelbie): the obstacles, the problems, the nice parts, what was gained, and so on.

So, see you soon, and I hope this event was of benefit to the community and to all the attendees who stayed with us until the end.