Fedora People

Statistics proposal and self-hosting ListenBrainz

Posted by Justin W. Flory on December 18, 2017 08:30 AM
On the data refrain: Contributing to ListenBrainz

This post is part of a series of posts where I contribute to the ListenBrainz project for my independent study at the Rochester Institute of Technology in the fall 2017 semester. For more posts, find them in this tag.

This week is the last week of the fall 2017 semester at RIT. This semester, I spent time with the MetaBrainz community working on ListenBrainz for an independent study. This post explains what I was working on in the last month and reflects back on my original objectives for the independent study.

Running my own ListenBrainz

The RIT Linux Users Group hosts various virtual machines for our projects. I requested one to set up and host a “production” ListenBrainz site. The purpose of doing this was to…

  1. Test my changes in a “production” environment
  2. Offer a service for the RIT Linux Users Group to poke around with

I spent most of this time working with our system administrator to set up the machine and adjust hardware specs for ListenBrainz. Once we fixed storage space and memory issues, getting ListenBrainz up and running was easy; my experience writing the development guide paid off. It worked on the first run!

Now, listen.ritlug.com is live.

Figuring out HTTPS

My next challenge for the site is to set up HTTPS. I tried using a reverse proxy in nginx to set up HTTPS, but I received 502 Bad Gateway errors. I realized I spent too much time figuring this out on my own and decided to ask for help in the MetaBrainz community forums.
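For context, the kind of nginx reverse proxy I was attempting looks roughly like the sketch below. The upstream port and certificate paths here are assumptions for illustration, not my actual configuration; a 502 Bad Gateway generally means nginx accepted the client's connection but could not get a valid response from the backend at the proxy_pass address.

```nginx
# Hypothetical reverse proxy for a ListenBrainz container.
# Upstream port and certificate paths are assumptions, not my real setup.
server {
    listen 443 ssl;
    server_name listen.ritlug.com;

    ssl_certificate     /etc/letsencrypt/live/listen.ritlug.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/listen.ritlug.com/privkey.pem;

    location / {
        # Forward requests to the ListenBrainz web container
        proxy_pass http://127.0.0.1:8100;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```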

Proposing new statistics

Halfway through the independent study, I realized I would fall short of my original objective of implementing basic statistics in ListenBrainz. As a compromise, I wrote a proposal for new statistics to start the project off. My proposal looked at proprietary platforms that compete with ListenBrainz to see what statistics they offer, and I also came up with some of my own.

I proposed this to the MetaBrainz community on the community forums. I’m awaiting feedback on my ideas. Once I get feedback, I plan to file new tickets for each statistic to track their implementation over time.

I don’t expect statistics to be at the forefront of ListenBrainz for some time; a lot of work is going toward other areas of the project. But later in 2018, I expect more focus on the user-facing side of the project.

My statistic and Google BigQuery

My biggest blocker over the last month was Google BigQuery. I wrote a statistic to calculate play counts over a time period, but I was asked to test it, and to do that I needed real data to work with.
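The logic of the statistic itself is simple. Here is a hedged Python sketch of a play-count calculation over a time window; the field names and sample data are illustrative assumptions on my part, since in ListenBrainz the real computation runs as a SQL query over listen data in BigQuery:

```python
# Sketch of a play-count statistic: count listens per artist within a
# time window. Field names ("artist", "listened_at") are illustrative.
from collections import Counter

def play_counts(listens, start, end):
    """Count listens per artist with start <= listened_at < end."""
    counts = Counter(
        l["artist"] for l in listens
        if start <= l["listened_at"] < end
    )
    return counts.most_common()

listens = [
    {"artist": "Daft Punk", "listened_at": 100},
    {"artist": "Daft Punk", "listened_at": 150},
    {"artist": "Radiohead", "listened_at": 200},
    {"artist": "Daft Punk", "listened_at": 999},  # outside the window below
]
print(play_counts(listens, 0, 300))
# [('Daft Punk', 2), ('Radiohead', 1)]
```

Testing this against real listening history is exactly where the BigQuery credentials come in.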

Originally, I tried using the Simple Last.fm Scrobbler to submit listens to the local IP address for my development environment, but I wasn’t able to get the app to reach my ListenBrainz server. To get the data, I had to set up Google BigQuery credentials so I could make queries against data on the production site, listenbrainz.org.

I tried working through the Google BigQuery documentation. There’s a lot of documentation for using BigQuery as a developer, but it was confusing where to find the information I needed to set it up in my development environment. I tried creating a new project in the Google Cloud Platform, but I was confused because it prompted me to upload my own data instead of accessing data already in BigQuery.

Too late, I realized I had spent too much time working on my own instead of asking for help. I submitted a pull request with the statistic I wrote and asked for help in the MetaBrainz community. I also offered to write documentation for setting this up once I learn how to do it.

Reflecting back

Looking back on my original objectives for the independent study, I found reasons for both satisfaction and dissatisfaction.

Not enough programming

I wanted this independent study to enhance my programming knowledge. I especially wanted to focus on Python to become more familiar with the language. However, through my own fault, I didn’t actually do much programming during the independent study.

My biggest challenge was that I bit off more than I could chew. I wanted to write code, and I set an ambitious goal before I knew the project’s code base. Even now, I am still not completely comfortable with the code. It’s a big project with a lot going on; I was able to understand the parts I worked on, but there is still a lot left to learn.

I realized that next time, I need to spend more time evaluating a project’s code base before writing out my milestones. I wish I had set smaller, more realistic milestones for myself. My milestone of implementing basic reports was lofty given my existing programming knowledge.


One of my other objectives was to write documentation for the project. I felt I succeeded in this milestone, and actually found it enjoyable and interesting to do! I helped separate out documentation from the README into the dedicated ReadTheDocs site. I wrote the development environment guide and helped fix some build issues with the docs site. I also plan to write more for some of the other pain points I found, like Google BigQuery.

My last milestone was to create a use case for a data visualization course at RIT. While I didn’t implement my basic reports, I did create the proposal and make an effort to write new statistics. There’s a lot of potential now to work with the data in Google BigQuery and do front-end work with tools like D3.js and Plotly.js. I believe there’s significant potential to use ListenBrainz as a hands-on project for students to explore data visualization with real data. I hope to support my independent study professor, Prof. Roberts, with questions and logistics of using it as a tool for learning in the future.

Unexpected success

I also had an unplanned success: I immersed myself in the ListenBrainz community. Over the last few months, I realized that many of my strengths are in community management and tooling. During my time in the community, I did the following:

To the future!

This ends my independent study with ListenBrainz, but it doesn’t end my time contributing! I chose ListenBrainz because it’s a project I’m passionate about. An independent study allowed me to justify more time on it than a side project in my free time. I’m happy to have that opportunity, but I don’t want to end here!

I want to follow through on the statistics because I’m passionate about understanding music listening trends. I think there’s a lot of power for psychological research through music data. To this point, I filed a ticket to request tagging listens with “emotion” words that are synced back to MusicBrainz entities.

I won’t have as much time to work on the project without the course credit, but I hope to stay involved for the future. I love the project and I love the community. I’m thankful for the opportunity to work on this project as an independent study, and learn some things along the way.


The post Statistics proposal and self-hosting ListenBrainz appeared first on Justin W. Flory's Blog.

Getting started with Jitsi

Posted by Fedora Magazine on December 18, 2017 08:00 AM

The Jitsi community offers a fully free and open source video conferencing solution built using HTML5. There are other video conferencing solutions available, so why use this one? For one, because Jitsi relies on HTML5 technologies, it works out of the box with any modern web browser, without the need for Flash or any other plugins or extensions. This post helps you get started with it.

Create your meeting

To create a meeting, simply visit a named URL:


For example, the last Fedora Council meeting happened at:


Set your name and avatar

By default, meet.jit.si is anonymous. Someone who joins shows up in the members list as “Fellow Jitser.” However, if you lead a meeting with multiple people it’s nice to know who’s present.

To set your username and avatar, click on the icon in the top left. Both are optional, of course:

meet.jit.si interface

Mute unless speaking

Being muted during video conferences is a good practice to follow. It helps keep the meeting quiet and manageable. It also avoids letting everyone know from your keyboard clicking that you’re checking email instead of following the meeting!

By default, you won’t be muted when you join a meeting. Click the “mute” icon in the top middle of the window to mute your own line. You can then use the space bar to toggle this status when you want to speak, as a “push to talk” control.

You also have a similar video on/off control, so you can show your webcam feed only when desired. If you turn off the video, your avatar (if you set one) represents you instead.

Link Jitsi with YouTube

If the audience of your meeting is too big to interact on Jitsi, you can link meet.jit.si with YouTube and live stream the Jitsi meeting there. This also divides attendees into participants (or presenters) and audience, which may be helpful in many cases. Afterward, you can do some post-processing of the video if desired and share it via YouTube.

Fedora 27 : Go and atom editor.

Posted by mythcat on December 17, 2017 08:53 PM
The Go programming language, often referred to as golang, was created at Google in 2009 by Robert Griesemer, Rob Pike, and Ken Thompson.
Using Go with Fedora 27 is very simple; just install it with the dnf tool:
sudo dnf install golang
To use it with the Atom editor, you first need to install Atom; see this tutorial.
The next step is to set the atom editor with the packages for go programming language, like:
  • go-plus
  • go-get
  • go-imports
  • platformio-ide-terminal
The go command comes with this help:
Go is a tool for managing Go source code.

Usage:

	go command [arguments]

The commands are:

build compile packages and dependencies
clean remove object files
doc show documentation for package or symbol
env print Go environment information
bug start a bug report
fix run go tool fix on packages
fmt run gofmt on package sources
generate generate Go files by processing source
get download and install packages and dependencies
install compile and install packages and dependencies
list list packages
run compile and run Go program
test test packages
tool run specified go tool
version print Go version
vet run go tool vet on packages
Use "go help [command]" for more information about a command.

Additional help topics:

c calling between Go and C
buildmode description of build modes
filetype file types
gopath GOPATH environment variable
environment environment variables
importpath import path syntax
packages description of package lists
testflag description of testing flags
testfunc description of testing functions
The next step is to install the packages you need with go get:
go get -u golang.org/x/tools/
go get -u github.com/golang/lint/golint
Let's make a simple example:
package main

import "fmt"

func main() {
    fmt.Println("Hello world !")
}
Let's test it with the go command. To run the program, put the code in a file named hello-world.go and use go run:
$ go run hello-world.go
Hello world !
If we want to build the program into a binary, we can do so using go build:
$ go build hello-world.go
$ ls
hello-world hello-world.go
Finally, we can execute the built binary directly:
$ ./hello-world
Hello world !
While searching the internet I found a website with many examples, which I recommend. You can find it here.

ATO2017 - A (late) summary

Posted by Susan Lauber on December 17, 2017 07:40 PM
Just a few thoughts on All Things Open 2017:
(and a record of sessions attended for CISSP continuing education credits).

This event - which happened way back in October - just keeps growing. It is already almost too big!

Sunday: I made it to the early checkin and social in the evening. The location for the social is a cute place. It hosts local art and for the October dates, some spooky themes. Many thanks to Red Hat - specifically the Red Hat Open Source Stories team - for the sponsorship. I am not sure how many people realized that their videos (which are amazing!) were running on the TVs around the space.

Monday: After scoring a pair of socks from OpenSource.com, I focused on the Security track with the following sessions:





I also attended one security related talk from the DevOps track:


Chatter on Twitter was coming mostly from the community track, which was nice since those talks always have some good material, but I would have liked to hear a bit more about the other technical talks I was skipping. That is the problem with SO MANY tracks: it can be hard to choose where to invest your time.

Tuesday: I attended a couple of talks in the Education track and explored the hallway a bit. Unfortunately, I had to leave one of the talks due to asthma triggered by another attendee's (chemical) cologne. I never really felt all that great the rest of the day and headed out early to go home.



I was also asked about the Fedora-branded long-sleeve white button-up shirt I was wearing. Info is here:

Some slides from the conference are posted at:

A comment on the focus and participation:
I overheard a conversation at lunch that despite the name of "ALL" things open, this conference is very developer focused. I think that is, and always has been, the intent of this conference. The person who was a bit disappointed is more of an admin and ops person. I do remember having a few more interesting admin and community options in previous years, but that may have more to do with what I was looking for those years. Also, this year competed with a big conference in Europe, which altered the attendance some. This is not necessarily a bad thing. Seeing the same people at all the conferences can result in really good talks by experienced presenters, but it can also mean that there is not enough growth and encouragement for new talent in the industry.

As long as this conference is local, low cost, and fits my schedule, I will continue to attend and offer to speak, even if it is more developer focused than my usual activities.

Save the dates:  Oct 21-23, 2018.


Share files securely using OnionShare

Posted by Kushal Das on December 17, 2017 05:15 AM

Sharing files securely is always an open discussion topic. Security/privacy and usability somehow tend to stand on opposite sides, but OnionShare manages to bridge the two. It is a tool written by Micah Lee that helps you share files of any size securely and anonymously using Tor.

In the rest of the post I will talk about how you can use this tool in your daily life.

How to install OnionShare?

OnionShare is a Python application and already packaged for most of the Linux distributions. If you are using Windows or Mac OS X, then visit the homepage of the application, and you can find the download links there.

On Fedora, you can just install it using dnf command.

sudo dnf install onionshare -y

For Ubuntu, use the ppa repository from Micah.

sudo add-apt-repository ppa:micahflee/ppa
sudo apt-get update
sudo apt-get install onionshare

How to use the tool?

When you start the tool, it will first try to connect to the Tor network. After a successful connection, a window opens where you can select a number of files and then click the Start Sharing button. The tool takes some time to create a random onion URL, which you can then pass to the person who will download the files using the Tor Browser.

You can mark the share to stop after the first download (using the settings menu). Because the tool uses Tor, it can punch through standard NAT, meaning you can share files directly from your laptop or home desktop, and others can still access them using the Tor Browser.

Because of the nature of Tor, the whole connection is end-to-end encrypted. This also makes the sharer and downloader anonymous, but you have to make sure to share the download URL in a secure way (for example, using Signal). OnionShare also has a rate limit so that an attacker cannot make many attempts to guess the full download URL.

#PeruRumboGSoC2018 – Session 5

Posted by Julita Inca Chiroque on December 17, 2017 05:08 AM

Today we have celebrated another session for the #PeruRumboGSoC2018 program at CCPP UNI. It was one of the longest sessions we have experienced.

We were able to cope with different packages and versions to work with WebKit and GTK on Fedora 26 (one of the students has a 32-bit machine) and on Fedora 27. One of the accomplishments today was coding a Language Selector using GtkListBox and GtkLinkButton with GTK and Python, explained here in detail.

The newcomers bug list on GitLab was also checked today, especially for the applications gnome-todo and gnome-music. Fedora Docs Dev, system-config-language, and the implementation of Elastic Search were also evaluated and discussed as possible GSoC 2018 proposals for Fedora. Thanks @zodiacfirework! This is the final chart of the effort of the participants. In this picture we have Cristian (18) as @pystudent1913 and Fiorella (21) as @aweba. They are the top two! 🙂 We shared a lunch and some food in the afternoon. Thanks again to our sponsors: GNOME, Fedora & the Linux Foundation for supporting this challenge! gogogo PeruRumboGSoC2018!

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: #PeruRumboGSoC2018, fedora, GNOME, GSoC, GSoC Fedora proposal, GSoC GNOME gitlab, GTK 3, gtk.py, Julita Inca, Julita Inca Chiroque, LinuXatUNI, PeruGSoC, Python

Internationalization FAD, Pune 2017

Posted by Rafał Lużyński on December 17, 2017 05:06 AM

For the second time in a short period I participated in an important Fedora event. On November 20–22, 2017, an Internationalization FAD was organized by a group of Fedora contributors from Red Hat Pune. FAD stands for Fedora Activity Day; it is a mini-conference. It differs from large conferences like Flock in that it is attended by a small number of people and is focused on one subject.

Day #0

November 19, 2017

Actually I should write Day #-1 (November 18) and Day #0, because my travel, as well as that of some other attendees, lasted more than 24 hours. Due to the time zone difference and all the mess it’s difficult to define when one day ended and the next began. In general, the travel went smoothly and without any problem except one: I spent 1.5 hours in a huge queue at the immigration desk at the Mumbai Airport. Somewhere far behind me was Mike Fabian, and even further behind him were Takao Fujiwara, Akira Tagoh, and Peng Wu, who arrived a little later than me.

The long queue in Mumbai Airport

I really don’t know why it took so long. Probably because several large jumbo jets with many foreign tourists arrived in a short time. The immigration officers worked rather fast and without unnecessary delays. However, we all met and left Mumbai only after 4 AM local time, and we reached our hotel in Pune before 8 AM. Big shouts to Sundeep Anand and Parag Nemade who, despite the night and the weekend, were contacting us online all the time, giving us advice and making sure that we were OK.

Our first day in India had to be spent resting after the journey. The hotel turned out to be very comfortable. Parag perfectly organized our time: first he let us rest as long as we wanted, and then in the afternoon he took us for a Red Hat office visit. That was my first Red Hat office visit ever, so everything was impressive to me: a brand new office building, some places still being finished, everything in perfect order.

Day #1

November 20, 2017

The actual first day of the FAD was for presentations. It started with an official opening and self-introductions.

Opening and self-introductions: Jens Petersen, Pooja Yadav, and Pravin Satpute.

Next, everyone had an opportunity to present their current work. It turns out that each of us works on tasks that are personally familiar. Takao Fujiwara, Akira Tagoh and Peng Wu work on rendering (the Pango library) and input (IBus) of text in East Asian languages. Unfortunately, I know almost nothing about these languages, so I don’t understand much of their work, except obvious things: that it’s more complex than in European languages and needed by their speakers. On the other hand, I spoke about my current work on formatting dates in inflected languages. Each time I talk about it to foreign people I have a feeling that the audience doesn’t know what I’m talking about. I guess that time it was the same.

My talk about formatting dates. Photo credits: Jens Petersen.

Inflection is an original feature of the Proto-Indo-European language which disappeared totally or almost totally in most of the contemporary Indo-European languages. However, it still exists in Slavic and Baltic languages, as well as in Greek, Sanskrit, and several more. This diversity of the discussed topics only means that the term “internationalization” is very broad: it includes features local to some groups of languages. There is a place for both inflected languages and logographic scripts, and more phenomena than you can think of.

Topics more familiar to me were discussed as well, by Mike Fabian, with whom I have been working directly since July this year on the maintenance of locale data in the glibc project, and by Jens Petersen, who works on improving localization support in Fedora (separating translation packages from the main software packages, installing them depending on the languages chosen by the administrator, etc.).

It’s nice that Mike Fabian, Takao Fujiwara and others work on a better support (input and displays) of emojis in Fedora.

Takao Fujiwara and his emojis
Mike works on ibus-typing-booster

The Transtats project is getting more and more interesting. While at this, I learned that Sundeep Anand is not working on it alone: the FAD was attended by several people from the Red Hat Quality Assurance team who support him. Those people also actively test other projects, like IBus and East Asian fonts.

Sundeep Anand works on Transtats…
… but he is not working alone.

Day #2

November 21, 2017
Working on our projects. Photo credits: Jens Petersen.

The second and the third day were meant for common work on our projects. I spent most of the time working with Mike Fabian. Despite my initial plans, we were neither working on my project of formatting dates in inflected languages nor on the automatic locale data import from CLDR to glibc. Mike says that my work is basically complete and we can’t add anything more; we can only wait for more positive reviews. Instead, we worked on fixing the collation orders in Latvian and Polish, and the nearest plans include more languages, like Czech and Upper Sorbian.

It’s really hard and dirty work. In most languages there are established rules for the collation order of the letters of their own alphabets, but what should we do with foreign letters? Language scientists are free to say “this is unlikely to happen” or “we don’t define how to handle this”, but we developers must be able to handle every Unicode string.

Moreover, some languages have really unusual collation rules. Usually the rules say that we should compare the letters starting from the beginning of the word towards the end. If there is a difference between letters, it determines the collation order. If the letters differ only in diacritical marks, then some languages treat them as different letters and some as the same letter. But in French there is, or rather there was, a rule saying that if two words differ only in diacritical marks then for the collation order we must compare the diacritics… counting from the end of the word! This rule is so weird that it has finally been dropped from most French variants, but it is still in use in Canadian French. How to deal with this? Somehow Mike has managed to fix it.
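To illustrate that "backwards" secondary rule, here is a hedged, self-contained Python sketch. This is emphatically not glibc's implementation (which works through locale collation tables); it just demonstrates the principle: compare base letters first, and break ties by accents read from the end of the word.

```python
import unicodedata

def french_sort_key(word):
    """Sort key sketch: primary = base letters, secondary = accents from the END."""
    nfd = unicodedata.normalize("NFD", word.lower())
    letters = []   # base letters, in order
    accents = []   # one accent weight per base letter (0 = no accent)
    for ch in nfd:
        if unicodedata.combining(ch) and accents:
            accents[-1] = ord(ch)  # attach the mark to the preceding letter
        else:
            letters.append(ch)
            accents.append(0)
    # The Canadian French rule: ties on base letters are broken by
    # accents compared starting from the LAST letter of the word.
    return ("".join(letters), tuple(reversed(accents)))

# The classic example: all four words share the base letters "cote".
words = ["côté", "coté", "côte", "cote"]
print(sorted(words, key=french_sort_key))
# ['cote', 'côte', 'coté', 'côté']
```

With forward (left-to-right) accent comparison, coté would sort before côte instead; the reversed tuple is what flips them.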

We were talking with Jens about Fedora bug 1401096. While installing Fedora Workstation you can select the user interface language, but the localization packages are not installed because they are missing from the installation disk; they must be downloaded from the net. This problem does not occur with the network installation, which by definition downloads the packages. I think that we need a way to mark in the package management system that some packages are required and should be installed as soon as the network becomes available. It’s crucial that I understood the problem, because in the past I contributed to gnome-software (and I still hope to contribute in the future), and I think this is a task for that project, or rather for PackageKit, which powers it.

Another unplanned task, which I shared with Mike and Pravin Satpute, was adding the Filipino language to Fedora. Actually, all we had to do was coordinate some tasks, because most of them had been finished already or must wait until at least one application translation is ready.

After this hard working day we spent the evening bowling and having BBQ at Amanora Mall. We also celebrated Takao Fujiwara’s birthday.

Day #3

November 22, 2017

The last day of the FAD was similar to the second one: we were working on our projects, and I continued the previous day’s work with Mike. Besides this, I filed some more suggestions for changes in glibc:

In the afternoon there was also the Fedora 27 Release Party. How was it? More people working in the same office came by, and a large cake with a beautiful printed image was put on the table.

I have a feeling that the Release Party was dominated by us, the FAD attendees. The organizers asked the oldest of us, that means Mike, Jens Petersen and myself to cut the cake. It was really yummy!

That was, unfortunately, my last (so far!) day in India. I warmly thank the organizers for all their help, mostly I thank Satyabrata Maitra but also Parag Nemade, Pravin Satpute and Sundeep Anand. I really regret that I couldn’t stay longer.

Day #4

November 23, 2017

Most of that day I spent traveling which went absolutely without any problem. See you online or in real life! नमस्ते!

Application process — redesign

Posted by Suzanne Hillman (Outreachy) on December 17, 2017 02:09 AM

I recently applied for a job somewhere, and found the initial application process confusing and dismaying.

The reason, I think, is that it was not clear a) if the entire process actually happened, and b) what all I was actually submitting. So, I decided to take a bit of time and add some redesign to make things a little less confusing. I’ve also blurred out the company name for politeness’ sake.

What did it look like at first?

When you look at a job description, you get something like this (with a bright orange ‘apply now’ button that is not visible in this screenshot). This seems fine.


After you click Apply Now, you get an odd sort of thing about your personal data collection. I’m guessing this is because it’s a security company, but it reads all sorts of weird. Whatever, that’s not a huge deal.


Next, you get your first page of the application. I like that they remind you what you’re applying for!


If you upload your resume, your name and email are auto-filled. That’s cool, thanks! When you select ‘Next’, you get this:


Wait. What? We just jumped to questions about my nationality and my affirmative action status? What about my work experience? My education? A cover letter? Did the resume upload skip the need for work and education info? Maybe, let’s keep going.

You might notice (I didn’t at the time) that this button says ‘Submit’, not ‘Next’. I didn’t grab a screenshot (and didn’t want to apply twice), but that’s the end of the application process. It thanks you, and it sends you email confirming your application.

What? I don’t even know for sure what it sent! I don’t know how well it parsed my resume. I have no clue at this point what just happened.

What would I fix?

Ok, so that was all sorts of confusing. Enough so that last night as I was falling asleep, I was distracted by wondering what would help. I considered a progress indicator, as that would at least make the extreme brevity of the application not a surprise. I also went back to check whether they’d labeled the final button ‘Submit’, which they actually had (but perhaps ‘Submit Application’ would have been a clearer signal!). Finally, right before I fell asleep, I realized that what I most missed was a summary of what I was about to submit.

So, my version of the first page, with a progress bar added (using their font as detected by What Font and the same color as the next button for the progress indication):

<figure><figcaption>Look! It’s the first step of three!</figcaption></figure>

My version of the second page (which was the last in the previous version) also has a progress bar, and changed the button to say ‘Next’. Not sure why I couldn’t make the carets a little more visible when they are between things. And perhaps I need some sort of ‘completed’ indicator for the first step, like a checkmark.

<figure><figcaption>Still a weird jump, but at least I had a chance to expect it.</figcaption></figure>

Finally, I made the very barest-of-bones summary page (the progress bar, what one was applying for, and a brief statement about the summary page). I didn’t make the whole page, which means that I didn’t get to include a ‘Submit Application’ button instead of just ‘Submit’, or suggest ways to make it easy for people to change things they don’t agree with. The latter seems important, especially if it really is automatically interpreting the resume; perhaps offer inline editing?

<figure><figcaption>Not entirely sure how to end progress bars of this type, but you get the point.</figcaption></figure>


I’m struggling with the visual design part of things, but at least I feel a little better about the weird application process, having “fixed” it (at least in theory).

I’m not sure what happens if you don’t submit a resume in that first page (or if you use linkedin or something instead). It seems like it might be a kindness for them to tell you what submitting your resume (or associating with social media) did for you, so that it’s less confusing when it never asks about jobs or education.

Also, Gravit Designer is a pretty nice tool for this purpose!

Fedora BTRFS+Snapper - The Fedora 27 Edition

Posted by Dusty Mabe on December 17, 2017 12:00 AM
History I’m back again with the Fedora 27 edition of my Fedora BTRFS+Snapper series. As you know, in the past I have configured my computers to be able to snapshot and rollback the entire system by leveraging BTRFS snapshots, a tool called snapper, and a patched version of Fedora’s grub2 package. I have some great news this time! You no longer need a patched version of Fedora’s grub package in order to pull this off.

Transitioning to Neomutt and friends for e-mail

Posted by Ankur Sinha "FranciscoD" on December 16, 2017 12:01 AM

I'm constantly seeking out applications that provide Vim like keyboard bindings---it ensures that I have one set of keys that does the same thing everywhere, and so, it saves me from having to:

  • remember different hot keys for different applications
  • leave the home row to use the mouse/touchpad (Yeh, the home row is a thing!)

So, I now use the excellent byobu where I run:

  • ncmpcpp for music: it provides Vim like key bindings.
  • Vifm for file management, although, a command line is usually sufficient.
  • Vit as a Taskwarrior terminal based front-end, which, yep, provides Vim like key-bindings.
  • Weechat for IRC which also has Vim bindings.

Vim has a built-in file browser, and one can use other plug-ins such as NerdTree for more advanced tasks. I even have a Taskwarrior plug-in for Vim that lets me quickly look up my tasks while writing code and the sort.

For other uses where the terminal is insufficient, I've found:

  • Vimiv for viewing images
  • Qutebrowser as a full featured browser. One can also use add-ons to Firefox/Chrome, but I've quite fallen for Qutebrowser.
  • Zathura for viewing various document types.

I rarely use LibreOffice---I mostly stick to LaTeX, and Vim deals with it rather well.

In all of the above mentioned applications, hjkl moves about; other hot keys such as G and gg work too, and they even have a command mode that can be accessed using : as in Vim. So, I don't have to think of the shortcuts now---it's all muscle memory!

Evolution, being a modern GUI productivity tool, does not have a way to navigate around using only the keyboard, and this got me to look for an e-mail client that provided Vim like bindings. The answer I found was the rather well known mutt terminal client. I'd been thinking of giving it a go for a while now---more than a year. However, as I document later in this post, setting up mutt isn't as trivial as setting up Evolution, where one simply uses Gnome Online Accounts and can get up and running in a few minutes.

At no point will I suggest that anyone migrate to such a terminal oriented setup. This is tailored to my personal, rather Vim-y needs. One should use whatever tools fit their personal tastes. We needn't spend time on "But, I prefer this, and it's better!" themed conversations.

Please note that everything that is documented here is for an up to date Fedora 27 system. Most steps should be general enough to work on other distributions. One will have to go find the right packages, though. I followed this guide as the main source of information, and then looked around when I needed some more info. I've collected a list of links at the bottom of this post.

E-mail: the details

When a majority of us use e-mail, we simply interact with a client. These clients (Evolution, Thunderbird, Outlook, or the web applications that we access) keep the nitty-gritty details away from end users. The Wikipedia article on e-mail explains the process quite well:

  • An MUA (mail user agent) is the client that we use to read/write email.
  • The MUA interacts with an MSA (mail submission agent) to send e-mail, or an MDA (mail delivery agent) to retrieve e-mail from a mailbox.

mutt is an MUA, so we need to set up the other bits for it to be able to interact with an MSA and an MDA, and that's why it is a little more work than setting up Evolution and so on where the tool takes care of setting up the whole chain.

Fetch e-mail with Offlineimap

There are a few tools that fetch e-mail. Offlineimap seemed to be widely used, so I settled for it as well. On Fedora, one can use DNF:

sudo dnf install offlineimap

One must then set up their accounts with credentials and the sort. An example config file is provided with the package at /usr/share/doc/offlineimap/offlineimap.conf.

The config format is quite self-explanatory. Here's an example:

accounts = account1

[Account account1]
localrepository = account1-local
remoterepository = account1-remote
status_backend = sqlite
postsynchook = notmuch new

[Repository account1-remote]
type = IMAP
remotehost = mailhost.com
remoteport = 143
remoteuser = username@mailhost.com
remotepass = password
ssl = no
folderfilter = lambda foldername: foldername in ['INBOX', 'Sent', 'Spam', 'Trash', 'Drafts']
createfolders = False
maxconnections = 2

[Repository account1-local]
type = Maildir
localfolders = ~/Mail
restoreatime = no

There's a "general" section where one defines what accounts are to be used. One can also define global options that will apply to all accounts here.

For each account, one then sets up the main configuration, followed by the remote and local repositories. There are other advanced options available too. The folderfilter, for example, is a Python expression that lets one select which folders on the remote should be synced. More in the offlineimap documentation.
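To illustrate, a folderfilter is any Python callable that Offlineimap invokes with each remote folder name; returning True keeps the folder. Here is a minimal runnable sketch mirroring the lambda in the config above (the folder names are illustrative):

```python
# Folders we want synced, as in the offlineimaprc example above.
keep = ['INBOX', 'Sent', 'Spam', 'Trash', 'Drafts']

# Offlineimap calls this with each remote folder name.
folderfilter = lambda foldername: foldername in keep

print(folderfilter('INBOX'))    # True
print(folderfilter('Archive'))  # False
```

Any callable works here, so more elaborate filters (regular expressions, exclusion lists) are just as easy to express.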

The postsynchook bit lets one run a command after Offlineimap has finished syncing. Here, it calls notmuch to update its database. More on notmuch later.

Once configured, one can run Offlineimap to fetch one's mail. The first sync will take quite a while, but subsequent syncs will be much quicker.


I set up a cronjob to sync my e-mail regularly. Most users also use a script that kills previously running Offlineimap instances that may have hung, so a script like this may be more useful:

#!/bin/bash
# Kill any hung Offlineimap instances from a previous run
check ()
{
    while pkill offlineimap; do
        sleep 2
    done
}

quick ()
{
    offlineimap -u quiet -q -s
}

full ()
{
    offlineimap -u quiet -s
}

# parse options: -q for a quick sync, -f for a full sync
while getopts "qf" OPTION; do
    case $OPTION in
        q)
            check
            quick
            exit 0
            ;;
        f)
            check
            full
            exit 0
            ;;
        *)
            echo "Nothing to do."
            exit 1
            ;;
    esac
done

My crontab then looks like this:

*/20 * * * * /home/asinha/bin/fetch-mail.sh -q
10 */8 * * * /home/asinha/bin/fetch-mail.sh -f

So, every 20 minutes, I do a quick sync, and once every 8 hours, I do a full sync.

Sending e-mail with msmtp

Now that we can fetch our e-mail, we look at sending e-mail. sendmail is quite well known, but its setup is a bit kludgy for me. msmtp was recommended by quite a few users. On Fedora, one can install it using DNF:

sudo dnf install msmtp

The configuration for msmtp is quite simple too. The package provides two example configuration files:


Here's an example:

protocol smtp
auth on
tls on
tls_trust_file /etc/ssl/certs/ca-bundle.crt
syslog LOG_USER
logfile ~/.msmtp.log
timeout 60

account account1
host smtp.hostname.com
port 587
domain hostname.com
from something@hostname.com
user username@hostname.com
password password

account account2
host smtp.anotherhostname.com
port 587
domain anotherhostname.com
from something@anotherhostname.com
user username@anotherhostname.com
password password

It has a defaults section where options common to all accounts can be set up. Here, it does the usual setup regarding TLS, logging, and so on.

A separate section for each account then holds the credentials. One can then send e-mail from the command line:

echo "Subject: Test" | msmtp -a account1 someone@anotherhost.com

Setting up the MUA: (neo)mutt

The mail-fetching and mail-sending agents are now set up, and we can fetch and send mail. We can now link these up to our MUA, mutt. Instead of mutt, I use neomutt, which is mutt with additional patches and features. It isn't in the Fedora repos yet, but there's a COPR repository set up for users:

sudo dnf copr enable flatcap/neomutt
sudo dnf install neomutt

The neomutt configuration is based on the mutt bits, and it's rather extensive. The package provides an example that I use as a starting point:


The important bits are here:

mailboxes ="account1"
mailboxes `find ~/Mail/account1/* -maxdepth 0 -type d | grep -v "tmp\|new\|cur" | sed 's|/home/asinha/Mail/|=\"|g' | sed 's|$|\"|g' | tr '\n' ' '`
set from = "user@hostname.com"
set use_from = "yes"
set reply_to = "yes"
set sendmail = "msmtp -a account1"
set sendmail_wait = 0
set mbox = "+account1/INBOX"
set postponed = "+account1/Drafts"
set record = "+account1/Sent"

The mailboxes lines define what folders are shown in the sidebar in neomutt. These are the folders we've set up Offlineimap to fetch for us. Similarly, the sendmail setting tells neomutt to use msmtp to send e-mail.
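The backtick pipeline in the mailboxes line above is dense: it strips the local Maildir prefix from each folder path, wraps the remainder in ="…" quoting, and joins everything with spaces. A small Python sketch of the same transformation, using illustrative paths:

```python
# Stand-ins for what `find ~/Mail/account1/* -maxdepth 0 -type d` would print
# (minus the tmp/new/cur entries that grep filters out).
folders = [
    "/home/asinha/Mail/account1/INBOX",
    "/home/asinha/Mail/account1/Sent",
]

# Strip the Maildir prefix, quote each entry, and join with spaces,
# just like the sed | sed | tr chain does.
quoted = " ".join('="%s"' % f.replace("/home/asinha/Mail/", "") for f in folders)
print(quoted)  # ="account1/INBOX" ="account1/Sent"
```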

If it all went well, running neomutt should bring up a window like the figure below:

A screenshot of Neomutt in action

On the left, there's the sidebar where all folders are listed. These can be configured using mailboxes as explained in the documentation here. On the right hand side, the various e-mails are listed on top in the index, and a particular e-mail is visible in the pager view. As can be seen, the index view also shows threads! (This is running in byobu, by the way, which shows the other information in the bottom information bar.) More on all of this in the documentation, of course.

Searching e-mail with notmuch

We have our e-mail set up, but at the moment we only have the rather basic search feature that mutt provides. notmuch, which thinks "not much mail" of your massive e-mail collection, helps here. notmuch is called after each Offlineimap sync above, in the postsynchook. Then, using simple keyboard shortcuts, one can use notmuch to search one's whole e-mail database. notmuch has quite a few advanced features, like searching threads, searching e-mail addresses, and the sort. notmuch comes with the handy notmuch-config tool, which makes configuration trivial. Here's an example search from the command line:

$ notmuch address from:*lists.fedoraproject.org

notmuch can also be used within neomutt with a few simple hotkeys:

macro index <F8> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<shell-escape>notmuch-mutt -r --prompt search<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
 "notmuch: search mail"

macro index <F9> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt -r thread<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
 "notmuch: reconstruct thread"

macro index <F6> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt tag -- -inbox<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
 "notmuch: remove message from inbox"

The three commands in a neomuttrc file will respectively:

  • bind F8 to open a neomutt search
  • bind F9 to find the whole thread based on the currently selected e-mail, across all folders
  • bind F6 to untag an e-mail (more on notmuch tagging in the docs)

Other tweaks

The aforementioned bits cover most of the main functions that one would need with e-mail. Here are some more tips that I found helpful.

I have not yet set up a command line address book client. There seem to be a few that sync with Gmail and other providers and can be used with mutt, but I don't need them yet. notmuch provides sufficient completion for the time being, and when I begin to use newer addresses that are not already in my mailbox, I shall look at address book clients. For those that are interested, these are what I've found:

Storing passwords using pass

Storing passwords as plain text is a terrible idea. Instead, most people use password managers. pass is an excellent command line password manager that uses GPG to encrypt password files. It even integrates with Git, so that a central repository can hold the encrypted files and can be cloned to various systems.

Both Offlineimap and msmtp permit a user to store passwords in a tool and then run a command to extract them. In the offlineimaprc, for example, one can use:

remotepasseval = get_pass("E-mail")

to fetch passwords from pass. Here get_pass is a python function that does the dirty work:

from subprocess import check_output

def get_pass(account):
    # the first line of `pass <account>` output is the password itself
    return check_output("pass " + account, shell=True).splitlines()[0].decode("utf-8")

Similarly, msmtp lets one use a shell command to get a password:

passwordeval pass E-mail

where the E-mail file is associated with the password for a certain account using pass.
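Both the remotepasseval and passwordeval hooks boil down to the same pattern: run a command and keep the first line of its output. A runnable sketch of that idea, with echo standing in for pass so no GPG setup is needed (the command and secret are illustrative):

```python
from subprocess import check_output

def get_secret(command):
    # Run the command and keep only the first line of its output,
    # the same thing get_pass above does with `pass`.
    return check_output(command, shell=True).splitlines()[0].decode("utf-8")

# `echo` stands in for `pass E-mail` here.
print(get_secret("echo s3cret"))  # s3cret
```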

Multiple accounts

Both Offlineimap and msmtp can handle multiple accounts. neomutt can too, but to set sane defaults each time one switches mailboxes, a bit of trickery is required. The gist here shows what's needed. Essentially, using a folder-hook, one updates the required configurations (signature, from address, sent mail folder, draft folder) when one switches to a folder associated with a different account. I use four accounts in neomutt currently. It works rather well. The snippet below is what I have in my neomutt configuration file. It sets up host3 as the default account, and each time I change to a different host folder, the folder-hook updates some configurations. Here, I have different files for each host.

# Hooks for multi-setup
# default
set folder ="~/Mail"
set spoolfile = "+host3/INBOX"
source ~/Documents/100_dotfiles/mail/host1.neomuttrc
source ~/Documents/100_dotfiles/mail/host4.neomuttrc
source ~/Documents/100_dotfiles/mail/host2.neomuttrc
source ~/Documents/100_dotfiles/mail/host3.neomuttrc

# folder hook
folder-hook host4/* source ~/Documents/100_dotfiles/mail/host4.neomuttrc
folder-hook host1/* source ~/Documents/100_dotfiles/mail/host1.neomuttrc
folder-hook host2/* source ~/Documents/100_dotfiles/mail/host2.neomuttrc
folder-hook host3/* source ~/Documents/100_dotfiles/mail/host3.neomuttrc

GPG signing

I sign my e-mails with my GPG key. neomutt supports this via a few configuration options:

set pgp_sign_as = 0xE629112D
set crypt_autosign = "yes"
set crypt_verify_sig = "yes"
set crypt_replysign = "yes"

E-mails will be signed when they're going out, and when a signed e-mail comes in, neomutt will verify the signature if the key is available, and so on. If you're not using GPG keys yet, this guide on the Fedora wiki is a great resource for beginners.

Viewing HTML mail and attachments

Even though I send all my e-mail as plain text, I do receive lots of HTML mail. neomutt can be set up to automatically view HTML e-mail. It does so by using a tool such as w3m to strip the e-mail of HTML tags and show the text. The screenshot below shows an example HTML e-mail from Quora.

A screenshot of Neomutt showing HTML e-mail.

A simple configuration line tells neomutt what to do:

auto_view text/html

neomutt uses information from mailcap to do this. For those that are unaware of what mailcap is, like I was, here's the manual page.

The configuration file for mailcap is ~/.mailcaprc. Mine looks like this:

audio/*; /usr/bin/xdg-open %s ; copiousoutput

image/*; /usr/bin/xdg-open %s ; copiousoutput

application/msword; /usr/bin/xdg-open %s ; copiousoutput
application/pdf; /usr/bin/xdg-open %s ; copiousoutput
application/postscript ; /usr/bin/xdg-open %s ; copiousoutput

text/html; qutebrowser %s && sleep 5 ; test=test -n "$DISPLAY";
nametemplate=%s.html; needsterminal
# text/html; lynx -dump %s ; copiousoutput; nametemplate=%s.html
text/html; w3m -I %{charset} -T text/html ; copiousoutput; nametemplate=%s.html

One can use either lynx or w3m. I tried both and settled for w3m. Fedora systems have a default mailcap file at /etc/mailcap, which I adapted from. The copiousoutput option marks commands whose output neomutt should display in its internal pager.

For cases where HTML e-mails also contain images, one can simply open the HTML e-mail in a browser. The HTML versions are present as attachments to the e-mail message. Pressing v on an e-mail message shows the attachment menu. The screenshot below shows the attachment menu for the same e-mail as above. Hitting enter opens the attached HTML version in the browser I've set up in my mailcap above, qutebrowser.

A screenshot of Neomutt showing e-mail attachments.

Note: all attachments can be viewed like this.

Right then, let's stick to the home row!

This post turned out to be a lot lengthier than I'd expected. There's always so much tweaking one can do. I hope this helps somewhat. It isn't complete by a long stretch, but it should include enough hints and links to enable a reader to search around and figure things out. Read the docs, read the manuals---it's all in there.

Happy e-mailing!

Ensuring keepalived starts after the network is ready

Posted by Major Hayden on December 15, 2017 09:18 PM

After a recent OpenStack-Ansible (OSA) deployment on CentOS, I found that keepalived was not starting properly at boot time:

Keepalived_vrrp[801]: Cant find interface br-mgmt for vrrp_instance internal !!!
Keepalived_vrrp[801]: Truncating auth_pass to 8 characters
Keepalived_vrrp[801]: VRRP is trying to assign ip address to unknown br-mgmt interface !!! go out and fix your conf !!!
Keepalived_vrrp[801]: Cant find interface br-mgmt for vrrp_instance external !!!
Keepalived_vrrp[801]: Truncating auth_pass to 8 characters
Keepalived_vrrp[801]: VRRP is trying to assign ip address to unknown br-mgmt interface !!! go out and fix your conf !!!
Keepalived_vrrp[801]: VRRP_Instance(internal) Unknown interface !
systemd[1]: Started LVS and VRRP High Availability Monitor.
Keepalived_vrrp[801]: Stopped
Keepalived[799]: Keepalived_vrrp exited with permanent error CONFIG. Terminating

OSA deployments have a management bridge for traffic between containers. These containers run the OpenStack APIs and other support services. By default, this bridge is called br-mgmt.

The keepalived daemon is starting before NetworkManager can bring up the br-mgmt bridge and that is causing keepalived to fail. We need a way to tell systemd to wait on the network before bringing up keepalived.

Waiting on NetworkManager

There is a special systemd target, network-online.target, that is not reached until all networking is properly configured. NetworkManager comes with a handy service called NetworkManager-wait-online.service that must be complete before the network-online target can be reached:

# rpm -ql NetworkManager | grep network-online

Start by ensuring that the NetworkManager-wait-online service starts at boot time:

systemctl enable NetworkManager-wait-online.service

Using network-online.target

Next, we tell the keepalived service to wait on network-online.target. Bring up an editor for overriding the keepalived.service unit:

systemctl edit keepalived.service

Once the editor appears, add the following text:
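A drop-in along these lines, based on the upstream systemd NetworkTarget documentation, does the job:

```ini
[Unit]
Requires=network-online.target
After=network-online.target
```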


Save the file in the editor and reboot the server. The keepalived service should come up successfully after NetworkManager signals that all of the network devices are online.

Learn more by reading the upstream NetworkTarget documentation.

The post Ensuring keepalived starts after the network is ready appeared first on major.io.

Some predictions for 2018

Posted by Christian F.K. Schaller on December 15, 2017 08:53 PM

I spent a few hours polishing my crystal ball today, so here are some predictions for Linux on the Desktop in 2018. The advantage of course for me in publishing these now is that I can later selectively quote the ones I got right to prove my brilliance, and the internet can selectively quote the ones I got wrong to prove my stupidity :)

Prediction 1: Meson becomes the de facto build system of the Linux community

Meson has been going from strength to strength this year, and a lot of projects which passed on earlier attempts to replace autotools have adopted it. I predict this trend will continue in 2018 and that by the end of the year everyone will agree that Meson has replaced autotools as the Linux community build system of choice. That said, I am not convinced the Linux kernel itself will adopt Meson in 2018.

Prediction 2: Rust puts itself on a clear trajectory to replace C and C++ for low level programming

Another rising star of 2017 is the programming language Rust. And while its pace of adoption will be slower than Meson's, I do believe that by the time 2018 comes to a close the general opinion will be that Rust is the future of low level programming, replacing old favorites like C and C++. Major projects like GNOME and GStreamer are already adopting Rust at a rapid pace, and I believe even more projects will join them in 2018.

Prediction 3: Apple's decline as a PC vendor becomes obvious

Ever since Steve Jobs died, it has become quite clear in my opinion that the emphasis on the traditional desktop is fading at Apple. The pace of hardware refreshes seems to be slowing, and MacOS X seems to be going more and more stale. Some pundits have already started pointing this out, and I predict that in 2018 Apple will no longer be considered the cool kid on the block for people looking for laptops, especially among the tech savvy crowd. Hopefully this is a good opportunity for Linux on the desktop to assert itself more.

Prediction 4: Traditional distro packaging for desktop applications will start fading away in favour of Flatpak

From where I am standing, I think 2018 will be the breakout year for Flatpak as a replacement for getting your desktop applications as RPMs or debs. I predict that by the end of 2018 more or less every Linux desktop user will be running at least one Flatpak on their system.

Prediction 5: Linux Graphics competitive across the board

I think 2018 will be a breakout year for Linux graphics support. I think our GPU drivers and APIs will be competitive with any other platform, both in completeness and performance. So by the end of 2018 I predict that you will see Linux game ports by major porting houses like Aspyr and Feral that perform just as well as their Windows counterparts. What is more, I also predict that by the end of 2018 discrete graphics will be considered a solved problem on Linux.

Prediction 6: H265 will be considered a failure

I predict that by the end of 2018 H265 will be considered a failed codec effort and the era of royalty bearing media codecs will effectively start coming to an end. H264 will be considered the last successful royalty bearing codec, and all new codecs coming out will be open source and royalty free.

More Bluetooth (and gaming) features

Posted by Bastien Nocera on December 15, 2017 03:57 PM
In the midst of post-release bug fixing, we've also added a fair number of new features to our stack. As usual, new features span a number of different components, so integrators will have to be careful picking up all the components when, well, integrating.

PS3 clones joypads support

Do you have a PlayStation 3 joypad that feels just a little bit "off"? You can't find the Sony logo anywhere on it? The figures on the face buttons look like barbed wire? And if it were a YouTube video, it would say "No copyright intended"?

Bingo. When plugged in via USB, those devices advertise themselves as SHANWAN or Gasia, and implement the bare minimum to work when plugged into a PlayStation 3 console. But as a Linux computer would behave slightly differently, we need to fix a couple of things.

The first fix was simple, but necessary to be able to do any work: disable the rumble motor that starts as soon as you plug the pad through USB.

Once that's done, we could work around the fact that the device isn't Bluetooth compliant, and hard-code the HID service it's supposed to offer.

Bluetooth LE Battery reporting

Bluetooth Low Energy is the new-fangled (7-year old) protocol for low throughput devices, from a single coin-cell powered sensor, to input devices. What's great is that there's finally a standardised way for devices to export their battery statuses. I've added support for this in BlueZ, which UPower then picks up for desktop integration goodness.

There are a number of Bluetooth LE joypads available for pickup, including a few that should be firmware upgradeable. Look for "Bluetooth 4" as well as "Bluetooth LE" when doing your holiday shopping.

gnome-bluetooth work

Finally, this is the boring part. Benjamin and I reworked code that's internal to gnome-bluetooth, as used in the Settings panel as well as the Shell, to make it use modern facilities like GDBusObjectManager. The overall effect of this is less code that is less brittle and more reactive when Bluetooth adapters come and go, such as when using airplane mode.

Apart from the kernel patch mentioned above (you'll know if you need it :), those features have been integrated in UPower 0.99.7 and in the upcoming BlueZ 5.48. And they will of course be available in Fedora, both in rawhide and as updates to Fedora 27 as soon as the releases have been done and built.


Outreachy 2017: Meet the interns!

Posted by Fedora Community Blog on December 15, 2017 08:30 AM

The results of Outreachy are out! Outreachy is a paid, remote internship program that helps traditionally underrepresented people in tech make their first contributions to Free and Open Source Software (FOSS) communities. Fedora is participating in this round of Outreachy as a mentoring organization. We have two interns for this round which started on December 5 and goes until March 5, 2018. We found some time to interview both of Fedora’s Outreachy interns!

Shaily, India

  • Fedora mentor: Aurélien Bompard
  • Project: Fedora Hubs: adding a full-text search feature

Tell us a little about yourself?

I’m a political science graduate from Delhi University, India. I started learning programming as part of high school coursework and then continued exploring on and off out of interest through the internet.

How did you hear about Outreachy?

I spend a lot of time on Quora; that's where I read about Outreachy while browsing through a summer internships topic thread.

What caught your attention about Fedora? How does it align with your personal interests?

Prior to Outreachy, I had only made small apps for the sake of learning specific frameworks, so I thought it would be difficult for me to grasp a relatively large code base well enough. However, I felt quite comfortable with Hubs after completing the required task, which was structured in the form of an interactive tutorial and explained how the stack was set up. I think what caught my attention here is how everything was so well documented – not just the code, but I could also find blog posts and videos which helped me see the inspiration behind the project and put forth the points of view of many people involved in it.

I feel motivated to contribute to Hubs because it’s going to enable more people to get involved in Fedora. It’ll help make things more accessible to both new and experienced contributors – which will improve the general experience of working with projects associated with Hubs.

What are you looking forward to most during this Outreachy cycle?

I want to use this time in learning the most that I can since I’m working with people who are much more experienced in various fields.

Where do you see yourself after you complete this Outreachy cycle?

I see Outreachy as a great opportunity to acquire real-world experience. I hope to find an exciting full-time position after the internship!

Alisha Aneja

Tell us a little about yourself?

I am a Masters student at the University of Melbourne. Having pursued my undergraduate degree in Computer Science as well, I always knew I had a love for computers and wanted to be a programmer. I am a FOSS enthusiast and contribute to Mozilla and Rust projects as well.

How did you hear about Outreachy?

Through a friend who was an Outreachy intern for OpenStack in 2015 and is now employed full-time at Red Hat.

What caught your attention about Fedora? How does it align with your personal interests?

My project is to develop and improve both new and existing administrative tools for 389 Directory Server using Python. The directory server uses LDAP and that is one of the reasons I applied for this project. LDAP is used in many medium and large organizations; however, there are not many resources for learning about it online and it has a steep learning curve. Doing this project is the best way to get more experience in Python to play with and get to know about LDAP, which is surely going to be exciting!

What are you looking forward to most during this Outreachy cycle?

Getting more experience in Python, learning about LDAP, learning the concepts of the Directory Server and how administrators use it and getting involved with the amazing Fedora community!

Where do you see yourself after you complete this Outreachy cycle?

I see myself…

  • Contributing to 389 Directory Server more confidently, in various areas like re-writing the plugins, integration of new technologies like fuzzing, performance analysis and enhancement.
  • As a more experienced Python programmer and administrator with knowledge of LDAP.
  • Giving talks about the internals of the Directory Server.
  • As a better community member.

Best wishes for Outreachy 2017!

We wish them both a successful journey as Outreachy interns and look forward to hearing about their experiences soon!

The post Outreachy 2017: Meet the interns! appeared first on Fedora Community Blog.

Fedora Classroom Session: Fedora QA 102

Posted by Fedora Magazine on December 15, 2017 08:00 AM

Fedora Classroom sessions continue next week with a session on Fedora QA. The general schedule for sessions appears on the wiki. You can also find resources and recordings from previous sessions there. Here are details about this week’s session on Wednesday, December 22 at 16:00 UTC. That link allows you to convert the time to your timezone.

Topic: Fedora QA 102

As the Fedora QA wiki page explains, this project covers testing of the software that makes up Fedora. The team’s goal is to continually improve the quality of Fedora releases and updates. You can find more information on the activities of the QA team on their wiki page.

This is the second classroom in the Fedora QA series. If you missed the previous Fedora QA 101 session, you can find the classroom resources in the Classroom archive and the agenda on the magazine announcement post. This Classroom session covers the topics listed below:

  1. Screen sharing and walk-through of release validation
  2. Screen sharing and walk-through of updates testing
  3. Testing cloud images with Amazon EC2
  4. How to write test cases for packages
  5. Proposing and hosting your own test days


Sumantro Mukherjee works at Red Hat and contributes to numerous open source projects in his free time. He also loves to contribute to Fedora QA and takes pleasure in helping new joiners contribute. Furthermore, Sumantro represents the Asia Pacific region in Fedora Ambassadors Steering Committee (FAmSCo). You can get in touch with him via his Fedora project e-mail or on IRC. Sumantro also goes by the nickname sumantrom on Freenode.

Geoffrey Marr, also known by his IRC name as coremodule, is a Red Hat employee and Fedora contributor with a background in Linux and cloud technologies. While working, he spends his time lurking the Fedora QA wiki and test pages. Away from work, he enjoys RaspberryPi projects, especially those focusing on software-defined radio.

Joining the session

This session takes place on Blue Jeans. The following information will help you join the session:

We hope you attend, learn from, and enjoy this session. Also, if you have any feedback about the sessions, have ideas for a new one, or want to host a session, feel free to comment on this post or edit the Classroom wiki page.

PHP version 7.2.1RC1

Posted by Remi Collet on December 15, 2017 05:56 AM

Release Candidate versions are available in remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, perfect solution for such tests (for x86_64 only), and also as base packages.

RPM of PHP version 7.2.1RC1 are available as SCL in remi-test repository and as base packages in the remi-php72-test repository for Fedora 25-27 and Enterprise Linux.


PHP version 7.2.1 is planned for January 4th.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

Notice: version 7.2.1RC1 is also available in Fedora 27 rawhide for QA.

EL-7 packages are built using RHEL-7.4.

RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).

Software Collections (php72)

Base packages (php)

Fedora 27: Using the Atom editor with Teletype.

Posted by mythcat on December 14, 2017 08:44 PM
The Atom editor is a very good free and open-source text and source code editor for macOS, Linux, and Microsoft Windows.
Developed by GitHub, it comes with support for plug-ins written in Node.js, embedded Git control, and more features.
Today I will show you how to install this tool with Teletype on the Fedora 27 Linux distribution.
Go to the Atom homepage from your web browser and click to download the RPM version.
Use this command to install it:
$ sudo su
# cd Download
# dnf install atom.x86_64.rpm
Let's see this install:

The next step is to use teletype from atom.

Just install the teletype package from the Settings area of the Atom editor.
The Teletype tool introduces the concept of real-time "portals" for sharing workspaces.
This tool uses WebRTC to encrypt all communication between collaborators.
Use Teletype with one click on the radio tower icon in the Atom status bar.
This opens a dialog on the right side of the screen that asks for your Teletype token.
You can get this token from here.
After you enter the token, use the check button to share your content, and Teletype will generate an ID.
Share this ID with your development team to share your work.

RETerm to The Terminal with a GUI

Posted by Mo Morsi on December 14, 2017 04:45 PM

When it comes to user interfaces, most (if not all) software applications can be classified into one of three categories:

  • Text Based - whether they entail one-off commands, interactive terminals (REPL), or text-based visual widgets, these saw a major rise in the 50s-80s though were usurped by GUIs in the 80s-90s
  • Graphical - GUIs, or Graphical User Interfaces, facilitate creating visual windows which the user may interact with via the mouse or keyboard. There are many different GUI frameworks available for various platforms
  • Web Based - A special type of graphical interface rendered via a web browser; many applications provide their frontend via HTML, JavaScript, & CSS
Interfaces comparison

In recent years, modern interface trends seem to be moving in the direction of Web User Interfaces (WUIs), with increasing numbers of apps offering their functionality primarily via HTTP. That being said, GUIs and TUIs (Text User Interfaces) remain an entrenched use case for various reasons:

  • Web browsers, servers, and network access may not be available or permissible on all systems
  • Systems need mechanisms to access and interact with the underlying components, in case higher-level constructs, such as graphics and network subsystems, fail or are unreliable
  • Simpler text & graphical implementations can be coupled and optimized for the underlying operational environment without having to worry about portability and cross-environment compatibility. Clients can thus be simpler and more robust.

Finally, there is a certain pleasing aesthetic to simple text interfaces that you don't get with GUIs or WUIs. Of course this is a human-preference sort of thing, but it's often nice to return to our computational roots as we move into the future of complex gesture- and voice-controlled computer interactions.

Scifi terminal

When working on a recent side project (to be announced), I was exploring various concepts for the user interface to put on top of it. Because other solutions exist in the domain in which I'm working (and for other reasons), I wanted to explore something novel as far as user interaction goes, and decided to experiment with a text-based approach. ncurses is the go-to library for this sort of thing, being available on most modern platforms, along with many widget libraries and high-level wrappers.


Unfortunately ncurses comes with a lot of boilerplate, and it made sense to separate that from the project I intend to use this for. Thus the RETerm library was born, with the intent to provide a high-level DSL to implement terminal interfaces and applications (... in Ruby of course <3 !!!)

Reterm sc1

RETerm, aka the Ruby Enhanced TERMinal, allows the user to incorporate high-level text-based widgets into an organized terminal window, with seamless standardized keyboard interactions (mouse support is on the roadmap). So for example, one could define a window containing a child widget like so:

require 'reterm'
include RETerm

value = nil

init_reterm {
  win = Window.new :rows => 10,
                   :cols => 30

  slider = Components::VSlider.new
  win.component = slider
  value = slider.activate!
}

puts "Slider Value: #{value}"

This would result in the following interface containing a vertical slider:

Reterm sc2

RETerm ships with many built-in widgets including:

Text Entry

Reterm sc3

Clickable Button

Reterm sc4

Radio Switch/Rocker/Selectable List

Reterm sc5 Reterm sc6 Reterm sc7

Sliders (both horizontal and vertical)


Ascii Text (with many fonts via artii/figlet)

Reterm sc8

Images (via drawille)

Reterm sc9

RETerm is now available via rubygems. To install, simply:

  $ gem install reterm

That's All Folks... but wait, there's more!!! After all:

DeLorean meme

For a bit of a value-add, I decided to implement a standard schema where text interfaces can be described in a JSON config file and loaded by the framework, similar to the XML schemas which GTK and Android use for their interfaces. One can simply describe their interface in JSON and the framework will instantiate the corresponding text interface:

  {
    "window" : {
      "rows"      : 10,
      "cols"      : 50,
      "border"    : true,
      "component" : {
        "type" : "Entry",
        "init" : {
          "title" : "<C>Demo",
          "label" : "Enter Text: "
        }
      }
    }
  }
Reterm sc10
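
To make the config-driven instantiation concrete, here is a minimal sketch in Python (RETerm itself is Ruby; the Entry class and WIDGETS registry below are hypothetical stand-ins, not RETerm's actual code) of how the "type" field in such a JSON description can be mapped to a widget class:

```python
import json

# Hypothetical widget class standing in for a real text-entry component.
class Entry:
    def __init__(self, title="", label=""):
        self.title, self.label = title, label

WIDGETS = {"Entry": Entry}  # registry mapping "type" strings to classes

def build_window(config):
    """Instantiate a window and its child widget from a JSON description."""
    spec = config["window"]
    comp = spec.get("component")
    widget = WIDGETS[comp["type"]](**comp.get("init", {})) if comp else None
    return {"rows": spec["rows"], "cols": spec["cols"], "component": widget}

win = build_window(json.loads('''
{"window": {"rows": 10, "cols": 50, "border": true,
 "component": {"type": "Entry",
               "init": {"title": "<C>Demo", "label": "Enter Text: "}}}}
'''))
print(type(win["component"]).__name__)  # Entry
```

The same registry approach extends naturally to other widget types (sliders, buttons, lists) by adding entries to the mapping.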

To assist in generating this schema, I implemented a graphical designer, where components can be dragged and dropped into a 2D canvas to layout the interface.

That's right, you can now use a GUI based application to design a text-based interface.

Retro meme

The Designer itself can be found in the same repo as the RETerm project, located in the "designer/" subdir.

Reterm designer

To use it, you need to install visualruby (a high-level wrapper around ruby-gnome) like so:

  $ gem install visualruby

And that's it! (for real this time) This was certainly a fun side-project to a side-project (toss in a third "side-project" if you consider the designer to be its own thing!). As I return to the project using RETerm, I aim to revisit it every so often, adding new features, widgets, etc....



Graylog as destination in syslog-ng

Posted by Peter Czanik on December 14, 2017 10:11 AM

Version 3.13 of syslog-ng introduced a graylog2() destination and a GELF (Graylog Extended Log Format) template to make sending syslog messages to Graylog easier. You can also use them to forward simple name-value pairs where the name starts with a dot or underscore. If the names of your name-value pairs contain dots anywhere other than the first character, you should use JSON formatting directly instead of the GELF template and send logs to a raw TCP port in Graylog, which can then extract fields from nested JSON.

Before you begin

The graylog2() destination was added in syslog-ng version 3.13. If you want to use the GELF template, you need this or a later version. Sending JSON-formatted messages is possible with any recent syslog-ng version. For my tests, I used a ready-to-go Graylog 2.3.2 virtual appliance and syslog-ng 3.13.2.

Using the graylog2() destination

Starting with syslog-ng version 3.13, you can send syslog messages to Graylog using the graylog2() destination. It uses the GELF template, the native data format of Graylog.

On the Graylog side, you have to configure a GELF TCP input.

On the syslog-ng side, configuration is also quite simple. All you need to configure is the name or IP address of the host running Graylog.

destination d_graylog {
    graylog2(
        host("graylog.example.com")  # example: name or IP of your Graylog server
    );
};

If you parsed your messages using syslog-ng, the template also forwards any name-value pairs where the name starts with a dot or underscore.

Note that if there is a dot in a field name other than the first character, syslog-ng creates nested JSON while formatting the message. Nested JSON is not automatically parsed in GELF messages.

Sending nested JSON to Graylog

While sending nested JSON inside GELF is possible, it is not really convenient. If you make heavy use of parsing and normalization in syslog-ng and use dot notation in field names, you should rather use pure JSON instead of GELF to forward your messages.
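
To see why dotted names lead to nested structures, here is a small Python sketch (my own illustration, not syslog-ng code) of the kind of expansion $(format-json) performs when field names contain dots:

```python
import json

def dotted_to_nested(pairs):
    """Expand dotted field names into nested dicts, similar in spirit to
    how syslog-ng's $(format-json) treats dots in name-value pairs."""
    root = {}
    for name, value in pairs.items():
        node = root
        parts = name.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend, creating sub-objects
        node[parts[-1]] = value
    return root

flat = {"ip.src": "10.0.0.1", "ip.dst": "10.0.0.2", "MESSAGE": "test"}
print(json.dumps(dotted_to_nested(flat), sort_keys=True))
# {"MESSAGE": "test", "ip": {"dst": "10.0.0.2", "src": "10.0.0.1"}}
```

A GELF input would treat the whole "ip" object as one opaque field, which is why the raw TCP input with a JSON extractor is the recommended route for such fields.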

On the Graylog side, create a new raw TCP input. Once it is ready, add a JSON extractor to it.

On the syslog-ng side, use a network destination combined with a template utilizing format-json:

destination d_jsontcp {
    network("graylog.example.com" port(5555)  # example host and raw TCP input port
        template("$(format-json --scope all-nv-pairs)\n")
    );
};

You are now ready to query any of the fields sent to Graylog.

Recommended reading

In this blog I gave you a quick overview of how you can send logs to Graylog using syslog-ng. For more in-depth information, I recommend reading the documentation of different components:


If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post Graylog as destination in syslog-ng appeared first on Balabit Blog.

The Intel ME vulnerabilities are a big deal for some people, harmless for most

Posted by Matthew Garrett on December 14, 2017 01:31 AM
(Note: all discussion here is based on publicly disclosed information, and I am not speaking on behalf of my employers)

I wrote about the potential impact of the most recent Intel ME vulnerabilities a couple of weeks ago. The details of the vulnerability were released last week, and it's not absolutely the worst case scenario but it's still pretty bad. The short version is that one of the (signed) pieces of early bringup code for the ME reads an unsigned file from flash and parses it. Providing a malformed file could result in a buffer overflow, and a moderately complicated exploit chain could be built that allowed the ME's exploit mitigation features to be bypassed, resulting in arbitrary code execution on the ME.

Getting this file into flash in the first place is the difficult bit. The ME region shouldn't be writable at OS runtime, so the most practical way for an attacker to achieve this is to physically disassemble the machine and directly reprogram it. The AMT management interface may provide a vector for a remote attacker to achieve this - for this to be possible, AMT must be enabled and provisioned and the attacker must have valid credentials[1]. Most systems don't have provisioned AMT, so most users don't have to worry about this.

Overall, for most end users there's little to worry about here. But the story changes for corporate users or high value targets who rely on TPM-backed disk encryption. The way the TPM protects access to the disk encryption key is to insist that a series of "measurements" are correct before giving the OS access to the disk encryption key. The first of these measurements is obtained through the ME hashing the first chunk of the system firmware and passing that to the TPM, with the firmware then hashing each component in turn and storing those in the TPM as well. If someone compromises a later point of the chain then the previous step will generate a different measurement, preventing the TPM from releasing the secret.

However, if the first step in the chain can be compromised, all these guarantees vanish. And since the first step in the chain relies on the ME to be running uncompromised code, this vulnerability allows that to be circumvented. The attacker's malicious code can be used to pass the "good" hash to the TPM even if the rest of the firmware has been tampered with. This allows a sufficiently skilled attacker to extract the disk encryption key and read the contents of the disk[2].
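
The chain-of-measurements argument can be made concrete with a toy model. This Python sketch (a simplified stand-in for TPM PCR-extend semantics, not real firmware code) shows why every later guarantee hinges on the first measurement being honest:

```python
import hashlib

def extend(pcr, component):
    """TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_boot(reported_stage0, components):
    # The ME reports the first measurement; each later stage chains off it.
    pcr = extend(b"\x00" * 32, reported_stage0)
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good = measure_boot(b"firmware-stage0", [b"bootloader", b"kernel"])

# An honest ME measures what is actually in flash, so tampering anywhere
# in the chain yields a different final value:
assert measure_boot(b"evil-stage0", [b"bootloader", b"kernel"]) != good
assert measure_boot(b"firmware-stage0", [b"bootloader", b"evil-kernel"]) != good

# A compromised ME, however, can report the known-good stage-0 hash no
# matter what code is really running, and the whole chain verifies again:
forged = measure_boot(b"firmware-stage0", [b"bootloader", b"kernel"])
assert forged == good
```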

In addition, TPMs can be used to perform something called "remote attestation". This allows the TPM to provide a signed copy of the recorded measurements to a remote service, allowing that service to make a policy decision around whether or not to grant access to a resource. Enterprises using remote attestation to verify that systems are appropriately patched (eg) before they allow them access to sensitive material can no longer depend on those results being accurate.

Things are even worse for people relying on Intel's Platform Trust Technology (PTT), which is an implementation of a TPM that runs on the ME itself. Since this vulnerability allows full access to the ME, an attacker can obtain all the private key material held in the PTT implementation and, effectively, adopt the machine's cryptographic identity. This allows them to impersonate the system with arbitrary measurements whenever they want to. This basically renders PTT worthless from an enterprise perspective - unless you've maintained physical control of a machine for its entire lifetime, you have no way of knowing whether it's had its private keys extracted and so you have no way of knowing whether the attestation attempt is coming from the machine or from an attacker pretending to be that machine.

Bootguard, the component of the ME that's responsible for measuring the firmware into the TPM, is also responsible for verifying that the firmware has an appropriate cryptographic signature. Since that can be bypassed, an attacker can reflash modified firmware that can do pretty much anything. Yes, that probably means you can use this vulnerability to install Coreboot on a system locked down using Bootguard.

(An aside: The Titan security chips used in Google Cloud Platform sit between the chipset and the flash and verify the flash before permitting anything to start reading from it. If an attacker tampers with the ME firmware, Titan should detect that and prevent the system from booting. However, I'm not involved in the Titan project and don't know exactly how this works, so don't take my word for this)

Intel have published an update that fixes the vulnerability, but it's pretty pointless - there's apparently no rollback protection in the affected 11.x MEs, so while the attacker is modifying your flash to insert the payload they can just downgrade your ME firmware to a vulnerable version. Version 12 will reportedly include optional rollback protection, which is little comfort to anyone who has current hardware. Basically, anyone whose threat model depends on the low-level security of their Intel system is probably going to have to buy new hardware.

This is a big deal for enterprises and any individuals who may be targeted by skilled attackers who have physical access to their hardware, and entirely irrelevant for almost anybody else. If you don't know that you should be worried, you shouldn't be.

[1] Although admins should bear in mind that any system that hasn't been patched against CVE-2017-5689 considers an empty authentication cookie to be a valid credential

[2] TPMs are not intended to be strongly tamper resistant, so an attacker could also just remove the TPM, decap it and (with some effort) extract the key that way. This is somewhat more time consuming than just reflashing the firmware, so the ME vulnerability still amounts to a change in attack practicality.

comment count unavailable comments

Cockpit 158

Posted by Cockpit Project on December 13, 2017 09:30 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 158.

Add check boxes for common NFS mount options

Setting up an NFS mount point on the Storage page now offers check boxes for the most common mount options: “Mount at boot” and “Mount read only”. Other arbitrary options can still be given in the “Custom” input line, as before.

NFS option checkboxes

Clarify Software Update status if only security updates are available

In that case the status message is now “n security fixes” instead of “n updates, including n security fixes”.

Create self-signed certificates with SubjectAltName

When connecting to Cockpit through SSL (https://…) without explicitly configuring a certificate, Cockpit generates a self-signed one. This certificate now has a SubjectAltName: field that is valid for localhost. Some browsers, like Chromium, require this field to accept a certificate for an SSL connection.

This allows administrators or users to import Cockpit’s certificate into the system or user certificate database so that web browsers can connect to Cockpit without SSL errors:

openssl s_client -connect < /dev/null | \
    sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/cert.pem
certutil -d sql:$HOME/.pki/nssdb -A -t "TC,C,P" -n cockpit -i /tmp/cert.pem

Try it out

Cockpit 158 is available now:

Election night hackathon supports civic engagement

Posted by Justin W. Flory on December 13, 2017 09:00 AM
Open source civic projects at annual Election Night Hackathon

This article was originally published on Opensource.com.

On November 7, 2017, members of the RIT community came together for the annual Election Night Hackathon held in the Simone Center for Student Innovation. This year marked the seventh anniversary of a civic tradition with the FOSS@MAGIC community. As local and state election results came in across nine projectors, students and professors worked together on civic-focused projects during the night. Dan Schneiderman, the FOSS@MAGIC Community Liaison, compiled lists of open APIs that let participants use public sets of data made available by governments at the federal, state, and local levels.

The hackathon officially began at 5:00pm and went until 10:00pm. Plenty of pizza and drinks were provided to fuel participants during the evening.

Open source with open government

Each year, the hackathon welcomes students and faculty to analyze civic problems happening in the local community, state, or country, and then propose a project to address them. MAGIC Center faculty help students choose open source licenses to share their projects. Organizers encourage students to use a site like GitHub to publish and share their code.

Second Avenue Learning, an educational game company in Rochester, demonstrated their Voter’s Ed app, which replays historic elections, keeps voters up-to-date on current ones, and lets them simulate their own using open data and HTML. It also allows users to examine key issues and hot topics related to national-level events. The company, represented by founder Victoria Van Voorhis and two employees (one an RIT alum), held a design discussion for new features to prototype with students and the community. Sean Sutton and Paul Ferber (RIT faculty) provided subject matter expertise for the application.

While people began their projects, coverage of the local and state elections was displayed across nine different projectors. As the night progressed, votes from local and state elections rolled in. Rochester coverage was enhanced, since Monroe County is one of three counties in New York that releases public data for election coverage. Some participants even used the local Henrietta data for their own projects.

LibreCorps internship

Pratik Shirish Kulkarni, a second-year computer science major from Mumbai, India, presented the current status of his FOSS@MAGIC LibreCorps internship. LibreCorps placed Pratik with UNICEF Innovation in Manhattan, where he worked on MagicBox, a set of big data APIs and technologies used to chart Zika outbreaks and connectivity across schools in Africa.

Pratik demoed some of the work, which he is continuing part-time on campus this semester, funded by UNICEF. Another internship to work on the project is currently posted in Handshake.

Where can I vote?

Chris Bitler demonstrated his "Where can I vote?" app at the end of the Election Night Hackathon

Chris Bitler demonstrated his “Where can I vote?” app at the end of the night

Third-year student Chris Bitler created a tool to make it easier to get to the polling booth. His web application, “Where can I vote?“, takes a specific election and your address, and gives you directions from your address to the closest polling location. It uses the Google Civic Information API to find election data and calculate a specific address’s voting district and candidates.
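
As a sketch of what such a lookup involves, the snippet below builds a voterInfoQuery request for the Google Civic Information API in Python (the endpoint path and parameter names reflect the public v2 API as I understand it; the election ID and key are placeholders, and this is not Chris's actual code):

```python
from urllib.parse import urlencode

BASE = "https://www.googleapis.com/civicinfo/v2/voterinfo"

def voterinfo_url(address, election_id, api_key):
    """Build a voterInfoQuery URL; check the current API reference
    before relying on these parameter names."""
    query = urlencode({"address": address,
                       "electionId": election_id,
                       "key": api_key})
    return f"{BASE}?{query}"

url = voterinfo_url("1 Lomb Memorial Dr, Rochester, NY", 2000, "API_KEY")
print(url)
```

The JSON response includes polling locations and contests for the given address, which is the data an app like this can map to directions and ballot details.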

Chris was exploring project ideas at the beginning of the hackathon, but quickly found the civic data API returned interesting data about polling locations. “After seeing that, I gave some thought to how some people don’t know their polling location and how a simple website could be useful for that,” Chris said. His web application was motivated by simplicity, so anyone could navigate it without getting lost in information.

In the spirit of open source, Chris open sourced his project on GitHub under the MIT License.

Linkybook: Local election data in real-time

Another project during the night focused on tracking local election data in Chautauqua, Monroe, and Suffolk Counties. RIT and FOSS@MAGIC alumnus Nathaniel Case continued work on his monroe-elections application during the night. The site shows data for all races in the three counties.

During the night, his web application updated in real-time as the results from the local elections began to appear. Election results for the races are quick to understand and read. Additionally, referendum results and other non-partisan elections are available.

Nathaniel open sourced his project on GitHub under both the DBAD and GPLv3 licenses.

Join us next time!

The night ended after a quick round of project demos and finishing up project work. FOSS@MAGIC has more events planned in the near future. On November 15th, Christine Abernathy from Facebook’s Open Source Program talked about how Facebook approaches open source and how they’ve solved engineering problems with it.

You can learn more about the FOSS@MAGIC initiative on their website. Participation on the mailing list is welcome.

The post Election night hackathon supports civic engagement appeared first on Justin W. Flory's Blog.

Fedora 27 Release Party – Bengaluru

Posted by Fedora Community Blog on December 13, 2017 08:30 AM


I have been a part of the Fedora community for the last 4 releases and have seen a lot of changes. Among the things that have not changed, the Fedora Release Party in Bangalore is one of them.

Thanks to a bunch of very active contributors and an ambassador & mentor (Sumantro Mukherjee), it has now become a custom to celebrate a Fedora Release Party in Bangalore.

Different venues, different topics, different cakes, but familiar faces joining us and sharing their experiences in the Fedora community to encourage new contributors to continue their noble work, and users to start contributing. The success story of a student who got an internship in a big company and then became an employee always boosts the energy of college students.

At the last release party, we tried to on-board a few students for QA contributions, and this time they came with a lot of reports. They are now active contributors, and they introduced themselves to the others. I invited my juniors to be a part of the release party so that we could help them see the benefits of open source contributions.

We started with the latest Fedora features, followed by some success stories and introductions.

Cake cut by a Participant

Then we cut the cake. Afterwards, we discussed a few interesting things going on. Sumantro talked about Fedora Modularity and explained how it can solve the problem of sacrificing stability for bleeding-edge features.

Sumantro talking about new features and modularity

I briefly explained how Fedora uses Ansible and how attendees can use Ansible in their daily life and work.

Vipul discussing Ansible

Lunch had arrived, and I am sure everyone was waiting to grab something. We were having our lunch when a few of my juniors asked how scalable Ansible is. We got an idea! By popular demand, Saurabh Badhwar explained how the performance engineering team uses Fedora and Ansible to do their work.

Saurabh Badhwar discussing performance and scale

After leaving everyone amazed, Saurabh wrapped up the session. Everyone was discussing how they started with their contributions, the challenges they faced, and so on. Meanwhile, my juniors, who were not contributors (yet), were installing Fedora on their laptops. Akshat Ahuja managed the installation booth amazingly. We also showed Fedora on my Raspberry Pi.

Happy participant using Fedora on ARM

We thought of talking to people who started contributing to Fedora since the last release.

Here is a shorter version of their interviews (not really, just normal conversations).

1. Akshat Ahuja

I was always interested in changing my OS and wanted the power to customise the OS according to me, but my journey in Fedora started when I met Vipul Siddharth and Prakash Mishra, who introduced me to Fedora. I fell in love with the way Linux and Fedora work and became very interested in FOSS.

2. Buvanesh Sivasubramaniyam

My first contribution was in the F24 release cycle, by participating in a test day with Sumantro and Vipul. But I have been contributing to localization since the last release. Right now, I am willing to contribute to Fedora Infra.

3. Srijita Mukherjee

It has just been a year that I have been contributing to open source, more precisely to Fedora, and mostly to package testing. I have actively taken part in Test Days (like i18n, GNOME, Upgrade test days, and a few more). It was Sumantro who explained to me the past and future of open source, and hence I am here at Red Hat, working as a Technical Writer for Gluster. But that does not stop me from contributing to other projects. Gluster upstream docs is another place where I contribute. I will definitely do more in the coming days.

4. Manik Chugh

Recently, I started my contribution journey with the Fedora design team, and Kanika has been guiding me in this. So far, the journey has been great, and I wish to get involved in the community as much as possible.

And in the end, here are a few more pictures from the party.


Vipul sharing his contribution story

The cake had more yum than dnf

Last picture

The post Fedora 27 Release Party – Bengaluru appeared first on Fedora Community Blog.

Make a DIY digital jukebox: Part 2

Posted by Fedora Magazine on December 13, 2017 08:00 AM

Welcome to Part 2 of a DIY digital jukebox project. Part 1 of this project addressed how to tweak the system to provide optimal audio performance. With a minimal installation of Fedora Server, the system focused on doing one thing: playing audio. The article covered how to configure audio processing with real-time priority and enhanced disk response time with the I/O scheduler to aid with audio data delivery from the hard drive. It also touched on what bit-perfect playback is, why it’s essential to maintain the purity of the audio source, and how to avoid tainting that data on its way to the hardware. All these contribute to achieving that crisp sound demanded by audiophiles.

This guide provides “do-it-yourselfers” who may be audiophiles and new to Linux the resources and collective information to build their own audio transport system from scratch. Part 2 completes the transformation from computer to appliance. You’ll learn how to:

  1. Install common open-source codecs like FLAC and Ogg-Vorbis from the official Fedora repository
  2. Install and configure the XMMS2 software
  3. Configure XMMS2 systemd service and open TCP ports via Firewalld
  4. Connect the client to the digital jukebox
  5. Test for bit-accurate playback via the command-line

Installing the codecs

To play your audio files you must install some codecs. This article only covers popular, open-source codecs available in the official Fedora repository. To install non-free or legally encumbered codecs, you’ll need to use third-party repositories.

To play FLAC, Vorbis and MP3 files, run this command using sudo:

sudo dnf install flac vorbis libmad

Installing the XMMS2 server software

For server/client software you’ll use XMMS2 (X-Platform Music Multiplexing System 2). XMMS2 is a framework built specifically for audio processing. XMMS2 is modular by design and has a variety of plugins to expand its capabilities. These plugins apply to audio transport, decoders, effects, output, and importing and exporting playlists. For the sake of simplicity, and to avoid additional processing, you won’t install any plugins in this article.

To install the XMMS2 server software, and allow the jukebox to broadcast for discovery on the local network, run this command:

sudo dnf install xmms2 avahi

When the installation is complete, run:

xmms2d &

This command executes the XMMS2 daemon and creates the configuration files in the user’s home directory. Some errors may appear, but that’s OK since you still need to configure the player. For now, press Enter to return to the shell prompt. Now run the command:

xmms2

The xmms2> prompt appears. XMMS2 is a sophisticated piece of software, and going through all of its commands is beyond the scope of this article. To learn more, type help at the prompt, or consult the man pages. To exit the prompt, type:

exit
Finally, reboot the computer.

Configuring XMMS2

Now comes the fun part of the project: tweaking the configuration files for XMMS2. You’ll also create a systemd service file to start the XMMS2 daemon on boot and open ports that allow client access.

First, you need the IP address of the jukebox. To determine that, run this command:

nmcli
Below is an example of results:

ens3: connected to ens3 "Red Hat Virtio network device"
ethernet (virtio_net), 52:54:00:8B:D5:A1, hw, mtu 1500
ip4 default inet4
inet6 fe80::34fa:a011:f20e:c0a2/64

Note the numbers beside inet4; that is the IP address of the jukebox, and it’s what you need to configure the XMMS2 server for client connectivity. Now run:

cat /proc/asound/cards

This command displays a list of audio hardware connected and recognized by the system. In the example from Part 1 you saw:

0 [HDMI]: HDA-Intel - HDA Intel HDMI   HDA Intel HDMI at 0xf731c000 irq 49 
1 [PCH]: HDA-Intel - HDA Intel PCH  HDA Intel PCH at 0xf7318000 irq 48

If you have a Linux-compatible USB DAC it is listed here. The examples in this article use PCH (or card1). Now type:

cat /proc/asound/card1/pcm0p/info

You should get something similar to the following:

card: 1
device: 0
subdevice: 0
stream: PLAYBACK
id: ALC3234 Analog
name: ALC3234 Analog
subdevice #0
class: 0
subclass: 0
subdevices_count: 1
subdevices_avail: 0

Note the numbers beside card and device.  You need this information for the next step when you configure ALSA to process the audio straight to the hardware.

XMMS2 configuration files

To open the XMMS2 configuration file, run:

nano ~/.config/xmms2/xmms2.conf

Find this line:

<section name="alsa">

The <property name="device"> entry is set to default. You want to change that to direct the audio straight to the hardware. Remove default and enter hw:x,y (where x is the card number and y is the device number).

Next, change the mixer property to Master, and mixer_dev to hw:x (again where x is the card number). Your file should look similar to the example below:

<section name="alsa">
     <property name="device">hw:1,0</property>
     <property name="mixer">Master</property>
     <property name="mixer_dev">hw:1</property>
     <property name="mixer_index">0</property>

Scroll down to <section name="clients">. The entry for the watch_dirs property is empty. Here, add the path to the directory that contains the music files. Remember to enter the directory path for your music.

<section name="clients">
     <section name="mlibupdater">
          <property name="watch_dirs">/home/sassam/Music/</property>

Next, configure <section name="core">. Look for the ipcsocket property, remove the unix:///tmp/xmms… entry, and enter a TCP socket address using port 6600 for your system:

<property name="ipcsocket">tcp://</property>

Remember to use your own IP address here. The section should look similar to the example below:

<section name="core">
     <property name="ipcsocket">tcp://</property>
     <property name="logtsfmt">%H:%M:%S </property>
     <property name="shutdownpath">/home/sassam/.config/xmms2/shutdown.d</property> 
     <property name="startuppath">/home/sassam/.config/xmms2/startup.d</property>

The last setting to configure is the output section. Change the plugin property from pulse to alsa.

<section name="output">
     <property name="buffersize">32768</property>
     <property name="flush_on_pause">1</property>
     <property name="plugin">alsa</property>

Be careful if you edit the buffersize property. If it’s set too low you may encounter playback issues. This example keeps the defaults, but feel free to experiment. Save the file and exit.

Creating the XMMS2 systemd service

Edit a new service file using sudo:

sudo nano /etc/systemd/system/xmms2d.service

In the file enter:

[Unit]
Description=XMMS2 Service

[Service]
ExecStart=/bin/bash -c "sleep 3; /usr/bin/xmms2d --yes-run-as-root --conf=/home/sassam/.config/xmms2/xmms2.conf"

[Install]
WantedBy=multi-user.target


Save the file and exit. To enable the service on boot, and in this session, enter:

sudo systemctl daemon-reload; sudo systemctl enable --now xmms2d.service

Now, open port 6600 in your firewall, to allow the XMMS2 client access to the system over your local area network:

sudo firewall-cmd --permanent --add-port=6600/tcp; sudo firewall-cmd --reload

Success! Your digital jukebox is fully configured. You can adjust the volume on the server side, for example with ALSA's alsamixer utility.


The XMMS2 Client

The client used is GXMMS2, available in the Fedora repository. GXMMS2 is a simple client with a small window that allows users to control audio playback on the XMMS2 server. To install GXMMS2 on your Fedora desktop, open the terminal and enter:

sudo dnf install gxmms2

When the installation is complete, open the GXMMS2 program to create the client configuration file. To edit the file, go back into the Terminal and enter:

nano ~/.config/xmms2/clients/gxmms2.conf

Here, set AutoReconnect to yes, and change the IPCPath to the socket path you entered in the server configuration file above (tcp://<server-ip>:6600).


Again, remember to use the IP address of your digital jukebox. Feel free to go through the configuration file and adjust the settings to your liking. Then save the file, exit the editor, and close the GXMMS2 window.

Re-open GXMMS2, click the Open playlist editor button, then click the MLib Browser tab. A list of your audio files appears. Double-click the artist, then double-click the album. GXMMS2 displays a list of songs. Highlight the songs you want to add to the playlist and click the Add button, or double-click the song. Click the Playlist tab, then double-click a song to start playing.

Testing for Bit-Perfect Playback

Now let’s check for bit-perfect playback. Go back to your server and at the shell prompt enter:

cat /proc/asound/card1/pcm0p/sub0/hw_params

Remember to use the card number you configured in the xmms2.conf file. For a song playing from a CD-quality source, information similar to the following is shown:

format: S16_LE
subformat: STD
channels: 2
rate: 44100 (44100/1)
period_size: 735
buffer_size: 22050

This shows the playback format is 16-bit, two-channel, with a sample rate of 44100 Hz. The software is not resampling: playback is identical to the audio file, so bit-perfect playback has been achieved. Remember, if you try to play an audio format beyond the resolution of your audio device, like a 24-bit FLAC file on a 16-bit DAC/soundcard, you may get unstable playback, or none at all. To play these files you can configure ALSA/XMMS2 for software mixing, or install and configure a sound server like PulseAudio or JACK to resample the audio. However, bit-perfect playback will be lost.
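As a small sketch, you can pull the rate out of hw_params output like the one above with standard shell tools and compare it against your file's sample rate (the sample text is hard-coded here for illustration):

```shell
# Extract the hardware sample rate from hw_params-style output.
# On a real system you would read: /proc/asound/card1/pcm0p/sub0/hw_params
hw_params="format: S16_LE
subformat: STD
channels: 2
rate: 44100 (44100/1)"

rate=$(echo "$hw_params" | awk '/^rate:/ {print $2}')
echo "$rate"   # 44100
```

If this number differs from your source file's sample rate, something in the chain is resampling.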

Congratulations! You have successfully created your own digital jukebox optimized for sound quality. What was once old is now new again thanks to Fedora and open source software.

This article touched upon some of the fundamentals of digital audio. Readers and audiophiles looking to refine their systems, and improve upon the information provided, can find more details from the sites below. Some of the referenced software may be outdated, but the concepts are still relevant.

  • ALSA Project
  • 24bit96 | high res audio and hd music
  • Dynobot’s Computer Audio – https://sites.google.com/site/computeraudioorg/setting-up-alsa
  • [PDF] Real-Time Audio Servers on BSD Unix Derivatives – https://jyx.jyu.fi/dspace/bitstream/handle/123456789/12485/URN_NBN_fi_jyu-2005243.pdf
  • XMMS2 Man Page – https://www.gsp.com/cgi-bin/man.cgi?section=1&topic=xmms2

Episode 74 - Facial recognition and physical security

Posted by Open Source Security Podcast on December 13, 2017 02:21 AM
Josh and Kurt talk about facial recognition, physical security, banking, and Amazon Alexa.


Show Notes

Fedora Women Day in Prishtina

Posted by Jona Azizaj on December 13, 2017 12:12 AM

We, the Fedora Diversity Team, were thinking where else we could help organize a Fedora Women Day. Of course, that Fedora Kosovo Community came in my mind and I thought to contact Ardian and Renata to see when we could organize a FWD at Prishtina Hackerspace. Since Renata and I had some exams during September […]

The post Fedora Women Day in Prishtina appeared first on Jona Azizaj.

Fedora 27 : Firefox and selinux intro .

Posted by mythcat on December 12, 2017 06:07 PM
Today I wrote a short summary of SELinux.
SELinux is a protection and security mechanism in Linux operating systems.
It is quite complex and requires a little guidance to learn.
The basic idea is a policy of labels and rules that closes security gaps.
Today's tutorial simply shows how you can change these rules.
First, check the status of SELinux with these commands:
#sestatus -b
#cat /etc/selinux/config
#ls -lZ /usr/bin/firefox
#chcon -v -t user_home_t /usr/bin/firefox
The last command changes the SELinux type of the firefox binary to user_home_t, the default label for all content in a user's home directory. Users are allowed to read, write and manage files with this label, while SELinux normally prevents confined applications from reading and writing such content.
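For reference, an SELinux label such as system_u:object_r:bin_t:s0 has four colon-separated fields (user:role:type:level); the chcon -t option used above changes only the third field, the type. A small shell sketch of reading the type out of a label string:

```shell
# Split an SELinux label into its fields; the third field is the type,
# which is what `chcon -t` changes.
label="system_u:object_r:bin_t:s0"
setype=$(echo "$label" | cut -d: -f3)
echo "$setype"   # bin_t
```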

Fedora 27 : About Cockpit linux tool.

Posted by mythcat on December 12, 2017 06:06 PM
About the Cockpit the official website tell us:
Cockpit makes Linux discoverable, allowing sysadmins to easily perform tasks such as starting containers, storage administration, network configuration, inspecting logs and so on.
If you use Fedora 27, this tool can be used very easily.
If your Fedora spin doesn't come with this tool, you can install it with this command:
#dnf -y install cockpit
First you need to follow these steps:
- starting Cockpit requires only a single command:
#systemctl start cockpit
- we’ll configure it to start on boot with:
#systemctl enable cockpit.socket
- you can check the status of Cockpit with:
#systemctl status cockpit
- the Cockpit tool runs on port 9090, so you’ll need to allow it through the firewall with this command:
#firewall-cmd --add-service=cockpit
- or simply open the port directly with:
#firewall-cmd --permanent --add-port=9090/tcp
- you now should reload the firewall for the rule to take effect:
#firewall-cmd --reload
The next step is testing: log into Cockpit from your localhost (or your server’s IP address) on port 9090 with your server’s root credentials.
Once you have logged in you’ll see the Dashboard web page containing information about the server itself and graphs showing CPU and memory usage as well as disk I/O and network traffic.
Let's see the Dashboard:
  • System shows information about your system;
  • Logs displays the server’s system and service logs. You can click on any entry for more detailed information, such as the process ID. 
  • Storage gives you a graphical look at disk reads and writes, and also allows you to view relevant logs. You can also set up and manage RAID devices and volume groups, and format, partition, and mount/unmount drives. 
  • Networking contains an overview of inbound and outbound traffic, logs and network interface information. You can also configure the network interfaces from this page. 
  • Containers allows you to manage your Docker containers. You can search for new containers, add or remove containers, start and stop them, and set runtime variables on this page. 
  • Accounts lets you add and manage users, set up and change passwords, and add and manage public SSH keys for each user. 
  • Services lists all services; clicking on any entry takes you to a detail page showing the service log and allowing you to start/stop, enable/disable, reload/isolate, or mask/unmask each service.
  • Terminal gives you a fully functional terminal, with tab completion, allowing you to perform any task you could perform through the web interface. It runs with the same privileges your login credentials would allow via SSH.
You can take a look at documentation for Cockpit to learn more about this tool.

Qubes OS 4.0rc3 and latest UEFI systems

Posted by Kushal Das on December 12, 2017 03:19 PM

Last week I received a new laptop, I am going to use it as my primary work station. The first step was to install Qubes OS 4.0rc3 on the system. It is a Thinkpad T470 with 32GB RAM and a SSD drive.

How to install Qubes on the latest UEFI systems?

A few weeks back, a patch was merged to the official Qubes documentation, which explains in clear steps how to create a bootable USB drive on a Fedora system using livecd-tools. Please follow the guide and create a USB drive which will work on these latest machines. Just simply using dd will not help.

First step after installing Qubes

I upgraded the dom0 to the current testing packages using the following commands.

$ sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing
$ sudo qubes-dom0-update qubes-template-fedora-26

I also installed the Fedora 26 template on my system using the second command above. One important point to remember is that Fedora 25 is going to be end of life today. So, better to use an updated version of the distribution :)

Another important thing happened in the last two weeks: I was in the Freedom of the Press Foundation office in San Francisco. That means not only did I manage to meet my amazing team, I also met many of my personal heroes on this trip. I may write a separate blog post about that later. But for now I can say that I managed to sit near Micah Lee for 2 weeks and learn a ton about various things, including his Qubes workflow. The following two things were the first changes I made to my installation (with his guidance) to make things work properly.

How to modify the copy-paste between domains shortcuts?

Generally Ctrl+Shift+c and Ctrl+Shift+v are used to copy-paste securely between different domains. But those are also the shortcuts to copy and paste from the terminal on most systems, so changing them to a different key combination is very helpful for muscle memory :)

Modify the following lines in the /etc/qubes/guid.conf file in dom0. I rebooted after that to make sure that I am using the new key combination.

secure_copy_sequence = "Mod-c";
secure_paste_sequence = "Mod-v";

The above configuration will modify the copy paste shortcuts to Windows+c and Windows+v in my keyboard layout.

Fixing the wireless driver issue in suspend/resume

I also found that if I suspended the system, after starting it again the wireless device was missing from the sys-net domain. Adding the following two modules to the /rw/config/suspend-module-blacklist file in the sys-net domain fixed that.


The official documentation has a section on the same.

You can follow my posts on Qubes OS here.

Fedora 27 Server classic release after all — and Modularity goes back to the drawing board

Posted by Fedora Magazine on December 12, 2017 02:52 PM

You may remember reading about plans for Fedora 27 Server. The working group decided not to release that at the same time as the general F27 release, and instead provided a beta of Fedora 27 Modular Server. Based on feedback from that beta, they decided to take a different approach, and the Modularity subproject is going back to the drawing board.

Fortunately, there is a contingency plan: Fedora’s release engineering team made a “classic” version of Fedora 27 Server — very similar to F26 Server, but with F27’s updated package set. The quality assurance team ran this version through validation testing, and it’s being released, so:

Quick Summary

  • You can now download Fedora 27 Server from the Get Fedora site. This is the “classic” Fedora Server, without Modularity.
  • The Modularity Working Group is going back to the drawing board. Plans are still in progress, but it will likely produce a separate package repository which will build on top of and coexist with the traditional Fedora operating systems.

Modularity Past and Future

Modularity has a very straightforward mission: to enable Fedora to deliver multiple versions of components on different lifecycles across multiple base OS releases. It includes some other ideas about improving packager and user experiences in the process, but that’s the basic thing. Every Linux user has some things they want to move quickly, and others they want to not worry about. Fedora wants to give you that choice.

The approach in last summer’s “Boltron” and the recent beta envisioned an entirely new distribution of Fedora software, with the base operating system itself composed as a module. This offers some interesting benefits — in particular, it keeps the build dependencies of a piece of software well-defined and well-contained. But it has a huge drawback: if some random piece of software isn’t contained in a module, it isn’t available on that edition of Fedora at all. Also, the definition files for modules were another layer of complication, and it became clear that this approach wouldn’t reach an acceptable level of available software for real use.

So, the Modularity Working Group and Server Working Group together decided, rather than offer users and early adopters another iteration down that path, to release the traditional Fedora 27 Server you can find above and take a different approach. The teams are still working out what exactly that will look like, but the most promising idea involves adding an entirely separate package repository which can be layered on top of traditional Fedora, rather than building a new modular base operating system. This will make it easy for users to opt in when they want to, and greatly reduces the complication for packagers.

“First” is one of the core foundations of the Fedora Project. At the leading edge of innovation, every step Fedora takes advances the state of the art, even when it’s not directly successful. And, if every try succeeds, Fedora’s not trying hard enough. Sometimes experiments produce negative results. That’s okay — the project learns even when trying a path that doesn’t work out, and it iterates to something better. That process is happening now, and if you’re interested, please join the conversation on the devel mailing list or watch for updates on the Fedora Community Blog in the Modularity Category.

Fedora Women Day in Lima, Peru

Posted by Fedora Community Blog on December 12, 2017 08:30 AM

Fedora Women Day in Lima, Peru

On September 30, 2017, we celebrated the Fedora Women Day in Lima, Peru at PUCP from 8:00 a.m. to 2:00 p.m.

Acknowledged with Thanks

I’ve just wrapped up and I wanted to say thanks for the support throughout the process of securing a nice venue. Thanks to the staff of the Pontificia Universidad Catolica del Peru: Giohanny, Felipe Solari, Corrado and Walter. Congrats to the Fedora Diversity team for its initiative to foster more women's involvement in Linux. In addition, thanks to Chhavi for help with the design and to Bee for help in planning the event. These were our FWD speakers:

We had three preparatory sessions with the speakers and members of our local Linux team. In the following picture you can see our work behind the scenes. I want to highlight the support and help of Solanch Ccasa in this event:

The core day

I started my talk by giving a brief history of Fedora, from 1985, when GNU was formed, until 2017 with the Fedora 26 release. I also showed the help I received from other Fedora women such as Marina, Robyn, Bee, Chhavi and Amita when I had technical and administrative issues. I explained the “Google Summer of Code” program, how to join the Fedora community, its philosophy and related topics. It took me 20 minutes.

Other women's talks and workshops followed as planned: DNF, Git, Fedora loves Python, Linux commands and D3.

It was great to see many women interested in the Linux world. In more than seven years of organising Linux-related events in Lima, this was the first time I saw several women using Fedora with GNOME at the same time.

We shared a special FWD cake, and posted the pros and cons of why you use Fedora (or not) on a Fedora board.

Special thanks to the guys who helped us during this event: Martin Vuelta, Rodrigo Lindo and Rommel Zavaleta.

The post Fedora Women Day in Lima, Peru appeared first on Fedora Community Blog.

End of life for Fedora 25

Posted by Charles-Antoine Couret on December 12, 2017 12:49 AM

This Tuesday, December 12, 2017, Fedora 25 was declared end of life.

What does this mean?

One month after the release of Fedora version n (here Fedora 27), version n-2 (so Fedora 25) is declared end of life. This month gives users time to upgrade, which means that on average a release is officially maintained for 13 months.

In effect, the end of life of a release means that it will no longer receive updates and no more bugs will be fixed. For security reasons, given unpatched vulnerabilities, users of Fedora 25 and earlier are strongly advised to upgrade to Fedora 27 or 26.

What should you do?

If you are affected, you need to upgrade your systems. You can download more recent CD or USB images.

It is also possible to upgrade without reinstalling via DNF or GNOME Software.

GNOME Software should also have notified you with a pop-up about the availability of Fedora 26 or 27. Feel free to start the upgrade that way.

#PeruRumboGSoC2018 – Session 4

Posted by Julita Inca Chiroque on December 11, 2017 11:21 PM

We celebrated yesterday another session of the local challenge 2017-2 “PeruRumboGSoC2018”. It was held at the Centro Cultural Pedro Paulet of FIEE UNI. GTK on C was explained during the first two hours of the morning, based on the window* exercises from my repo, to handle widgets such as windows, labels and buttons. Before the scheduled lunch, we were able to program a language selector using a grid with GTK on C. These are some of the students' Git repos: Fiorella, Cris, Alex, Johan Diego & Giohanny. We shared a delicious Pollo a la Brasa, and a tasty Inca Kola to drink during our lunch.

Martin Vuelta helped us make our mini-application work: clicking a single language or multiple languages in our language selector opens useful links to learn those programming languages. We needed to install the webkit4 packages on Fedora 27. Thank you so much Martin for supporting the group with your expertise and good sense of humor! We are going to have three more sessions this week to finish the program.

Filed under: Education, Events, FEDORA, GNOME, τεχνολογια :: Technology, Programming Tagged: #PeruRumboGSoC2018, apply GSoC, fedora, Fedora + GNOME community, Fedora Lima, gnome 3, GSoC 2018, GSoC Peru Preparation, Julita Inca, Julita Inca Chiroque, Lima, Peru Rumbo al GSoC 2018

Re: High-level Problems with Git and How to Fix Them

Posted by Matěj Cepl on December 11, 2017 11:00 PM

(originally written as a comment on the blogpost by Gregory Szorc)

Some comments:

  1. I like your ideas about the improved documentation and better sounding options (yes, git commit --cached is stupid). Did you file them as RFE to git@vger.kernel.org? I don’t think they are that controversial …

Rawhide notes from the trail, the early December issue

Posted by Kevin Fenzi on December 11, 2017 07:37 PM

Once again, things have been real busy and I haven’t kept up with sending my rawhide notes, but I will try and do better!

Astute rawhide trail travelers would have noted that there was a distinct lack of updates to rawhide recently. There were composes up to December 4th, and then nothing until late yesterday (December 11th). Here’s a list of the reasons all those composes failed:

* As part of a big infrastructure move (Moving every server in our main datacenter to a new area) we also applied updates all around. The December 5th rawhide failed because the new koji package had a bug with python3 and ‘runroot’ plugin we use for koji. This has since been fixed upstream and a fixed version applied to infrastructure machines.

* Another few runs failed because we were moving the signing infrastructure and didn’t have everything back up and working. Recent changes to the rawhide compose require signing because it signs ostree commits that are made as part of the compose (unlike packages, which are signed before the compose).

* A firefox update came out that did not build on armv7. This was ok until a new hunspell update came out, a bunch of things built against it and the existing firefox had a broken dependency on the old version. Ideally we could have untagged the hunspell update and all the things that had already built against it, but that was a pile of packages to look for. In addition, the firefox update was a security one (of course). So, I filed a bug on it, excluded armv7 (just until the compile is fixed) and did a new build. This was only part of the rawhide fix however, as the Xfce armv7 image is release blocking and it no longer had a firefox. So, I dropped firefox from it for now (until the compile is fixed).

* While dealing with the firefox thing, I merged some more kickstart PR’s folks had submitted. Sadly, one of them had a syntax error in the KDE kickstart (which is release blocking), causing another compose to fail.

So, we are back on track for now. Hopefully firefox will get fixed soon so we can revert the hacks. I’ll keep watching things over the holidays.

Bodhi 3.1.0 released

Posted by Bodhi on December 11, 2017 05:05 PM

Special instructions

  • The Alembic configuration file has changed to use the Python path of the migrations.
    In order to run the new migrations, you should ensure your alembic.ini has
    script_location = bodhi:server/migrations.
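As a concrete sketch, the relevant fragment of alembic.ini would then read (section name as in a stock Alembic configuration):

```ini
[alembic]
# Python path to Bodhi's migrations, as described in the release notes
script_location = bodhi:server/migrations
```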

Dependency changes

  • The client formally depends on iniparse now. It needed this before but the dependency was
    undocumented (ddf47eb).
  • Bodhi no longer uses or requires webhelpers. RSS feeds are now generated by feedgen, a new
    required dependency.
  • Bodhi no longer uses or requires bunch.

Features

  • The CLI now prints a helpful hint about how to use koji wait-repo when creating or editing a
    buildroot override, or when a query for overrides returns exactly one result (#1376).
  • Bodhi now uses connection pooling when making API requests to other services (#1753).
  • The bindings now conditionally import dnf (#1812).
  • It is now possible to query for Releases by a list of primary keys, by using the querystring
    ids with the releases/ API.
  • Builds now serialize their release_id field.
  • It is now possible to configure a maximum number of mash threads that Bodhi will run at once,
    which is handy if the new Pungi masher has been mean to your NAS. There is a new
    max_concurrent_mashes setting in production.ini, which defaults to 2.
  • There is now a man page for bodhi-clean-old-mashes.
  • The documentation was reorganized by type of reader (14e81a8).
  • The documentation now uses the Alabaster theme (f15351e).
  • The CLI now has a --arch flag that can be used when downloading updates to specify which
    architecture is desired (6538c9e).
  • Bodhi's documentation now includes an administration section which includes documentation
    on its various settings (310f56d).


Bugs

  • Bodhi now uses the correct comment on critical path updates regarding how many days are required
    in testing (#1361).
  • All home page update types now have mouseover titles (#1620).
  • e-mail subjects again include the version of the updates (#1635).
  • The bindings will re-attempt authentication upon captcha failures (#1787).
  • The formatting is fixed on mobile for the edit/create update form (#1791).
  • The "Push to Stable" button is now rendered in the web UI on batched updates (#1907).
  • Do not fail the mash if a changelog is malformed (#1989).
  • bodhi-dequeue-stable no longer dies if it encounters updates that can't be pushed
    stable (#2004).
  • Unreachable RSS Accept-header based redirects were fixed (6f3db0c).
  • Fixed an unsafe default in bodhi.server.util.call_api() (9461b3a).
  • Bodhi now distinguishes between testing and stable when asking Greenwave for gating decisions.
  • The CLI now renders the correct URL for updates without aliases (caaa0e6e).

Development improvements

  • The database migrations are now shipped as part of the Python distribution (#1777).
  • The developer docs pertaining to using virtualenvs have been corrected and improved.
  • The test_utils.py tests now use the BaseTestCase, which allows them to pass when run by
    themselves (#1817).
  • An obsolete mash check for symlinks was removed (#1819).
  • A mock was moved inside of a test to avoid inter-test dependencies (#1848).
  • Bodhi is now compliant with flake8's E722 check (#1927).
  • The JJB YAML file is now tested to ensure it is valid YAML (#1934).
  • Some code has been prepared for Python 3 compatibility (d776356).
  • Developers are now required to sign the DCO (34d0ceb).
  • There is now formal documentation on how to submit patches to Bodhi (bb20a0e).
  • Bodhi is now tested by Fedora containers in the CentOS CI environment (36d603f).
  • Bodhi is now tested against dependencies from PyPI (1e8fb65).
  • The development.ini.example file has been reduced to a minimal form, which means we no longer
    need to document the settings in two places (2b7dc4e).
  • Bodhi now runs CI tests for different PRs in parallel (6427309).
  • Vagrantfile.example has been moved to devel/ for tidiness (21ff2e5).
  • It is now easier to replicate the CI environment locally by using the devel/run_tests.sh script.
  • Many more docblocks have been written across the codebase.
  • Line test coverage is now at 93%.

Release contributors

The following developers contributed to Bodhi 3.1.0:

  • Alena Volkova
  • Aman Sharma
  • Caleigh Runge-Hottman
  • Dusty Mabe
  • František Zatloukal
  • Jeremy Cline
  • Ken Dreyer
  • Lumir Balhar
  • Martin Curlej
  • Patrick Uiterwijk
  • Pierre-Yves Chibon
  • Ralph Bean
  • Ryan Lerch
  • Randy Barlow

My week #49 in Fedora

Posted by Fedora Community Blog on December 11, 2017 04:27 PM

Lets summarize some of events from the past week or two:

F27 Server release

On Thursday, December 7th, 2017 we held the Go/No-Go meeting for the F27 Server edition. During the meeting we ran into some infrastructure issues due to the datacenter move, so even though the meeting minutes were not completely recorded, we successfully finished the meeting. At the end of the meeting we agreed to release the Fedora 27 RC 1.6 compose as the Fedora 27 release. The release date is set for December 12th, 2017.

Elections

On Tuesday, December 5th, 2017 we started the voting period for Mindshare and Council. We also postponed the start of the FESCo voting period for three days due to the low number of nominees. This went mostly fine; unfortunately, we ran into an issue with the mandatory interviews required by a new election rule, where the majority of nominees had not completed their interview by the deadline. After a brief chat with other Council members we agreed to run the elections even though the new rule was not fulfilled. As this decision caused a lot of tension in the community, it was finally decided to cancel these elections and organize a new run with clarified rules. To be as transparent as possible there is Council ticket #156, where the Council wants to agree on how to deal with interviews as part of elections. FESCo has also opened ticket #1800 to discuss the election topic.

The new run of elections is preliminarily planned for January 2018; the exact schedule is going to be published during this week (#50).

F27 Release Engineering Retrospective

On Monday, December 4th, Kate organized the F27 Release Engineering Retrospective meeting. During the meeting people provided the pros and cons they experienced during the F27 release, and then we had a discussion on these topics. There is a video recording available from the meeting as well as the notes we were discussing. For me, personally, this was a very useful meeting, allowing me to understand some issues more deeply as well as see them from a different point of view.

Fedora Release cycle & Changes Policy

FESCo has approved an update of the Fedora Release cycle & Changes Policy. The update consists mostly of changes based on the following:

People who are dealing with scheduling are requested to check these updates, so they can reflect these changes in their own work.

Mandatory Release notes

Collecting release notes for a release from Change owners is a long-term issue. To help our Documentation team with this, FESCo has approved mandatory release notes for all Changes going into a release. If you are a Change owner, please make sure you are aware of this decision. And of course, I am looking forward to working with Change owners to get the release notes ready on time.

Datacenter move

I should also mention the big datacenter move which happened during week #49 (from December 4th to December 8th). Even though I do not know much about it, all the systems seem to be working fine and outages during the past week were minimal, so I guess this is a big success. Thank you, Infra team, for the work you have done.


For more information about what is going on in the Fedora community, please subscribe to one of the mailing lists Fedora people are contributing to.


The post My week #49 in Fedora appeared first on Fedora Community Blog.

CSR devices now supported in fwupd

Posted by Richard Hughes on December 11, 2017 12:42 PM

On Friday I added support for yet another variant of DFU. This variant is called “driverless DFU” and is used only by BlueCore chips from Cambridge Silicon Radio (now owned by Qualcomm). The driverless just means that it’s DFU like, and routed over HID, but it’s otherwise an unremarkable protocol. CSR is a huge ODM that makes most of the Bluetooth audio chips in vendor hardware. The hardware vendor can enable or disable features on the CSR microcontroller depending on licensing options (for instance echo cancellation), and there’s even a little virtual machine to do simple vendor-specific things. All the CSR chips are updatable in-field, and most vendors issue updates to fix sound quality issues or to add support for new protocols or devices.

The BlueCore CSR chips are used everywhere. If you have a “wireless” speaker or headphones that uses Bluetooth there is a high probability that it’s using a CSR chip inside. This makes the addition of CSR support into fwupd a big deal to access a lot of vendors. It’s a lot easier to say “just upload firmware” rather than “you have to write code” so I think it’s useful to have done this work.

The vendor working with me on this feature has been the awesome AIAIAI who make some very nice modular headphones. A few minutes ago we uploaded the H05 v1.5 firmware to the LVFS testing stream and v1.6 will be coming soon with even more bug fixes. To update the AIAIAI H05 firmware you just need to connect the USB cable and press and hold the top and bottom buttons on the headband until the LED goes out. You can then update the firmware using fwupdmgr update or just using GNOME Software. The big caveat is that you have to be running fwupd >= 1.0.3 which isn’t scheduled to be released until after Christmas.

I’ve contacted some more vendors I suspect are using the CSR chips. These include:

  • Jarre Technologies
  • RIVA Audio
  • Avantree
  • Zebra
  • Fugoo
  • Bowers & Wilkins
  • Plantronics
  • BeoPlay
  • JBL

If you know of any other “wireless speaker” companies that have issued at least one firmware update to users, please let me know in a comment here or in an email. I will follow up on all suggestions and put the status on the Naughty&Nice vendor list, so please check that before suggesting a company. It would also be really useful to know the contact details (e.g. the web-form URL, or the email address) and also the model name of the device that might be updatable, although I’m happy to google it myself if required. Thanks as always to Red Hat for allowing me to work on this stuff.

Heroes of Fedora (HoF) – F27 Final

Posted by Fedora Community Blog on December 11, 2017 08:30 AM

Hello and welcome to this issue of Heroes of Fedora focused on Fedora 27 Final release! The purpose of Heroes of Fedora is to provide a summation of testing activity on each milestone release of Fedora. So, without further ado, let’s get started!

FPL Badge

Updates Testing

Test period: Fedora 27 (2017-10-17 – 2017-11-14)
Testers: 100
Comments1: 436

Name Updates commented
pwalter 63
besser82 60
cserpentis 57
filiperosset 23
lnie 17
piotrdrag 16
sassam 15
anonymous 12
adamwill 10
ankursinha 8
g6avk 7
kparal 7
alciregi 7
pwhalen 6
bluepencil 5
nonamedotc 5
mastaiza 4
renault 4
greg18 4
dustymabe 4
frantisekz 3
churchyard 3
dandim 3
nb 3
raveit65 2
fgrose 2
rdieter 2
pbrobinson 2
earthwalker 2
martinpitt 2
vedranm 2
mzink 2
mati865 2
haghighi 2
leigh123linux 2
sumantrom 2
lupinix 2
satellit 2
…and also 62 other reporters who created less than 2 reports each, but 62 reports combined!

1 If a person provides multiple comments to a single update, it is considered as a single comment. Karma value is not taken into account.

Validation Testing

Test period: Fedora 27 (2017-10-17 – 2017-11-14)
Testers: 20
Reports: 434
Unique referenced bugs: 18

Name Reports submitted Referenced bugs1
pwhalen 167 1503758 1505896 1505903 (3)
frantisekz 39 1502816 (1)
lbrabec 37
lnie 35 1508841 1509772,1432627,1432754 (2)
pschindl 24 1508808 (1)
kparal 24 1508794 (1)
sumantrom 21
alciregi 19 1500834 1502915 1503496 (3)
coremodule 17
tenk 8
siddharthvipul1 8
adamwill 7 1508706 1508735 (2)
dustymabe 6
kevin 5
mattia 5 1506979 (1)
satellit 5 1490668 1502915 (2)
dominicpg 3
konradr 2 1484908 1486002 1504241 (3)
skamath 1
puiterwijk 1

1 This is a list of bug reports referenced in test results. The bug itself may not be created by the same person.

Bug Reports

Test period: Fedora 27 (2017-10-17 – 2017-11-14)
Reporters: 347
New reports: 676

Name Reports submitted1 Excess reports2 Accepted blockers3
mastaiza 47 2 (4%) 0
lnie 14 0 (0%) 0
Stephen Gallagher 14 1 (7%) 0
Paul Whalen 12 2 (16%) 0
xzj8b3 11 0 (0%) 0
Adam Williamson 10 0 (0%) 2
Davide Repetto 9 0 (0%) 0
Karel Srot 8 0 (0%) 0
Mikhail 8 1 (12%) 0
Alessio 7 3 (42%) 2
Keefer Rourke 7 0 (0%) 0
Luís Silva 7 0 (0%) 0
Peter 7 0 (0%) 0
Joachim Frieben 6 0 (0%) 0
Vedran Miletić 6 0 (0%) 0
Kamil Páral 5 0 (0%) 1
Daniel 5 0 (0%) 0
Ed Marshall 5 0 (0%) 0
Jacques Bonet 5 2 (40%) 0
Juanbi 5 0 (0%) 0
lennart_reuther at web.de 5 0 (0%) 0
Leslie Satenstein 5 0 (0%) 0
Stephen 5 1 (20%) 0
wibrown at redhat.com 5 0 (0%) 0
Christian Stadelmann 4 0 (0%) 0
František Zatloukal 4 0 (0%) 0
Juan Orti 4 1 (25%) 0
Maksim 4 0 (0%) 0
Michal Schmidt 4 0 (0%) 0
Michał 4 1 (25%) 0
msmafra at gmail.com 4 0 (0%) 0
Zbigniew Jędrzejewski-Szmek 4 1 (25%) 0
Alexey Matveichev 3 0 (0%) 0
Anass Ahmed 3 0 (0%) 0
Andrew Gunnerson 3 0 (0%) 0
christian gudino 3 0 (0%) 0
Gergely Polonkai 3 0 (0%) 0
Gwendal 3 0 (0%) 0
ilya 3 0 (0%) 0
jpg at rosario.com 3 0 (0%) 0
Kenneth Topp 3 0 (0%) 0
Leonid Podolny 3 0 (0%) 0
Lukas Slebodnik 3 0 (0%) 0
Micah Abbott 3 0 (0%) 0
Niki Guldbrand 3 0 (0%) 0
oliver.zemann at gmail.com 3 0 (0%) 0
peter 3 0 (0%) 0
Randy Barlow 3 1 (33%) 0
Ryan Gillette 3 0 (0%) 0
Steven Haigh 3 0 (0%) 0
sumantro 3 0 (0%) 0
Terje Røsten 3

The post Heroes of Fedora (HoF) – F27 Final appeared first on Fedora Community Blog.

4 cool new projects to try in COPR for December

Posted by Fedora Magazine on December 11, 2017 08:00 AM

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.


Thonny is an IDE for learning Python. It includes a simple debugger with many interesting features. It can show how expressions are evaluated, and where each subexpression is replaced by its value, step-by-step. Thonny opens a new window every time a function is called, with its own set of local variables. It highlights basic syntax errors and shows the scope of variables. Thonny comes with built-in Python 3.6.

Installation instructions

The repo currently provides Thonny for Fedora 25, 26, 27, and Rawhide. To install Thonny, use these commands:

sudo dnf copr enable aivarannamaa/thonny
sudo dnf install thonny


Netactview is a network connection viewer with a graphical interface. It’s similar in features to netstat. Netactview can collect and display data asynchronously. It also allows you to sort and filter connections and vary the refresh rate. Netactview can save lists of connections to a text or CSV file, and its interface can be customized.

Installation instructions

The repo currently provides Netactview for Fedora 25, 26, 27, and Rawhide. To install Netactview, use these commands:

sudo dnf copr enable szpak/netactview
sudo dnf install netactview


Newsboat is an RSS/Atom news feed reader for a terminal. It offers numerous configuration options. It can categorize feeds using tags and automatically remove unwanted articles using killfiles. Furthermore, it can download and save podcasts. Newsboat is a fork of Newsbeuter, which is available from the Fedora repository. However, Newsbeuter currently isn’t actively maintained.

Installation instructions

The repo currently provides Newsboat for Fedora 26 and 27. To install Newsboat, use these commands:

sudo dnf copr enable fszymanski/newsboat
sudo dnf install newsboat


Cool-retro-term (CRT) is a heavily stylized terminal emulator that replicates the look and feel of old cathode tube screens. It offers many customization options, the ability to create your own profiles, and comes with pre-configured templates as well. Cool-retro-term uses the Konsole engine and Qt 5.2.

Installation instructions

The repo currently provides cool-retro-term for Fedora 25, 26, 27 and Rawhide. To install cool-retro-term, use these commands:

sudo dnf copr enable kefah/cool-retro-term
sudo dnf install cool-retro-term

Fedora Women Day in Tirana

Posted by Jona Azizaj on December 10, 2017 11:39 PM

What is Fedora Women Day Fedora Women Day (FWD) is a worldwide series of events initiated by the Fedora Diversity Team. The events are dedicated to female contributors of the Fedora Project. During this day of celebration, local communities gather to present the accomplishments of women in the Fedora Project and thank them. FWD is […]

The post Fedora Women Day in Tirana appeared first on Jona Azizaj.

The ongoing Fedora project elections are delayed

Posted by Charles-Antoine Couret on December 09, 2017 11:50 PM

Earlier this week I announced the opening of voting for several bodies of the Fedora project: the Council, FESCo, and FAmSCo.

First of all, I had indeed forgotten that it was decided to replace FAmSCo with Mindshare, which is not a simple renaming, since this body has representatives from more of the project's community-facing teams than just the ambassadors. But that is not the subject of this post.

The elections mentioned above were postponed on December 8 to a later date, apparently early January 2018. The reason for this delay is the decision to somewhat reform how elections are organized, notably to publish an interview with each candidate on the day the election opens. However, some candidates were not able to post their answers in time, whether for lack of time or because of technical difficulties on the Fedora infrastructure side.

For the sake of fairness and consistency, all the elections were postponed by decision of the Fedora Council.

Good luck to the candidates and the organizers, and here's hoping the next election goes off without a hitch!

Introducing simple-koji-ci

Posted by pingou on December 08, 2017 03:39 PM

simple-koji-ci is a small fedmsg-based service that just got deployed in the Fedora Infrastructure.

It aims at doing something really simple: for each pull-request opened in pagure on dist-git, kick off a scratch-build in koji and report the outcome of this build to the pull-request.

This way, when someone opens a pull-request against a package that you are maintaining you can quickly see if that change would build (at least at the time the pull-request was opened).

This service is currently really simple and straightforward, dumb in many ways, and still missing some desired features, such as:

  • kicking off a new scratch build if the PR is rebased or updated
  • allowing the package maintainer to retrigger the build manually

but it is a start and we will work on improving it :)
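The basic decision the service makes — "is this fedmsg message a dist-git pull-request event worth a scratch build?" — can be sketched as below. Everything here (the topic suffix, the message layout, the `rpms` namespace check) is an assumption for illustration based on pagure's message conventions, not taken from the actual simple-koji-ci source:

```python
def is_distgit_pr_event(topic, msg):
    """Decide whether a fedmsg message is a pull-request opened against
    dist-git, and therefore should trigger a koji scratch build.
    Topic suffix and message layout are hypothetical for illustration."""
    # Only react to newly opened pull-requests
    if not topic.endswith("pagure.pull-request.new"):
        return False
    # Only dist-git packages live under the "rpms" namespace
    project = msg.get("pullrequest", {}).get("project", {})
    return project.get("namespace") == "rpms"

# A message resembling a dist-git pull-request being opened
msg = {"pullrequest": {"project": {"namespace": "rpms", "name": "curl"}}}
should_build = is_distgit_pr_event(
    "org.fedoraproject.prod.pagure.pull-request.new", msg)
```

A real service would then submit the scratch build via the koji client API and post the result back as a pull-request flag, but the filtering above is the gating step.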


Happy packaging!

PS: live example

How to configure MTU for the Docker network

Posted by Alexander Todorov on December 08, 2017 02:02 PM

On one of my Jenkins slaves I've been experiencing problems when downloading files from the network, in particular with cabal update, which fetches data from hackage.haskell.org. As suggested by David Roble, the problem and solution lie in the MTU configured for the default docker0 interface!

By default docker0 had an MTU of 1500, which needs to be lowered to match the host eth0 MTU of 1400! To configure this before the docker daemon is started, place any non-default settings in /etc/docker/daemon.json! For more information head to https://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/.
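For example, a minimal /etc/docker/daemon.json that lowers the docker0 MTU might look like this (1400 is just the host MTU from this post; use whatever your own host interface reports):

```json
{
    "mtu": 1400
}
```

Restart the docker daemon after changing the file so the setting takes effect.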

Thanks for reading and happy testing!

PHP version 7.0.27RC1 and 7.1.13RC1

Posted by Remi Collet on December 08, 2017 07:14 AM

Release Candidate versions are available in remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, perfect solution for such tests (for x86_64 only), and also as base packages.

RPM of PHP version 7.1.13RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 26-27 or remi-php71-test repository for Fedora 24-25 and Enterprise Linux.

RPM of PHP version 7.0.27RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 25 or remi-php70-test repository for Fedora 24 and Enterprise Linux.

PHP version 5.6 is now in security-only mode, so no more RCs will be released.

PHP version 7.2.1RC1 is planned for next week; stable versions are planned for January 4th.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.12RC1 is also available in Fedora 27 and version 7.2.0RC6 in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.4.

The RC version is usually the same as the final version (no changes accepted after an RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

All systems go

Posted by Fedora Infrastructure Status on December 08, 2017 03:04 AM
New status good: Everything seems to be working. for services: Ask Fedora, Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Darkserver, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, FedoraHosted.org Services, Fedora pastebin service, FreeMedia, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Fedora People, Package Database, Package maintainers git repositories, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

Fedora 27 : Testing Swift with Fedora linux .

Posted by mythcat on December 07, 2017 09:00 PM
Today I tested a simple installation of this package: dnf install swift.
The install comes with all the additional packages required to run it.
This is an application ...
At first I thought they had implemented the programming language from Apple.
Take a look at this screenshot:

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on December 07, 2017 02:37 PM
New status scheduled: Scheduled maintenance in progress, see link on top for info for services: Ask Fedora, Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Darkserver, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, FedoraHosted.org Services, Fedora pastebin service, FreeMedia, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Fedora People, Package Database, Package maintainers git repositories, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

Full coverage of libvirt XML schemas achieved in libvirt-go-xml

Posted by Daniel Berrange on December 07, 2017 02:14 PM

In recent times I have been aggressively working to expand the coverage of libvirt XML schemas in the libvirt-go-xml project. Today this work has finally come to a conclusion, when I achieved what I believe to be effectively 100% coverage of all of the libvirt XML schemas. More on this later, but first some background on Go and XML….

For those who aren’t familiar with Go, the core library’s encoding/xml module provides a very easy way to consume and produce XML documents in Go code. You simply define a set of struct types and annotate their fields to indicate what elements & attributes each should map to. For example, given the Go structs:

type Person struct {
    XMLName xml.Name `xml:"person"`
    Name    string   `xml:"name,attr"`
    Age     string   `xml:"age,attr"`
    Home    *Address `xml:"home"`
    Office  *Address `xml:"office"`
}

type Address struct {
    Street string `xml:"street"`
    City   string `xml:"city"`
}

You can parse/format XML documents looking like

<person name="Joe Blogs" age="24">
    <home>
        <street>Some where</street><city>London</city>
    </home>
    <office>
        <street>Some where else</street><city>London</city>
    </office>
</person>

Other programming languages I’ve used required a great deal more work when dealing with XML. For parsing, there’s typically a choice between an XML stream based parser where you have to react to tokens as they’re parsed and stuff them into structs, or a DOM object hierarchy from which you then have to pull data out into your structs. For outputting XML, apps either build up a DOM object hierarchy again, or dynamically format the XML document incrementally. Whichever approach is taken, it generally involves writing a lot of tedious & error prone boilerplate code. In most cases, the Go encoding/xml module eliminates all the boilerplate code, only requiring the data type definitions. This really makes dealing with XML a much more enjoyable experience, because you effectively don’t deal with XML at all! There are some exceptions to this though, as the simple annotations can’t capture every nuance of many XML documents. For example, integer values are always parsed & formatted in base 10, so extra work is needed for base 16. There’s also no concept of unions in Go, or the XML annotations. In these edge cases custom marshalling / unmarshalling methods need to be written. BTW, this approach to XML is also taken for other serialization formats including JSON and YAML too, with one struct field able to have many annotations so it can be serialized to a range of formats.

Back to the point of the blog post, when I first started writing Go code using libvirt it was immediately obvious that everyone using libvirt from Go would end up re-inventing the wheel for XML handling. Thus about 1 year ago, I created the libvirt-go-xml project whose goal is to define a set of structs that can handle documents in every libvirt public XML schema. Initially the level of coverage was fairly light, and over the past year 18 different contributors have sent patches to expand the XML coverage in areas that their respective applications touched. It was clear, however, that taking an incremental approach would mean that libvirt-go-xml is forever trailing what libvirt itself supports. It needed an aggressive push to achieve 100% coverage of the XML schemas, or as near as practically identifiable.

Alongside each set of structs we had also been writing unit tests with a set of structs populated with data, and a corresponding expected XML document. The idea for writing the tests was that the author would copy a snippet of XML from a known good source, and then populate the structs that would generate this XML. In retrospect this was not a scalable approach, because there is an enormous range of XML documents that libvirt supports. A further complexity is that Go doesn’t generate XML documents in exactly the same manner as libvirt. For example, it never generates self-closing tags, instead always outputting a full opening & closing pair. This is semantically equivalent, but makes a plain string comparison of two XML documents impractical in the general case.

Considering the need to expand the XML coverage, and provide a more scalable testing approach, I decided to change approach. The libvirt.git tests/ directory currently contains 2739 XML documents that are used to validate libvirt’s own native XML parsing & formatting code. There is no better data set to use for validating the libvirt-go-xml coverage than this. Thus I decided to apply a round-trip testing methodology. The libvirt-go-xml code would be used to parse the sample XML document from libvirt.git, and then immediately serialize them back into a new XML document. Both the original and new XML documents would then be parsed generically to form a DOM hierarchy which can be compared for equivalence. Any place where documents differ would cause the test to fail and print details of where the problem is. For example:

$ go test -tags xmlroundtrip
--- FAIL: TestRoundTrip (1.01s)
	xml_test.go:384: testdata/libvirt/tests/vircaps2xmldata/vircaps-aarch64-basic.xml: \
            /capabilities[0]/host[0]/topology[0]/cells[0]/cell[0]/pages[0]: \
            element in expected XML missing in actual XML

This shows the filename that failed to correctly roundtrip, and the position within the XML tree that didn’t match. Here the NUMA cell topology has a ‘<pages>‘  element expected but not present in the newly generated XML. Now it was simply a matter of running the roundtrip test over & over & over & over & over & over & over……….& over & over & over, adding structs / fields for each omission that the test identified.
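The DOM-level equivalence check at the heart of this round-trip test can be illustrated with a small sketch (in Python rather than Go, purely for brevity; the actual test lives in libvirt-go-xml). It shows why a string comparison is impractical: a self-closing tag and an explicit open/close pair differ as text but are semantically equal:

```python
import xml.etree.ElementTree as ET

def xml_equal(a, b, path=""):
    """Recursively compare two parsed XML nodes for semantic equality,
    ignoring formatting details such as self-closing tags."""
    if a.tag != b.tag:
        return False, f"{path}/{a.tag}: tag mismatch"
    if a.attrib != b.attrib:
        return False, f"{path}/{a.tag}: attribute mismatch"
    if (a.text or "").strip() != (b.text or "").strip():
        return False, f"{path}/{a.tag}: text mismatch"
    if len(a) != len(b):
        return False, f"{path}/{a.tag}: child count mismatch"
    for i, (ca, cb) in enumerate(zip(a, b)):
        ok, why = xml_equal(ca, cb, f"{path}/{a.tag}[{i}]")
        if not ok:
            return False, why
    return True, ""

# Self-closing vs explicit open/close: different strings, same document
doc1 = "<caps><pages size='4'/></caps>"
doc2 = "<caps><pages size='4'></pages></caps>"
ok, _ = xml_equal(ET.fromstring(doc1), ET.fromstring(doc2))
```

A failed comparison reports the path where the trees diverge, which is exactly the kind of diagnostic shown in the test output above.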

After doing this for some time, libvirt-go-xml now has 586 structs defined containing 1816 fields, and has certified 100% coverage of all libvirt public XML schemas. Of course when I say 100% coverage, this is probably a lie, as I’m blindly assuming that the libvirt.git test suite has 100% coverage of all its own XML schemas. This is certainly a goal, but I’m confident there are cases where libvirt itself is missing test coverage. So if any omissions are identified in libvirt-go-xml, these are likely omissions in libvirt’s own testing.

On top of this, the XML roundtrip test is set to run in the libvirt jenkins and travis CI systems, so as libvirt extends its XML schemas, we’ll get build failures in libvirt-go-xml and thus know to add support there to keep up.

In expanding the coverage of XML schemas, a number of non-trivial changes were made to existing structs defined by libvirt-go-xml. These were mostly in places where we have to handle a union concept defined by libvirt. Typically with libvirt an element will have a “type” attribute, whose value then determines what child elements are permitted. Previously we had been defining a single struct, whose fields represented all possible children across all the permitted type values. This did not scale well and gave the developer no clue what content is valid for each type value. In the new approach, for each distinct type attribute value, we now define a distinct Go struct to hold the contents. This will cause API breakage for apps already using libvirt-go-xml, but on balance it is worth it to get a better structure over the long term. There were also cases where a child XML element previously represented a single value and this was mapped to a scalar struct field. Libvirt then added one or more attributes on this element, meaning the scalar struct field had to turn into a struct field that points to another struct. These kinds of changes cannot be avoided in any nice manner, so while we endeavour not to gratuitously change current structs, if the libvirt XML schema gains new content, it might trigger further changes in the libvirt-go-xml structs that are not 100% backwards compatible.

Since we are now tracking libvirt.git XML schemas, going forward we’ll probably add tags in the libvirt-go-xml repo that correspond to each libvirt release. So for app developers we’ll encourage use of Go vendoring to pull in a precise version of libvirt-go-xml instead of blindly tracking master all the time.

Secure Boot — Fedora, RHEL, and Shim Upstream Maintenance: Government Involvement or Lack Thereof

Posted by Peter Jones on December 07, 2017 12:33 PM

You probably remember when I said some things about Secure Boot in June of 2014. I said there’d be more along those lines, and there is.

So there’s another statement about that here.

I’m going to try to remember to post a message like this once per month or so. If I miss one, keep an eye out, but maybe don’t get terribly suspicious unless I miss several in a row.

Note that there are parts of this chain I’m not a part of, and obviously linux distributions I’m not involved in that support Secure Boot. I encourage other maintainers to offer similar statements for their respective involvement.