Fedora People

Fedora 25: running Geekbench.

Posted by mythcat on February 19, 2017 12:24 PM
You can test your CPU with this software and see the report online.
The official website describes this tool as follows:
Geekbench 4 measures your system's power and tells you whether your computer is ready to roar. How strong is your mobile device or desktop computer? How will it perform when push comes to crunch? These are the questions that Geekbench can answer.
You can use it for free or buy a license, and you can get it from here.
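Downloading and unpacking the tryout build looks roughly like this (a hedged sketch; the download URL is an assumption, so grab the current link from the official download page):

# the download URL below is an assumption; check the official download page first
wget https://cdn.geekbench.com/Geekbench-4.0.4-Linux.tar.gz
tar xf Geekbench-4.0.4-Linux.tar.gz
cd Geekbench-4.0.4-Linux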
Let's see how it works and what is tested:
[mythcat@localhost Geekbench-4.0.4-Linux]$ ls
geekbench4 geekbench.plar geekbench_x86_32 geekbench_x86_64
[mythcat@localhost Geekbench-4.0.4-Linux]$ ./geekbench4
[0219/140337:INFO:src/base/archive_file.cpp(43)] Found archive at
/home/mythcat/build.pulse/dist/Geekbench-4.0.4-Linux/geekbench.plar
Geekbench 4.0.4 Tryout : http://www.geekbench.com/

Geekbench 4 is in tryout mode.

Geekbench 4 requires an active Internet connection when in tryout mode, and
automatically uploads test results to the Geekbench Browser. Other features
are unavailable in tryout mode.

Buy a Geekbench 4 license to enable offline use and remove the limitations of
tryout mode.

If you would like to purchase Geekbench you can do so online:

https://store.primatelabs.com/v4

If you have already purchased Geekbench, enter your email address and license
key from your email receipt with the following command line:

./geekbench4 -r email address="" license key=""

Running Gathering system information
System Information
Operating System Linux 4.9.9-200.fc25.x86_64 x86_64
Model Gigabyte Technology Co., Ltd. B85-HD3
Motherboard Gigabyte Technology Co., Ltd. B85-HD3
Processor Intel Core i5-4460 @ 3.40 GHz
1 Processor, 4 Cores, 4 Threads
Processor ID GenuineIntel Family 6 Model 60 Stepping 3
L1 Instruction Cache 32.0 KB x 2
L1 Data Cache 32.0 KB x 2
L2 Cache 256 KB x 2
L3 Cache 6.00 MB
Memory 7.26 GB
BIOS American Megatrends Inc. F2
Compiler Clang 3.8.0 (tags/RELEASE_380/final)

Single-Core
Running AES
Running LZMA
Running JPEG
Running Canny
Running Lua
Running Dijkstra
Running SQLite
Running HTML5 Parse
Running HTML5 DOM
Running Histogram Equalization
Running PDF Rendering
Running LLVM
Running Camera
Running SGEMM
Running SFFT
Running N-Body Physics
Running Ray Tracing
Running Rigid Body Physics
Running HDR
Running Gaussian Blur
Running Speech Recognition
Running Face Detection
Running Memory Copy
Running Memory Latency
Running Memory Bandwidth

Multi-Core
Running AES
Running LZMA
Running JPEG
Running Canny
Running Lua
Running Dijkstra
Running SQLite
Running HTML5 Parse
Running HTML5 DOM
Running Histogram Equalization
Running PDF Rendering
Running LLVM
Running Camera
Running SGEMM
Running SFFT
Running N-Body Physics
Running Ray Tracing
Running Rigid Body Physics
Running HDR
Running Gaussian Blur
Running Speech Recognition
Running Face Detection
Running Memory Copy
Running Memory Latency
Running Memory Bandwidth


Uploading results to the Geekbench Browser. This could take a minute or two
depending on the speed of your internet connection.

Upload succeeded. Visit the following link and view your results online:

Integrate Dovecot IMAP with (Free)IPA using Kerberos SSO

Posted by Luc de Louw on February 19, 2017 10:59 AM

Dovecot can make use of Kerberos authentication, so you can enjoy Single Sign-On when checking email via IMAP. This post shows you how to enable this feature. With IPA it's rather simple to do so. First enroll your mail server in the IPA domain with ipa-client-install as described in various previously posted articles. Creating a Kerberos Service Principal […]

The post Integrate Dovecot IMAP with (Free)IPA using Kerberos SSO appeared first on Luc de Louw's Blog.

Gitlab, Pelican and Let’s Encrypt for a secure blog

Posted by Fedora Magazine on February 19, 2017 02:58 AM

The Fedora Community is considering requiring HTTPS for blogs to be published on fedoraplanet.org. While it is currently possible to host an SSL blog on both GitHub and GitLab Pages, only GitLab supports SSL for custom domains. This article is a tutorial on how to use Pelican and Let’s Encrypt to produce a blog hosted on GitLab Pages.

The first step is to create the directory structure to support the verification process used by Let’s Encrypt. This process involves serving a page from a hidden directory. To create the directory, run:

mkdir -p .well-known/acme-challenge
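For GitLab Pages to actually serve that hidden directory, Pelican has to copy it into the generated output. A minimal sketch of the relevant pelicanconf.py setting, assuming you created .well-known inside your Pelican content directory (the layout is an assumption about your project):

# pelicanconf.py - make Pelican copy the ACME challenge directory into output/
# (keep any STATIC_PATHS entries you already have)
STATIC_PATHS = ['.well-known']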

At this point you need to install certbot so you can request a certificate from your computer.

sudo dnf install certbot

After the install is complete, issue the following command to generate a certificate for a remote site.

certbot certonly -a manual -d yoursite.com --config-dir ~/letsencrypt/config --work-dir ~/letsencrypt/work --logs-dir ~/letsencrypt/logs

Replace ‘yoursite.com’ with your chosen site. The results will be as follows; the log string for the file name and contents will be different.

Make sure your web server displays the following content at
 http://yoursite.com/.well-known/acme-challenge/uF2HODXEnO98ZRBLhDwFR0yOpGkyg0UyP4QZHImDfd1 before continuing:

uF2HODXEnO98ZRBLhDwFR0yOpGkyg0UyP4QZHImJ8qY.imp4JScFS23eaYWG4tF5e9TSRfGwDuFMmkQTiqN73t8

If you don't have HTTP server configured, you can run the following
 command on the target server (as root):

mkdir -p /tmp/certbot/public_html/.well-known/acme-challenge
 cd /tmp/certbot/public_html
 printf "%s" uF2HODXEnO98ZRBLhDwFR0yOpGkyg0UyP4QZHImJ8qY.imp4JScFS23eaYWG4tF5e9TSRfGwDuFMmkQTiqN73t8 > .well-known/acme-challenge/uF2HODXEnO98ZRBLhDwFR0yOpGkyg0UyP4QZHImDfd1
# run only once per server:
$(command -v python2 || command -v python2.7 || command -v python2.6) -c \
"import BaseHTTPServer, SimpleHTTPServer; \
s = BaseHTTPServer.HTTPServer(('', 80), SimpleHTTPServer.SimpleHTTPRequestHandler); \
s.serve_forever()"

Press ENTER to continue


IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
 /home/cprofitt/letsencrypt/config/live/hub.cprofitt.com/fullchain.pem.
 Your cert will expire on 2017-05-19. To obtain a new or tweaked
 version of this certificate in the future, simply run certbot
 again. To non-interactively renew *all* of your certificates, run
 "certbot renew"
 - If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
 Donating to EFF:                    https://eff.org/donate-le


Anaconda Install Banners get a Makeover!

Posted by Mary Shakshober on February 19, 2017 01:02 AM

A redesign/update of the Anaconda install banners has been an ongoing project for me since the summer and has recently, in the past month or so, seen a fair amount of conversation on its Pagure ticket. I have done multiple series of iterations for these banners, and in the past couple of weeks have established a design that represents the Fedora vibe. There are three, sort of, sub-categories for the banners: Common Banners, Server-specific Banners, and Desktop-specific Banners. At this point I have completed drafts of the Common banners (available on all editions) and the Desktop-specific banners (available in addition to Common for Desktop editions).

If you’d like to follow the ticket and help give feedback on the incoming iterations, take a look at https://pagure.io/design/issue/438

Here’s a sneak peek of what’s to come for Anaconda!

COMMON BANNER series
DESKTOP-SPECIFIC series

Rawhide notes from the trail: 2017-02-18 edition

Posted by Kevin Fenzi on February 19, 2017 12:27 AM

Greetings everyone, let's dive right into the latest changes in the rawhide world:

  • The Fedora 26 mass rebuild ran and finished last weekend. 16,352 successful builds happened, along with around 1,000 that failed to build. Now we have a few weeks until we branch f26 off to fix things up.
  • The mass rebuild did disrupt signing of normal updates. Perhaps next mass rebuild we should look at standing up another set of signing servers to just sign the mass rebuild.
  • Composes for the last few days have failed. It turns out it's due to an unsigned package. But how could that happen? We passed all the builds through the regular signing process. It turns out that when builds were tagged in, a few of them overrode newer versions already in rawhide, so releng ran a custom script to retag the newer builds back in. However, there was one package where the maintainer had built a new version, decided for some reason it was unusable, and untagged it. That's fine, but the custom script mistakenly tagged this "newer" build back in, and it was long enough ago that its signature had been removed. Just a short note here about "newer": koji has no concept of package versions. If you ask koji for all the 'newest' builds in a tag, it will give you the most recently tagged ones. Importantly, this has nothing at all to do with the package epoch-version-release; that's just not a level koji knows or cares about (see the short example below).
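A hedged illustration of that point with the koji command line (the tag and package names are just examples):

# koji's idea of "latest" is the most recently *tagged* build of each package,
# not the highest epoch-version-release
koji list-tagged --latest f26
koji list-tagged --latest f26 kernel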

Finally checking out FlatPak

Posted by Ankur Sinha "FranciscoD" on February 18, 2017 07:14 PM

I've been reading about FlatPak for a while now in various places (Planet Fedora) but I hadn't given it a try yet. I saw Jiri's post on the planet earlier today and finally decided to install the Firefox Nightlies using FlatPak. Of course, it works really well. I've gone ahead and installed the Telegram nightly from the FlatPak website too.

The instructions are all there in the documentation here. It's really quite simple. On Fedora, first, you must have flatpak installed:

sudo dnf install flatpak

Then, you go to the FlatPak website and click on an app that you want to install. This opens up the Gnome Software centre that installs the application for you. The application then shows up in the list in the activities menu on Gnome. For Firefox, you can follow the instructions here. For example, I now have the Firefox nightly installed:

Screenshot showing Firefox nightly FlatPak application
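If you prefer the command line over Gnome Software, flatpak can also install straight from a .flatpakref file. A hedged sketch (the ref URL and application ID are assumptions taken from the nightly repository, so double-check them against the instructions linked above):

# install from a .flatpakref (URL and app ID are assumptions), then run the app
flatpak install --from https://firefox-flatpak.mojefedora.cz/org.mozilla.FirefoxNightly.flatpakref
flatpak run org.mozilla.FirefoxNightly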

I now intend to make some time to learn more about FlatPak - I've read bits and pieces here and there about some of the great features it brings - sandboxing and so on - and it looks quite cool!

Project Idea: PI Sw1tch

Posted by Mo Morsi on February 18, 2017 05:02 PM

While gaming is not high on my agenda anymore (... or rather at all), I have recently been mulling buying a new console, to act as much as a home entertainment center as a gaming system.

Having owned several generations of PlayStation and Sega products, a few new consoles caught my eye. While the most "open" solution, the Steambox, sort of fizzled out, Nintendo's latest console, the Switch, does seem to stand out from the crowd. The balance between power and portability looks like a good fit, and given Nintendo's previous successes, it wouldn't be surprising if it became a hit.

In addition to serving the separate home and mobile gaming markets, new entertainment mechanisms need to provide seamless integration between the two environments, as well as offer comprehensive data and information access capabilities. After all, what would be the point of a gaming tablet if you couldn't watch Youtube on it! Neal Stephenson recently touched on this at his latest TechCrunch talk, expressing a vision of technology that is more integrated/synergized with our immediate environment. While mobile solutions these days offer a lot in terms of processing power, nothing quite offers the comfort or immersion that a console / home entertainment solution provides (not to mention mobile phones being horrendous interfaces for gaming purposes!)

Being the geek that I am, this naturally led me to think about developing a hybrid mechanism of my own, based on open / existing solutions, so that it could be prototyped and demonstrated quickly. Having recently bought a Raspberry Pi (after putting my Arduino to use in my last microcontroller project), and a few other odds and ends, I whipped up the following:

The idea is simple: the Raspberry Pi would act as the 'console', with a plethora of games and 'apps' available (via open repositories, Steam, emulators, and many more... not to mention Nethack!). It would be anchorable to the wall, desk, or any other surface using a 3D-printed mount, and made portable via a cheap wireless controller / LCD display / battery pack setup (tied together through another custom 3D-printed bracket). The entire rig would be quick to assemble and easy to use: simply snap the Pi into the wall mount to play on your TV; remove it and snap it into the controller bracket to take it on the go.

I suspect the power component is going to be the most difficult to nail down; finding an affordable USB power source that is lightweight but offers sufficient juice to drive the Raspberry Pi with an LCD might be tricky. But if this is done correctly, all components will be interchangeable, and one can easily plug in a lower-power microcontroller and/or custom hardware component for a tailored experience.

If there is any interest, let me know via email. If 3 or so people commit, this could be done in a weekend! (stay tuned for updates!)

read more

modulemd 1.1.0

Posted by Petr Šabata on February 18, 2017 10:14 AM

This is a little belated announcement but let it be known that I released a new version of the module metadata library, modulemd-1.1.0, earlier this week!

This version changes the default behavior of the xmd block a little; it now defaults to being an empty dictionary rather than null. We’re also a lot smarter when it comes to loading the build and runtime dependencies blocks, reading the whole structures rather than assuming they are correct. Last but not least, it now also installs its test suite properly under modulemd.tests. That was a dumb bug. Sorry about that.

All systems go

Posted by Fedora Infrastructure Status on February 18, 2017 09:42 AM
Service 'Fedora pastebin service' now has status: good: Everything seems to be working.

#LinuxPlaya Preparation

Posted by Julita Inca Chiroque on February 18, 2017 04:05 AM

As #LinuxPlaya draws near, we’ve been preparing things for the event. We first ran a workshop to help others finish the GTK+Python tutorial for developers, while some students from different universities in Lima wrote posts to show that they use Linux (FEDORA+GNOME). You can see in the following list the various areas where they have worked: design, robotics, education, and the use of technologies such as Docker and a Snake GTK game.


Thanks to GNOME and FEDORA we are going to the Santa Maria beach on March 4th, and we are going to do more than social networking. In the morning we are going to present our projects, and then we are going to encourage the attendees to apply to the GSoC program. Lunch and afternoon games are also planned for the occasion. This is the summer merchandise we are going to offer to our guests.

It’s a pleasure to have Damian Nohales from GNOME Argentina as our international guest.


Most of the participants are also leaders in their universities and they are going to replicate these meetings at their own institutions. This is the case of Leyla Marcelo, an entrepreneurial leader at her university, UPN, and our designer for the last Linux events I organised in Lima, Peru. Special thanks to Softbutterfly for the Internet support that day!



Filed under: FEDORA, GNOME Tagged: #LinuxPlaya, evento Linux, fedora, GNOME, Julita, Julita Inca, Lima, linux, Linux Playa, Perú, Softbuttterfly

New releases in XFCE

Posted by Robert Antoni Buj Gelonch on February 17, 2017 08:52 PM

generator: stats.sh

Date Package Version
2017-02-16 xfce4-weather-plugin 0.8.9
2017-02-13 xfce4-notifyd 0.3.5
2017-02-13 Thunar 1.6.11
2017-02-12 xfce4-taskmanager 1.2.0
2017-02-10 xfce4-systemload-plugin 1.2.1
2017-02-10 xfce4-netload-plugin 1.3.1
2017-02-06 xfce4-terminal 0.8.4
2017-02-03 xfce4-whiskermenu-plugin 1.7.0-src
2017-02-01 ristretto 0.8.2
2017-01-28 xfce4-mount-plugin 1.1.0
2016-11-28 xfce4-clipman-plugin 1.4.1
2016-11-12 xfce4-time-out-plugin 1.0.2
2016-11-11 xfce4-verve-plugin 1.1.0
2016-11-05 xfce4-wavelan-plugin 0.6.0
2016-11-05 xfce4-smartbookmark-plugin 0.5.0
2016-11-05 xfce4-mpc-plugin 0.5.0
2016-11-05 xfce4-fsguard-plugin 1.1.0
2016-11-05 xfce4-diskperf-plugin 2.6.0
2016-11-05 xfce4-datetime-plugin 0.7.0
2016-11-05 xfce4-battery-plugin 1.1.0
2016-10-25 xfce4-panel 4.12.1
2016-09-15 xfce4-settings 4.12.1
2016-09-08 xfdashboard 0.7.0
2016-07-20 xfce4-hardware-monitor-plugin 1.5.0
2016-07-07 thunar-vcs-plugin 0.1.5
2016-04-26 xfce4-eyes-plugin 4.4.5
2016-04-26 xfce4-dict 0.7.2
2016-04-26 xfce4-cpufreq-plugin 1.1.3
2016-03-19 xfce4-power-manager 1.6.0
2015-10-16 parole 0.8.1
2015-09-15 exo 0.10.7
2015-07-24 xfce4-embed-plugin 1.6.0
2015-07-20 xfdesktop 4.12.3
2015-06-25 xfce4-notes-plugin 1.8.1
2015-05-17 xfburn 0.5.4
2015-05-16 xfwm4 4.12.3
2015-04-10 orage 4.12.1
2015-04-05 garcon 0.5.0
2015-03-29 xfce4-xkb-plugin 0.7.1
2015-03-29 xfce4-timer-plugin 1.6.0
2015-03-17 thunar-volman 0.8.1
2015-03-16 xfce4-session 4.12.1
2015-03-15 libxfce4ui 4.12.1
2015-03-09 xfce4-places-plugin 1.7.0
2015-03-01 xfce4-appfinder 4.12.0
2015-03-01 mousepad 0.4.0
2015-02-28 tumbler 0.1.31
2015-02-28 libxfce4util 4.12.1
2015-01-25 xfce4-screenshooter 1.8.2
2014-01-09 gigolo 0.4.2
2013-10-25 xfce4-mailwatch-plugin 1.2.0
2013-05-11 thunar-media-tags-plugin 0.2.1
2013-05-11 thunar-archive-plugin 0.3.1
2012-10-11 xfce4-mixer 4.10.0
2012-07-10 xfce4-cpugraph-plugin 1.0.5
2012-05-12 xfce4-genmon-plugin 3.4.0
2011-10-23 xfmpc 0.2.2

Filed under: Fedora

Wrapping your head around SSH tunnels

Posted by Sachin Kamath on February 17, 2017 02:13 PM

This post is for educational purposes only. VPNs might be illegal in some countries. If you are not sure of the consequences of tunnelling over a network or using a VPN, please do not attempt to do so. You have been warned.

This is my first post in the Tunnelling and OpenVPN series. More coming up soon :)

It's been really long since I blogged, so here goes a pretty long and detailed post about SSH tunnels. I have been playing around with VPNs for quite some time now and have learned a lot about networking, tunnelling and other awesome things about creating stable networks. OpenVPN is a free and open source application that implements the features of a Virtual Private Network (VPN) to create a point-to-point secure connection. You can check out the features of OpenVPN here. The possibilities are endless with OpenVPN. Using it, you can build everything ranging from a simple proxy server to a completely anonymous and secure private network of people.

I started digging into the features of OpenVPN when my university started tightening the campus network by only allowing traffic through ports 80 and 443. (Yes! 22 was blocked.) Initially, I thought it was the end of git over SSH, until I found out I could SSH over the HTTPS port on GitHub. Take a look at the article here.
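For reference, that trick boils down to pointing SSH at GitHub's SSH-over-HTTPS endpoint. A minimal ~/.ssh/config sketch (ssh.github.com on port 443 is the endpoint GitHub documents for this):

# ~/.ssh/config - send GitHub SSH traffic over port 443
Host github.com
    Hostname ssh.github.com
    Port 443
    User git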

Before we get carried away, let's get back to VPN talk. One of the solutions to "port blocks" is SSH tunnelling.

"If we see light at the end of the tunnel, it is the light of the oncoming train" ~ Robert Lowell.

SSH tunnelling, also known as "Poor Man's VPN", is a very powerful feature of SSH which creates a secure connection between a local computer and a remote machine through which services can be relayed.

Let us try to understand SSH tunnelling first. Creating an SSH tunnel is simple. Let us assume Mr. FooMan has a cloud server in Singapore with the SSH daemon running on port 22 (the default port), and he wants to redirect all his traffic via the tunnel rather than directly. All he has to do is ssh into his box using the -D directive:

ssh -D 27015 fooman@hissingaporeserver.com -p 22

Quoting the man page of SSH:

-D [bind_address:]port

Specifies a local “dynamic” application-level port forwarding. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. Currently the SOCKS4 and SOCKS5 protocols are supported, and ssh will act as a SOCKS server. Only root can forward privileged ports. Dynamic port forwardings can also be specified in the configuration file.

As always, if you need to use a port below 1024, you can, but you will have to be root. To verify that the tunnel is up, go ahead and run netstat -tlpn on the local machine. If everything goes well, you should see something like this:


Fig 1 : Port 27015 being used by the SSH process

This means that the SSH process is now listening on port 27015 for any connections. You can now use this port for redirecting all your browser traffic or set it as a SOCKS proxy on any application that supports proxified traffic.
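A quick way to confirm that traffic really flows through the tunnel is to point a SOCKS-aware client at it. A hedged example with curl (the IP lookup service is just an example):

# compare your public IP without and with the SOCKS proxy
curl https://icanhazip.com
curl --socks5-hostname localhost:27015 https://icanhazip.com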

Let us set a system-wide proxy on Linux. For this, fire up Network Settings, select Proxy and choose the method as Manual. Now set the SOCKS proxy to be localhost and the port as 27015 (or the port that followed your -D directive).


Once you are done, check your IP address. Voila! You have successfully proxified your entire system. Make sure you disable the proxy when you are done using it, or you won't be able to access the internet.

You can also configure just your Web Browser to use the proxy. I use FoxyProxy to achieve this. The configuration is pretty much the same except it acts as a plugin for your browser.

There are a lot of limitations in this case. SSH tunnelling will only work if your university/office allows outgoing traffic on 22 (most probably blocked in most universities). If that is the case, you will have to take extra steps to work around the block.

I will be covering OpenVPN in my upcoming posts, so stay tuned! If you have anything on your mind and want to share, do drop a comment below :)

News: OpenSSL Security Advisory [16 Feb 2017]

Posted by mythcat on February 17, 2017 01:02 PM
According to this website:  www.openssl.org/news


OpenSSL Security Advisory [16 Feb 2017]
========================================

Encrypt-Then-Mac renegotiation crash (CVE-2017-3733)
====================================================

Severity: High

During a renegotiation handshake if the Encrypt-Then-Mac extension is
negotiated where it was not in the original handshake (or vice-versa) then this
can cause OpenSSL to crash (dependent on ciphersuite). Both clients and servers
are affected.

OpenSSL 1.1.0 users should upgrade to 1.1.0e

This issue does not affect OpenSSL version 1.0.2.

This issue was reported to OpenSSL on 31st January 2017 by Joe Orton (Red Hat).
The fix was developed by Matt Caswell of the OpenSSL development team.

Note
====

Support for version 1.0.1 ended on 31st December 2016. Support for versions
0.9.8 and 1.0.0 ended on 31st December 2015. Those versions are no longer
receiving security updates.

References
==========

URL for this Security Advisory:
https://www.openssl.org/news/secadv/20170216.txt

Note: the online version of the advisory may be updated with additional details
over time.

For details of OpenSSL severity classifications please see:
https://www.openssl.org/policies/secpolicy.html

2016 – My Year in Review

Posted by Justin W. Flory on February 17, 2017 08:30 AM

Before looking too far ahead to the future, it’s important to spend time reflecting on the past year’s events, identifying successes and failures, and devising ways to improve. Finding the right words to describe my 2016 is a challenge. This post continues a habit I started last year with my 2015 Year in Review. One thing I discover nearly every day is that I’m always learning new things from various people and circumstances. Even though 2017 is already getting started, I want to reflect back on some of these experiences and opportunities of the past year.

Preface

When I started writing this in January, I read freenode‘s “Happy New Year!” announcement. Even though their recollection of the year began as a negative reflection, the freenode team did not fail to find some of the positives of the year as well. The attitude in their blog post mirrors the attitude of many others today. 2016 has brought more than its share of sadness, fear, and a bleak unknown, but the colors of radiance, happiness, and hope have not faded either. Even though some of us celebrated the end of 2016 and its tragedies, two thoughts stay in my mind.

One, it is fundamentally important for all of us to stay vigilant and aware of what is happening in the world around us. The changing political atmosphere of the world has brought a shroud of unknowing, and the changing of a number does not and will not signify the end of these doubts and fears. 2017 brings its own series of unexpected events. I don’t consider this a negative, but in order for it not to become a negative, we must constantly remain active and aware.

Secondly, despite the more bleak moments of this year, there has never been a more important time to embrace the positives of the past year. For every hardship faced, there is an equal and opposite reaction. Love is all around us and sometimes where we least expect it. Spend extra time this new year remembering the things that brought you happiness in the past year. Hold them close, but share that light of happiness with others too. You might not know how much it’s needed.

First year of university: complete!

Many things changed since I decided to pack up my life and go to a school a thousand miles away from my hometown. In May, I officially completed my first year at the Rochester Institute of Technology, finishing the full year on the dean’s list. Even though it was only a single year, the changes from my decision to make the move are incomparable. Rochester exposed me to amazing, brilliant people. I’m connected to organizations and groups based on my interests like I never imagined. My courses are challenging, but interesting. If there is anything I am appreciative of in 2016, it is the opportunities that presented themselves to me in Rochester.

Adventures into FOSS@MAGIC


On 2016 Dec. 10th, the “FOSS Family” went to dinner at a local restaurant to celebrate the semester

My involvement with the Free and Open Source Software (FOSS) community at RIT has grown exponentially since I began participating in 2015. I took my first course in the FOSS minor, Humanitarian Free and Open Source Software Development, in spring 2016. In the following fall 2016 semester, I became the teaching assistant for the course. I helped show our community’s projects at Imagine RIT. I helped carry the RIT FOSS flag in California (more on that later). The FOSS@MAGIC initiative was an influencing factor in my decision to attend RIT and continues to have an impact on my life as a student.

I eagerly look forward to future opportunities for the FOSS projects and initiatives at RIT to grow and expand. Bringing open source into more students’ hands excites me!

I <3 WiC

With a new schedule, the fall 2016 semester marked the beginning of my active involvement with the Women in Computing (WiC) program at RIT, as part of the Allies committee. Together with other members of the RIT community, we work together to find issues in our community, discuss them and share experiences, and find ways to grow the WiC mission: to promote the success and advancement of women in their academic and professional careers.

WiCHacks 2016 Opening Ceremony

In spring 2016, I participated as a volunteer for WiCHacks, the annual all-female hackathon hosted at RIT. My first experience with WiCHacks left me impressed by all the hard work by the organizers and the entire atmosphere and environment of the event. After participating as a volunteer, I knew I wanted to become more involved with the organization. Fortunately, fall 2016 enabled me to become more active and engaged with the community. Even though I will be unable to attend WiCHacks 2017, I hope to help support the event in any way I can.

Also, hey! If you’re a female high school or university student in the Rochester area (or willing to do some travel), you should seriously check this out!

Google Summer of Code

Google Summer of Code (GSoC) is an annual program run by Google. Google works with open source projects to offer stipends so they can pay students to work on projects over the summer. In a last-minute decision to apply, I was accepted as a contributing student to the Fedora Project. My proposal was to work within the Fedora Infrastructure team to help automate the WordPress platforms with Ansible. My mentor, Patrick Uiterwijk, provided much of the motivation for the proposal and worked with me throughout the summer as I began learning Ansible for the first time. Over the course of the summer, the knowledge I gained began to turn into practical experience.

It would be unfair for a reflection to count successes but not failures. GSoC was one of the most challenging and stressful activities I’ve ever participated in. It was a complete learning experience for me. One area I noted that I needed to improve on was communication. My failing point was not regularly communicating what I was working through or stuck on with my mentor and the rest of the Fedora GSoC community. GSoC taught me the value of asking questions often when you’re stuck, especially in an online contribution format.

On the positive side, GSoC helped formally introduce me to Ansible, and to a lesser extent, the value of automation in operations work. My work in GSoC helped enable me to become a sponsored sysadmin of Fedora, where I mostly focus my time contributing to the Badges site. Additionally, my experience in GSoC helped me when interviewing for summer internships (also more on this later).

Google Summer of Code came with many ups and downs. But I made it and passed the program. I’m happy and fortunate to have received this opportunity from the Fedora Project and Google. I learned several valuable lessons that have and will impact going forward into my career. I look forward to participating either as a mentor or organizer for GSoC 2017 with the Fedora Project this year.

Flock 2016


Group photo of all Flock 2016 attendees outside of the conference venue (Photo courtesy of Joe Brockmeier)

Towards the end of summer, in the beginning of August, I was accepted as a speaker to the annual Fedora Project contributor conference, Flock. As a speaker, my travel and accommodation were sponsored to the event venue in Kraków, Poland.

Months after Flock, I am still incredibly grateful for receiving the opportunity to attend the conference. I am appreciative and thankful to Red Hat for helping cover my costs to attend, which is something I would never be able to do on my own. Outside of the real work and productivity that happened during the conference, I am happy to have mapped names to faces. I met incredible people from all corners of the world and have made new lifelong friends (who I was fortunate to see again in 2017)! Flock introduced me in-person to the diverse and brilliant community behind the Fedora Project. It is an experience that will stay with me forever.

To read a more in-depth analysis of my time in Poland, you can read my full write-up of Flock 2016.


On a bus to the Kraków city center with Bee Padalkar, Amita Sharma, Jona Azizaj, and Giannis Konstantinidis (left to right).

Maryland (Bitcamp), Massachusetts (HackMIT), California (MINECON)


The Fedora Ambassadors at Bitcamp 2016. Left to right: Chaoyi Zha (cydrobolt), Justin W. Flory (jflory7), Mike DePaulo (mikedep333), Corey Sheldon (linuxmodder)

2016 provided me the opportunity to explore various parts of my country. Throughout the year, I attended various conferences to represent the Fedora Project, the SpigotMC project, and the RIT open source community.

There are three distinct events that stand out in my memory. For the first time, I visited the University of Maryland for Bitcamp as a Fedora Ambassador. It also provided me an opportunity to see my nation’s capital for the first time. I visited Boston for the first time this year as well, for HackMIT, MIT’s annual hackathon event. There I again participated as a Fedora Ambassador and met brilliant students from around the country (and even the world, with one student I met flying in from India for the weekend).

"Team Ubuntu" shows off their project to Charles Profitt before the project deadline for HackMIT 2016

“Team Ubuntu” shows off their project to Charles Profitt before the project deadline for HackMIT 2016

Lastly, I also took my first journey to the US west coast for MINECON 2016, the annual Minecraft convention. I attended as a staff member of the SpigotMC project and a representative of the open source community at RIT.

All three of these events have their own event reports to go with them. More info and plenty of pictures are in the full reports.

Vermont 2016 with Matt


Shortly after I arrived, Matt took me around to see the sights and find coffee.

Some trips happen without prior arrangements and planning. Sometimes, the best memories are made by not saying no. I remember the phone call with one of my closest friends, Matt Coutu, at some point in October. On a sudden whim, we planned my first visit to Vermont to visit him. Some of the things he told me to expect made me excited to explore Vermont! And then in the pre-dawn hours of November 4th, I made the trek out to Vermont to see him.


50 feet up into the air atop Spruce Mountain was colder than we expected.

As soon as I crossed the state border, I knew this was one of the most beautiful states I had ever visited. During the weekend, the two of us did things that I think only the two of us would enjoy. We climbed a snowy mountain to reach an abandoned fire watchtower, where we endured a mini blizzard. We walked through a city without a specific destination in mind, going wherever the moment took us.

We visited a quiet dirt road that led to a meditation house and cavern maintained by monks, where we meditated and drank in the experience. I wouldn’t classify the trip as a high-energy or engaging trip, but for me, it was one of the most enjoyable trips I’ve embarked on yet. There are many things that I still hold on to from that weekend for remembering or reflecting back on.

A big shout-out to Matt for always supporting me with everything I do and always being there when we need each other.


Martin Bridge may not be one of your top places to visit in Vermont, but if you keep going, you’ll find a one-of-a-kind view.

Finally seeing NYC with Nolski


Mike Nolan and I venture through New York City early on a Sunday evening

Not long after the Vermont trip, I purchased tickets to see my favorite band, El Ten Eleven, in New York City on November 12th. What started as a one-day trip to see the band turned into an all-weekend trip to see the band, see New York City, and spend some time catching up with two of my favorite people, Mike Nolan (nolski) and Remy DeCausemaker (decause). During the weekend, I saw the World Trade Center memorial site for the first time, tried some amazing bagels, explored virtual reality in Samsung’s HQ, and got an exclusive inside look at the Giphy office.

This was my third time in New York City, but my first time to explore the city. Another shout-out goes to Mike for letting me crash on his couch and stealing his Sunday to walk through his metaphorical backyard. Hopefully it isn’t my last time to visit the city either!

Finalizing study abroad


This may be cheating since it was taken in 2017, but this is one of my favorite photos from Dubrovnik, Croatia so far. You can find more like this on my 500px gallery!

At the end of 2016, I finalized a plan that was more than a year in the making. I applied and was accepted to study abroad at the Rochester Institute of Technology campus in Dubrovnik, Croatia. RIT has a few satellite campuses across the world: two in Croatia (Zagreb and Dubrovnik) and one in Dubai, UAE. In addition to being accepted, the university provided me a grant to further my education abroad. I am fortunate to have received this opportunity and can’t wait to spend the next few months of my life in Croatia. I am currently studying in Dubrovnik since January until the end of May.

During my time here, I will be taking 12 credit hours of courses. I am taking ISTE-230 (Introduction to Database and Data Modeling), ENGL-361 (Technical Writing), ENVS-150 (Ecology of the Dalmatian Coast), and lastly, FOOD-161 (Wines of the World). The last one was a fun one that I took for myself to try broadening my experiences while abroad.

Additionally, one of my personal goals for 2017 is to practice my photography skills. During my time abroad, I have created a gallery on 500px where I upload my top photos from every week. I welcome feedback and opinions about my pictures, and if you have criticism for how I can improve, I’d love to hear about it!

Accepting my first co-op

The last big break that I had in 2016 was accepting my first co-op position. Starting in June, I will be a Production Engineering Intern at Jump Trading, LLC. I started interviewing with Jump Trading in October and even had an on-site interview that brought me to their headquarters in Chicago at the beginning of December. After meeting the people and understanding the culture of the company, I am happy to accept a place on the team. I look forward to learning from some of the best in the industry and hope to contribute to some of the fascinating projects going on there.

From June until late August, I will be starting full-time at their Chicago office. If you are in the area or ever want to say hello, let me know and I’d be happy to grab coffee, once I figure out where all the best coffee shops in Chicago are!

In summary

2015 felt like a difficult year to follow, but 2016 exceeded my expectations. I acknowledge and I’m grateful for the opportunities this year presented to me. Most importantly, I am thankful for the people who have touched my life in a unique way. I met many new people and strengthened my friendships and bonds with many old faces too. All of the great things from the past year would not be possible without the influence, mentorship, guidance, friendship, and comradery these people have given me. My mission is to always pay it forward to others in any way that I can, so that others are able to experience the same opportunities (or better).

2017 is starting off hot and moving quickly, so I hope I can keep up! I can’t wait to see what this year brings and hope that I have the chance to meet more amazing people, and also meet many of my old friends again, wherever that may be.

Keep the FOSS flag high.

The post 2016 – My Year in Review appeared first on Justin W. Flory's Blog.

North America and Fedora: Year in Review

Posted by Fedora Community Blog on February 17, 2017 08:15 AM

The past year has proven to be both challenging and demanding for our Ambassadors. Over the past year a lot of new ideas have been proposed, and more events are being sought out in an attempt to expand our base. Many of these ventures have been hack-a-thons in several states, a relatively new venture in those areas. Through our involvement in these types of events, we quickly discovered that Fedora and the associated spins were a new tool for most of the individuals attending and participating. It was a surprising fact for the community that these young and impressionable individuals seemed to be using Windows more than any other operating system available. Since those few events we (Fedora) attended, there has been an increase in open source software utilization across the board at these types of events, a total and undeniable success.

Looking back at North America events


Fedora Project Leader Matthew Miller and Ambassador Mike DePaulo at the Fedora table during LISA 16

Covering the larger events such as Linux Fest Northwest, SCaLE 14x, OSCON, Texas Linux Fest, Southeast Linux Fest, Ohio Linux Fest, SeaGL, and LISA 2016 was a large and diverse group of North America Ambassadors, each with their own specialties and wide range of knowledge, with the hope of showing the best open source software out today. As usual, the first event of the year was SCaLE 14x, a large event that showcases various operating systems and open source software. We always have a great attendance at SCaLE. This past year, we added another Ambassador to the group with Perry Rivera (lajuggler), who has brought new ideas and vision to our group, alongside our regulars Brian Monroe (ParadoxGuitarist), Scott Williams (VWBUSGUY), and Alejandro, who bridges the gap with our Spanish-speaking customers.

Texas events

OSCON and Texas Linux Fest also proved to be noteworthy. Since OSCON events are usually outside of our price range for sponsorship, we were entirely grateful to Red Hat for allowing us to share the booth with them. Both events were headed by Jon Disnard (parasense) for Fedora. We are also lucky to have Adam Miller (maxamillion), who is within the area and helped on short notice. Both events were successful in explaining the importance of open source software and how Fedora plays a vital role as a leader in technical and inventive ideas that feed right back into the software. These two events were the only ones in the Midwest region that had local (somewhat) ambassadors available.

In the next year, Texas Linux Fest will be held in the fall of 2017. We will see what that brings for attendance; hopefully more, since the local college was in summer recess during the last event in 2016.

US northwest events

Our local Northwest ambassador headed up two events this past year, Linux Fest Northwest and SeaGL, both in the host city of Seattle, Washington. Both events were extremely effective and drew large attendance. Jeff Sandys (jsandys), who has been with the program for eight years, is Fedora’s Seattle-area local ambassador. He has been attending and planning events in the area for some time now. Although we do have some others in the area, Jeff is our active Ambassador for the Pacific Northwest. Thanks to Laura Abbott (labbott) for also helping us with some of our events in Seattle this year on short notice.

US east coast


Fedora Ambassadors attend Bitcamp 2016 at the University of Maryland (left to right: Chaoyi Zha, Justin W. Flory, Mike DePaulo, Corey Sheldon)

The East coast is always busy with events during the year. Our major events include Southeast Linux Fest, Ohio Linux Fest, LISA, Software Freedom Day, and some of the smaller events such as BrickHack, Bitcamp, HackMIT, NASA Space Apps, and FOSSCON. Some key individuals in the planning and event attendance are Ben Williams (kk4ewt), Corey Sheldon (linuxmodder), Justin W. Flory (jflory7), Nick Bebout (nb), Chaoyi Zha (cydrobolt), Dan Mossor (danofsatx) and Michael DePaulo (mikedep333).

Southeast Linux Fest usually has the largest showing of Ambassadors from the Midwest and southeast corners of the country. The past year, we had seven Ambassadors in attendance. This gave us the flexibility to also make ourselves available to other activities during the event. As usual, the Amateur Radio exam was administered by Ben Williams and Nick Bebout along with other smaller activities as well.


Fedora Ambassadors invite BrickHack attendees to join them at the “hacker table” to spend time hanging out with the Fedora community

Ohio Linux Fest is another event we normally attend, in the greater Columbus, OH area. This event usually draws from surrounding states as well, such as Indiana, since there has been no event in the Indianapolis area for the past few years. Sadly, that is because one of our own Ambassadors, Matthew Williams (Lord Drachenblut), lost his battle with cancer… you will be missed.

Some of the smaller events on the east coast (headed up by Corey Sheldon and Justin W. Flory) were also successful in delivering a powerful message to the Free and Open Source community. Even though these events were not on the large scale of attendance of the Southeast or Northwest Linux Fests, the delivery of Fedora was there. We weren’t handing out hundreds of media discs or stickers in volume, but the small, sustainable word of what we are about spreads quickly from a small event through the local Linux Users Groups. The feedback we received from attending was nothing short of wonderful. You will always get those hard-liner folks who use only X or Y and will never consider Z; when asked why they are not willing to try a new or different experience, they can never give an answer supporting what they use in an operating system. Maybe it’s a knowledge factor, or a specific equipment / hardware configuration, who knows. But those individuals will always take swag from our table, and take media as well; maybe they are actually Fedora users but don’t want their other hard-lined friends to know.

Reflecting back

Our Ambassadors are the keys to our success. Without the outstanding group we currently have, I do not think the group would be where it is today. We have some new Ambassadors who joined the group during the year:

We hope the next year will bring us more Ambassadors and more ideas to the table in reference to the best operating system in the open source category.

Event reports

Here are a few links to event reports.

LFNW

LISA 2016

SELF 2016

Ohio Linux Fest

SeaGL

BrickHack 2016

Bitcamp 2016

HackMIT 2016


Image courtesy of Travis Torres – originally posted to Unsplash as “Untitled“. Modifications by Justin W. Flory.

The post North America and Fedora: Year in Review appeared first on Fedora Community Blog.

Saving laptop power with powertop

Posted by Fedora Magazine on February 17, 2017 08:00 AM

If there’s one thing you want from a laptop, it’s long battery life. You want every drop of power you can get to work, read, or just be entertained on a long jaunt. So it’s good to know where your power is going.

You can use the powertop utility to see what’s drawing power when your system’s not plugged in. This utility runs only in the terminal, so you’ll need to open a terminal to use it. To install it, run this command:

sudo dnf install powertop

powertop needs access to hardware to measure power usage. So you have to run it with special privileges too:

sudo powertop

The powertop display looks similar to this screenshot. Power usage on your system will likely be different:

powertop-screenshot

The utility has several screens. You can switch between them using the Tab and Shift+Tab keys. To quit, hit the Esc key. The shortcuts are also listed at the bottom of the screen for your convenience.

The utility shows you power usage for various hardware and drivers. But it also displays interesting numbers like how many times your system wakes up each second. (Processors are so fast that they often sleep for the majority of a second of uptime.)

If you want to maximize battery power, you want to minimize wakeups. One way to do this is to use powertop‘s Tunables page. “Bad” indicates a setting that’s not saving power, although it might be good for performance. “Good” indicates a power saving setting is in effect. You can hit Enter on any tunable to switch it to the other setting.

The powertop package also provides a service that automatically sets all tunables to “Good” for optimal power saving. To use it, run this command:

sudo systemctl start powertop.service

If you’d like the service to run automatically when you boot, run this command:

sudo systemctl enable powertop.service

Caveat about this service and tunables: Certain tunables may risk your data, or (on some odd hardware) may cause your system to behave erratically. For instance, the “VM writeback timeout” setting affects how long the system waits before writing changed data to storage. This means a power saving setting trades off data security. If the system loses all power for some reason, you could lose up to 15 seconds’ of changed data, rather than the default 5. However, for most laptop users this isn’t an issue, since your system should warn you about low battery.
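If you’d rather capture a one-shot report to study or share instead of using the interactive display, powertop can also write its measurements to a file. A hedged example (the flags below are from the powertop 2.x man page, so verify them against your version):

# write an HTML report of the current measurements
sudo powertop --html=powertop-report.html

# or apply all "Good" tunable settings once, without the service
sudo powertop --auto-tune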

PHP version 7.0.16 and 7.1.2

Posted by Remi Collet on February 17, 2017 07:27 AM

RPMs of PHP version 7.1.2 are available in the remi-php71 repository for Fedora 23-25 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 7.0.16 are available in the remi repository for Fedora 25 and in the remi-php70 repository for Fedora 22-24 and Enterprise Linux (RHEL, CentOS).

PHP version 5.5 has reached its end of life and is no longer maintained by the project.

These versions are also available as Software Collections.

No security fix this month, so no update for 5.6.30.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

Replacement of default PHP by version 7.0 installation (simplest):

yum-config-manager --enable remi-php70
yum update

Parallel installation of version 7.0 as Software Collection (x86_64 only):

yum install php70
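To confirm which interpreter you end up with in each mode, something like the following should work (a hedged sketch; the collection names simply mirror the php70 / php71 packages above):

# default PHP after switching the repository
php -v

# Software Collection, run through scl
scl enable php71 'php -v'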

And soon in the official updates:

To be noticed:

  • EL7 RPMs are built using RHEL-7.2
  • EL6 RPMs are built using RHEL-6.8
  • a lot of new extensions are also available; see the PECL extension RPM status page

For more information, read:

Base packages (php)

Software Collections (php56 / php70)

Getting started with Pagure CI

Posted by Adam Williamson on February 17, 2017 06:50 AM

I spent a few hours today setting up a couple of the projects I look after, fedfind and resultsdb_conventions, to use Pagure CI. It was surprisingly easy! Many thanks to Pingou and Lubomir for working on this, and of course Kevin for helping me out with the Jenkins side.

You really do just have to request a Jenkins project and then follow the instructions. I followed the step-by-step, submitted a pull request, and everything worked first time. So the interesting part for me was figuring out exactly what to run in the Jenkins job.

The instructions get you to the point where you’re in a checkout of the git repository with the pull request applied, and then you get to do…whatever you can given what you’re allowed to do in the Jenkins builder environment. That doesn’t include installing packages or running mock. So I figured what I’d do for my projects – which are both Python – is set up a good tox profile. With all the stuff discussed below, the actual test command in the Jenkins job – after the boilerplate from the guide that checks out and merges the pull request – is simply tox.

First things first, the infra Jenkins builders didn’t have tox installed, so Kevin kindly fixed that for me. I also convinced him to install all the variant Python version packages – python26, and the non-native Python 3 packages – on each of the Fedora builders, so I can be confident I get pretty much the same tox run no matter which of the builders the job winds up on.

Of course, one thing worth noting at this point is that tox installs all dependencies from PyPI: if something your code depends on isn’t in there (or installed on the Jenkins builders), you’ll be stuck. So another thing I got to do was start publishing fedfind on PyPI! That was pretty easy, though I did wind up cribbing a neat trick from this PyPI issue so I can keep my README in Markdown format but have setup.py convert it to rst when using it as the long_description for PyPI, so it shows up properly formatted, as long as pypandoc is installed (but works even if it isn’t, so you don’t need pandoc just to install the project).
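The trick amounts to converting the Markdown on the fly when pypandoc is available and falling back to the raw text otherwise. A rough sketch of what that looks like in setup.py (illustrative only, not the exact code from the linked issue):

# hedged sketch: convert README.md to reStructuredText for PyPI when possible
try:
    import pypandoc
    long_description = pypandoc.convert_file('README.md', 'rst')
except ImportError:
    long_description = open('README.md').read()
# ...then pass long_description=long_description to setup()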

After playing with it for a bit, I figured out that what I really wanted was to have two workflows. One is to run just the core test suite, without any unnecessary dependencies, with python setup.py test – this is important when building RPM packages, to make sure the tests pass in the exact environment the package is built in (and for). And then I wanted to be able to run the tests across multiple environments, with coverage and linting, in the CI workflow. There’s no point running code coverage or a linter while building RPMs, but you certainly want to do it for code changes.

So I put the install, test and CI requirements into three separate text files in each repo – install.requires, tests.requires and tox.requires – and adjusted the setup.py files to do this in their setup():

install_requires = open('install.requires').read().splitlines(),
tests_require = open('tests.requires').read().splitlines(),

In tox.ini I started with this:

deps=-r{toxinidir}/install.requires
     -r{toxinidir}/tests.requires
     -r{toxinidir}/tox.requires

so the tox runs get the extra dependencies. I usually write pytest tests, so to start with in tox.ini I just had this command:

commands=py.test

Pytest integration for setuptools can be done in various ways, but I use this one. Add a class to setup.py:

import sys
from setuptools import setup, find_packages
from setuptools.command.test import test as TestCommand

class PyTest(TestCommand):
    user_options = [('pytest-args=', 'a', "Arguments to pass to py.test")]

    def initialize_options(self):
        TestCommand.initialize_options(self)
        self.pytest_args = ''
        self.test_suite = 'tests'

    def run_tests(self):
        #import here, cause outside the eggs aren't loaded
        import pytest
        errno = pytest.main(self.pytest_args.split())
        sys.exit(errno)

and then this line in setup():

cmdclass = {'test': PyTest},

And that’s about the basic shape of it. With an envlist, we get the core tests running both through tox and setup.py. But we can do better! Let’s add some extra deps to tox.requires:

coverage
diff-cover
pylint
pytest-cov

and tweak the commands in tox.ini:

commands=py.test --cov-report term-missing --cov-report xml --cov fedfind
         diff-cover coverage.xml --fail-under=90
         diff-quality --violations=pylint --fail-under=90

By adding a few args to our py.test call we get a coverage report for our library with the pull request applied. The subsequent commands use the neat diff_cover tool to add some more information. diff-cover basically takes the full coverage report (coverage.xml is produced by --cov-report xml) and considers only the lines that are touched by the pull request; the --fail-under arg tells it to fail if there is less than 90% coverage of the modified lines. diff-quality runs a linter (in this case, pylint) on the code and, again, considers only the lines changed by the pull request. As you might expect, --fail-under=90 tells it to fail if the ‘quality’ of the changed code is below 90% (it normalizes all the linter scores to a percentage scale, so that really means a pylint score of less than 9.0).

So without messing around with shipping all our stuff off to hosted services, we get a pretty decent indicator of the test coverage and code quality of the pull request, and it shows up as failing tests if they’re not good enough.

It’s kind of overkill to run the coverage and linter on all the tested Python environments, but it is useful to do it at least on both Python 2 and 3, since the pylint results may differ, and the code might hit different paths. Running them on every minor version isn’t really necessary, but it doesn’t take that long so I’m not going to sweat it too much.

But that does bring me to the last refinement I made, because you can vary what tox does in different environments. One thing I wanted for fedfind was to run the tests not just on Python 2.6, but with the ancient versions of several dependencies that are found in RHEL / EPEL 6. And there’s also an interesting bug in pylint which makes it crash when running on fedfind under Python 3.6. So my tox.ini really looks like this:

[tox]
envlist = py26,py27,py34,py35,py36,py37
skip_missing_interpreters=true
[testenv]
deps=py27,py34,py35,py36,py37: -r{toxinidir}/install.requires
     py26: -r{toxinidir}/install.requires.py26
     py27,py34,py35,py36,py37: -r{toxinidir}/tests.requires
     py26: -r{toxinidir}/tests.requires.py26
     py27,py34,py35,py36,py37: -r{toxinidir}/tox.requires
     py26: -r{toxinidir}/tox.requires.py26
commands=py27,py34,py35,py36,py37: py.test --cov-report term-missing --cov-report xml --cov fedfind
         py26: py.test
         py27,py34,py35,py36,py37: diff-cover coverage.xml --fail-under=90
         # pylint breaks on functools imports in python 3.6+
         # https://github.com/PyCQA/astroid/issues/362
         py27,py34,py35: diff-quality --violations=pylint --fail-under=90
setenv =
    PYTHONPATH = {toxinidir}

As you can probably guess, what’s going on there is we’re installing different dependencies and running different commands in different tox ‘environments’. pip doesn’t really have a proper dependency solver, which – among other things – unfortunately means tox barfs if you try and do something like listing the same dependency twice, the first time without any version restriction, the second time with a version restriction. So I had to do a bit more duplication than I really wanted, but never mind. What the files wind up doing is telling tox to install specific, old versions of some dependencies for the py26 environment:

[install.requires.py26]
cached-property
productmd
setuptools == 0.6.rc10
six == 1.7.3

[tests.requires.py26]
pytest==2.3.5
mock==1.0.1

tox.requires.py26 is just shorter, skipping the coverage and pylint bits, because it turns out to be a pain trying to provide old enough versions of various other things to run those checks with the older pytest, and there’s no real need to run the coverage and linter on py26 as long as they run on py27 (see above). As you can see in the commands section, we just run plain py.test and skip the other two commands on py26; on py36 and py37 we skip the diff-quality run because of the pylint bug.

So now on every pull request, we check the code (and tests – it’s usually the tests that break, because I use some pytest feature that didn’t exist in 2.3.5…) still work with the ancient RHEL 6 Python, pytest, mock, setuptools and six, check it on various other Python interpreter versions, and enforce some requirements for test coverage and code quality. And the package builds can still just do python setup.py test and not require coverage or pylint. Who needs github and coveralls? 😉

Of course, after doing all this I needed a pull request to check it on. For resultsdb_conventions I just made a dumb fake one, but for fedfind, because I’m an idiot, I decided to write that better compose ID parser I’ve been meaning to do for the last week. So that took another hour and a half. And then I had to clean up the test suite…sigh.

Bluetooth in Fedora

Posted by Nathaniel McCallum on February 16, 2017 08:53 PM

So… Bluetooth. It’s everywhere now. Well, everywhere except Fedora. Fedora does, of course, support Bluetooth. But even the most common workflows are somewhat spotty. We should improve this.

To this end, I’ve enlisted the help of Don Zickus, kernel developer extraordinaire, and Adam Williamson, the inimitable Fedora QA guru. The plan is to create a set of user tests for the most common Bluetooth tasks. This plan has several goals.

First, we’d like to know when stuff is broken. For example, the recent breakage in linux-firmware. Catching this stuff early is a huge plus.

Second, we’d like to get high quality bug reports. When things do break, vague bug reports often cause things to sit in limbo for a while. Making sure we have all the debugging information up front can make reports actionable.

Third, we’d (eventually) like to block a new Fedora release if major functionality is broken. We’re obviously not ready for this step yet. But once the majority of workflows work on the hardware we care about, we need to ensure that we don’t ship a Fedora release with broken code.

To this end we are targeting three workflows which cover the most common cases:

  • Keyboards
  • Headsets
  • Mice

For more information, or to help develop the user testing, see the Fedora QA bug. Here’s to a better future!

Setting up a nested KVM guest for developing & testing PCI device assignment with NUMA

Posted by Daniel Berrange on February 16, 2017 12:44 PM

Over the past few years the OpenStack Nova project has gained support for managing VM usage of NUMA, huge pages and PCI device assignment. One of the more challenging aspects of this is availability of hardware to develop and test against. In the ideal world it would be possible to emulate everything we need using KVM, enabling developers / test infrastructure to exercise the code without needing access to bare metal hardware supporting these features. KVM has long had support for emulating NUMA topology in guests, and a guest OS can use huge pages inside the guest. What was missing were pieces around PCI device assignment, namely IOMMU support and the ability to associate NUMA nodes with PCI devices. Coincidentally a QEMU community member was already working on providing emulation of the Intel IOMMU. I made a request to the Red Hat KVM team to fill in the other missing gap related to NUMA / PCI device association. To do this required writing code to emulate a PCI/PCI-E Expander Bridge (PXB) device, which provides a lightweight host bridge that can be associated with a NUMA node. Individual PCI devices are then attached to this PXB instead of the main PCI host bridge, thus gaining affinity with a NUMA node. With this, it is now possible to configure a KVM guest such that it can be used as a virtual host to test NUMA, huge page and PCI device assignment integration. The only real outstanding gap is support for emulating some kind of SRIOV network device, but even without this, it is still possible to test most of the Nova PCI device assignment logic – we’re merely restricted to using physical functions, no virtual functions. This blog post will describe how to configure such a virtual host.

First of all, this requires very new libvirt & QEMU to work, specifically you’ll want libvirt >= 2.3.0 and QEMU >= 2.7.0. We could technically support earlier QEMU versions too, but that’s pending a patch to libvirt to deal with some command line syntax differences in QEMU for older versions. No currently released Fedora has new enough packages available, so even on Fedora 25, you must enable the “Virtualization Preview” repository on the physical host to try this out – F25 has new enough QEMU, so you just need a libvirt update.

# curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
# dnf upgrade

For the sake of illustration I’m using Fedora 25 as the OS inside the virtual guest, but any other Linux OS will do just fine. The initial task is to install a guest with 8 GB of RAM & 8 CPUs using virt-install:

# cd /var/lib/libvirt/images
# wget -O f25x86_64-boot.iso https://download.fedoraproject.org/pub/fedora/linux/releases/25/Server/x86_64/os/images/boot.iso
# virt-install --name f25x86_64  \
    --file /var/lib/libvirt/images/f25x86_64.img --file-size 20 \
    --cdrom f25x86_64-boot.iso --os-type fedora23 \
    --ram 8000 --vcpus 8 \
    ...

The guest needs to use host CPU passthrough to ensure the guest gets to see VMX as well as other modern instructions, and to have 3 virtual NUMA nodes. The first guest NUMA node will have 4 CPUs and 4 GB of RAM, while the second and third NUMA nodes will each have 2 CPUs and 2 GB of RAM. We are just going to let the guest float freely across host NUMA nodes since we don’t care about performance for dev/test, but in production you would certainly pin each guest NUMA node to a distinct host NUMA node.

    ...
    --cpu host,cell0.id=0,cell0.cpus=0-3,cell0.memory=4096000,\
               cell1.id=1,cell1.cpus=4-5,cell1.memory=2048000,\
               cell2.id=2,cell2.cpus=6-7,cell2.memory=2048000 \
    ...

QEMU emulates various different chipsets and historically for x86, the default has been to emulate the ancient PIIX4 (it is 20+ years old, dating from circa 1995). Unfortunately this is too ancient to be able to use the Intel IOMMU emulation with, so it is necessary to tell QEMU to emulate the marginally less ancient Q35 chipset (it is only 9 years old, dating from 2007).

    ...
    --machine q35

The complete virt-install command line thus looks like

# virt-install --name f25x86_64  \
    --file /var/lib/libvirt/images/f25x86_64.img --file-size 20 \
    --cdrom f25x86_64-boot.iso --os-type fedora23 \
    --ram 8000 --vcpus 8 \
    --cpu host,cell0.id=0,cell0.cpus=0-3,cell0.memory=4096000,\
               cell1.id=1,cell1.cpus=4-5,cell1.memory=2048000,\
               cell2.id=2,cell2.cpus=6-7,cell2.memory=2048000 \
    --machine q35

Once the installation is completed, shut down this guest since it will be necessary to make a number of changes to the guest XML configuration to enable features that virt-install does not know about, using “virsh edit“. With the use of Q35, the guest XML should initially show three PCI controllers present, a “pcie-root”, a “dmi-to-pci-bridge” and a “pci-bridge”

<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='dmi-to-pci-bridge'>
  <model name='i82801b11-bridge'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
</controller>
<controller type='pci' index='2' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='2'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</controller>

PCI endpoint devices are not themselves associated with NUMA nodes, rather the bus they are connected to has affinity. The default pcie-root is not associated with any NUMA node, but extra PCI-E Expander Bridge controllers can be added and associated with a NUMA node. So while in edit mode, add the following to the XML config

<controller type='pci' index='3' model='pcie-expander-bus'>
  <target busNr='180'>
    <node>0</node>
  </target>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</controller>
<controller type='pci' index='4' model='pcie-expander-bus'>
  <target busNr='200'>
    <node>1</node>
  </target>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</controller>
<controller type='pci' index='5' model='pcie-expander-bus'>
  <target busNr='220'>
    <node>2</node>
  </target>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>

It is not possible to plug PCI endpoint devices directly into the PXB, so the next step is to add PCI-E root ports into each PXB – we’ll need one port per device to be added, so 9 ports in total. This is where the requirement for libvirt >= 2.3.0 comes in – earlier versions mistakenly prevented you from adding more than one root port to the PXB:

<controller type='pci' index='6' model='pcie-root-port'>
  <model name='ioh3420'/>
  <target chassis='6' port='0x0'/>
  <alias name='pci.6'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
  <model name='ioh3420'/>
  <target chassis='7' port='0x8'/>
  <alias name='pci.7'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x01' function='0x0'/>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
  <model name='ioh3420'/>
  <target chassis='8' port='0x10'/>
  <alias name='pci.8'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x02' function='0x0'/>
</controller>
<controller type='pci' index='9' model='pcie-root-port'>
  <model name='ioh3420'/>
  <target chassis='9' port='0x0'/>
  <alias name='pci.9'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>
<controller type='pci' index='10' model='pcie-root-port'>
  <model name='ioh3420'/>
  <target chassis='10' port='0x8'/>
  <alias name='pci.10'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x01' function='0x0'/>
</controller>
<controller type='pci' index='11' model='pcie-root-port'>
  <model name='ioh3420'/>
  <target chassis='11' port='0x10'/>
  <alias name='pci.11'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x02' function='0x0'/>
</controller>
<controller type='pci' index='12' model='pcie-root-port'>
  <model name='ioh3420'/>
  <target chassis='12' port='0x0'/>
  <alias name='pci.12'/>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</controller>
<controller type='pci' index='13' model='pcie-root-port'>
  <model name='ioh3420'/>
  <target chassis='13' port='0x8'/>
  <alias name='pci.13'/>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x01' function='0x0'/>
</controller>
<controller type='pci' index='14' model='pcie-root-port'>
  <model name='ioh3420'/>
  <target chassis='14' port='0x10'/>
  <alias name='pci.14'/>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x02' function='0x0'/>
</controller>

Notice that the value of the ‘bus’ attribute on the <address> element matches the value of the ‘index’ attribute on the <controller> element of the parent device in the topology. The PCI controller topology now looks like this:

pcie-root (index == 0)
  |
  +- dmi-to-pci-bridge (index == 1)
  |    |
  |    +- pci-bridge (index == 2)
  |
  +- pcie-expander-bus (index == 3, numa node == 0)
  |    |
  |    +- pcie-root-port (index == 6)
  |    +- pcie-root-port (index == 7)
  |    +- pcie-root-port (index == 8)
  |
  +- pcie-expander-bus (index == 4, numa node == 1)
  |    |
  |    +- pcie-root-port (index == 9)
  |    +- pcie-root-port (index == 10)
  |    +- pcie-root-port (index == 11)
  |
  +- pcie-expander-bus (index == 5, numa node == 2)
       |
       +- pcie-root-port (index == 12)
       +- pcie-root-port (index == 13)
       +- pcie-root-port (index == 14)

All the existing devices are attached to the “pci-bridge” (the controller with index == 2). The devices we intend to use for PCI device assignment inside the virtual host will be attached to the new “pcie-root-port” controllers. We will provide 3 e1000e NICs per NUMA node, so that’s 9 devices in total to add:

<interface type='user'>
  <mac address='52:54:00:7e:6e:c6'/>
  <model type='e1000e'/>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</interface>
<interface type='user'>
  <mac address='52:54:00:7e:6e:c7'/>
  <model type='e1000e'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</interface>
<interface type='user'>
  <mac address='52:54:00:7e:6e:c8'/>
  <model type='e1000e'/>
  <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
</interface>
<interface type='user'>
  <mac address='52:54:00:7e:6e:d6'/>
  <model type='e1000e'/>
  <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
</interface>
<interface type='user'>
  <mac address='52:54:00:7e:6e:d7'/>
  <model type='e1000e'/>
  <address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
</interface>
<interface type='user'>
  <mac address='52:54:00:7e:6e:d8'/>
  <model type='e1000e'/>
  <address type='pci' domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
</interface>
<interface type='user'>
  <mac address='52:54:00:7e:6e:e6'/>
  <model type='e1000e'/>
  <address type='pci' domain='0x0000' bus='0x0c' slot='0x00' function='0x0'/>
</interface>
<interface type='user'>
  <mac address='52:54:00:7e:6e:e7'/>
  <model type='e1000e'/>
  <address type='pci' domain='0x0000' bus='0x0d' slot='0x00' function='0x0'/>
</interface>
<interface type='user'>
  <mac address='52:54:00:7e:6e:e8'/>
  <model type='e1000e'/>
  <address type='pci' domain='0x0000' bus='0x0e' slot='0x00' function='0x0'/>
</interface>

Note that we’re using the “user” networking, aka SLIRP. Normally one would never want to use SLIRP but we don’t care about actually sending traffic over these NICs, and so using SLIRP avoids polluting our real host with countless TAP devices.

The final configuration change is to simply add the Intel IOMMU device

<iommu model='intel'/>

It is a capability integrated into the chipset, so it does not need any <address> element of its own. At this point, save the config and start the guest once more. Use the “virsh domifaddr” command to discover the IP address of the guest’s primary NIC and ssh into it.

# virsh domifaddr f25x86_64
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:10:26:7e    ipv4         192.168.122.3/24

# ssh root@192.168.122.3

We can now do some sanity check that everything visible in the guest matches what was enabled in the libvirt XML config in the host. For example, confirm the NUMA topology shows 3 nodes

# dnf install numactl
# numactl --hardware
available: 3 nodes (0-2)
node 0 cpus: 0 1 2 3
node 0 size: 3856 MB
node 0 free: 3730 MB
node 1 cpus: 4 5
node 1 size: 1969 MB
node 1 free: 1813 MB
node 2 cpus: 6 7
node 2 size: 1967 MB
node 2 free: 1832 MB
node distances:
node   0   1   2 
  0:  10  20  20 
  1:  20  10  20 
  2:  20  20  10 

Confirm that the PCI topology shows the three PCI-E Expander Bridge devices, each with three NICs attached

# lspci -t -v
-+-[0000:dc]-+-00.0-[dd]----00.0  Intel Corporation 82574L Gigabit Network Connection
 |           +-01.0-[de]----00.0  Intel Corporation 82574L Gigabit Network Connection
 |           \-02.0-[df]----00.0  Intel Corporation 82574L Gigabit Network Connection
 +-[0000:c8]-+-00.0-[c9]----00.0  Intel Corporation 82574L Gigabit Network Connection
 |           +-01.0-[ca]----00.0  Intel Corporation 82574L Gigabit Network Connection
 |           \-02.0-[cb]----00.0  Intel Corporation 82574L Gigabit Network Connection
 +-[0000:b4]-+-00.0-[b5]----00.0  Intel Corporation 82574L Gigabit Network Connection
 |           +-01.0-[b6]----00.0  Intel Corporation 82574L Gigabit Network Connection
 |           \-02.0-[b7]----00.0  Intel Corporation 82574L Gigabit Network Connection
 \-[0000:00]-+-00.0  Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
             +-01.0  Red Hat, Inc. QXL paravirtual graphic card
             +-02.0  Red Hat, Inc. Device 000b
             +-03.0  Red Hat, Inc. Device 000b
             +-04.0  Red Hat, Inc. Device 000b
             +-1d.0  Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1
             +-1d.1  Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2
             +-1d.2  Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3
             +-1d.7  Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1
             +-1e.0-[01-02]----01.0-[02]--+-01.0  Red Hat, Inc Virtio network device
             |                            +-02.0  Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller
             |                            +-03.0  Red Hat, Inc Virtio console
             |                            +-04.0  Red Hat, Inc Virtio block device
             |                            \-05.0  Red Hat, Inc Virtio memory balloon
             +-1f.0  Intel Corporation 82801IB (ICH9) LPC Interface Controller
             +-1f.2  Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode]
             \-1f.3  Intel Corporation 82801I (ICH9 Family) SMBus Controller

The IOMMU support will not be enabled yet as the kernel defaults to leaving it off. To enable it, we must update the kernel command line parameters with grub.

# vi /etc/default/grub
....add "intel_iommu=on"...
# grub2-mkconfig > /etc/grub2.cfg

While the intel-iommu device in QEMU can do interrupt remapping, there is no way to enable that feature via libvirt at this time. So we need to set a hack for vfio:

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > \
  /etc/modprobe.d/vfio.conf

This is also a good time to install libvirt and KVM inside the guest

# dnf groupinstall "Virtualization"
# dnf install libvirt-client
# rm -f /etc/libvirt/qemu/networks/autostart/default.xml

Note we’re disabling the default libvirt network, since it’ll clash with the IP address range used by this guest. An alternative would be to edit the default.xml to change the IP subnet.

Now reboot the guest. When it comes back up, there should be a /dev/kvm device present in the guest.

# ls -al /dev/kvm
crw-rw-rw-. 1 root kvm 10, 232 Oct  4 12:14 /dev/kvm

If this is not the case, make sure the physical host has nested virtualization enabled for the “kvm-intel” or “kvm-amd” kernel modules.
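As a quick sanity check on the physical host (a sketch, assuming Intel hardware – use kvm_amd / kvm-amd instead on AMD): the first command should print Y or 1 if nested virtualization is already on, and the remaining ones enable it via a module option (reload the module, or simply reboot the host, for it to take effect):

# cat /sys/module/kvm_intel/parameters/nested
# echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-nested.conf
# modprobe -r kvm_intel && modprobe kvm_intel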

The IOMMU should have been detected and activated

# dmesg  | grep -i DMAR
[    0.000000] ACPI: DMAR 0x000000007FFE2541 000048 (v01 BOCHS  BXPCDMAR 00000001 BXPC 00000001)
[    0.000000] DMAR: IOMMU enabled
[    0.203737] DMAR: Host address width 39
[    0.203739] DMAR: DRHD base: 0x000000fed90000 flags: 0x1
[    0.203776] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 12008c22260206 ecap f02
[    2.910862] DMAR: No RMRR found
[    2.910863] DMAR: No ATSR found
[    2.914870] DMAR: dmar0: Using Queued invalidation
[    2.914924] DMAR: Setting RMRR:
[    2.914926] DMAR: Prepare 0-16MiB unity mapping for LPC
[    2.915039] DMAR: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[    2.915140] DMAR: Intel(R) Virtualization Technology for Directed I/O

The key message confirming everything is good is the last line there – if that’s missing, something went wrong – don’t be misled by the earlier “DMAR: IOMMU enabled” line, which merely says the kernel saw the “intel_iommu=on” command line option.

The IOMMU should also have registered the PCI devices into various groups

# dmesg  | grep -i iommu  |grep device
[    2.915212] iommu: Adding device 0000:00:00.0 to group 0
[    2.915226] iommu: Adding device 0000:00:01.0 to group 1
...snip...
[    5.588723] iommu: Adding device 0000:b5:00.0 to group 14
[    5.588737] iommu: Adding device 0000:b6:00.0 to group 15
[    5.588751] iommu: Adding device 0000:b7:00.0 to group 16

Libvirt meanwhile should have detected all the PCI controllers/devices

# virsh nodedev-list --tree
computer
  |
  +- net_lo_00_00_00_00_00_00
  +- pci_0000_00_00_0
  +- pci_0000_00_01_0
  +- pci_0000_00_02_0
  +- pci_0000_00_03_0
  +- pci_0000_00_04_0
  +- pci_0000_00_1d_0
  |   |
  |   +- usb_usb2
  |       |
  |       +- usb_2_0_1_0
  |         
  +- pci_0000_00_1d_1
  |   |
  |   +- usb_usb3
  |       |
  |       +- usb_3_0_1_0
  |         
  +- pci_0000_00_1d_2
  |   |
  |   +- usb_usb4
  |       |
  |       +- usb_4_0_1_0
  |         
  +- pci_0000_00_1d_7
  |   |
  |   +- usb_usb1
  |       |
  |       +- usb_1_0_1_0
  |       +- usb_1_1
  |           |
  |           +- usb_1_1_1_0
  |             
  +- pci_0000_00_1e_0
  |   |
  |   +- pci_0000_01_01_0
  |       |
  |       +- pci_0000_02_01_0
  |       |   |
  |       |   +- net_enp2s1_52_54_00_10_26_7e
  |       |     
  |       +- pci_0000_02_02_0
  |       +- pci_0000_02_03_0
  |       +- pci_0000_02_04_0
  |       +- pci_0000_02_05_0
  |         
  +- pci_0000_00_1f_0
  +- pci_0000_00_1f_2
  |   |
  |   +- scsi_host0
  |   +- scsi_host1
  |   +- scsi_host2
  |   +- scsi_host3
  |   +- scsi_host4
  |   +- scsi_host5
  |     
  +- pci_0000_00_1f_3
  +- pci_0000_b4_00_0
  |   |
  |   +- pci_0000_b5_00_0
  |       |
  |       +- net_enp181s0_52_54_00_7e_6e_c6
  |         
  +- pci_0000_b4_01_0
  |   |
  |   +- pci_0000_b6_00_0
  |       |
  |       +- net_enp182s0_52_54_00_7e_6e_c7
  |         
  +- pci_0000_b4_02_0
  |   |
  |   +- pci_0000_b7_00_0
  |       |
  |       +- net_enp183s0_52_54_00_7e_6e_c8
  |         
  +- pci_0000_c8_00_0
  |   |
  |   +- pci_0000_c9_00_0
  |       |
  |       +- net_enp201s0_52_54_00_7e_6e_d6
  |         
  +- pci_0000_c8_01_0
  |   |
  |   +- pci_0000_ca_00_0
  |       |
  |       +- net_enp202s0_52_54_00_7e_6e_d7
  |         
  +- pci_0000_c8_02_0
  |   |
  |   +- pci_0000_cb_00_0
  |       |
  |       +- net_enp203s0_52_54_00_7e_6e_d8
  |         
  +- pci_0000_dc_00_0
  |   |
  |   +- pci_0000_dd_00_0
  |       |
  |       +- net_enp221s0_52_54_00_7e_6e_e6
  |         
  +- pci_0000_dc_01_0
  |   |
  |   +- pci_0000_de_00_0
  |       |
  |       +- net_enp222s0_52_54_00_7e_6e_e7
  |         
  +- pci_0000_dc_02_0
      |
      +- pci_0000_df_00_0
          |
          +- net_enp223s0_52_54_00_7e_6e_e8

And if you look at a specific PCI device, it should report the NUMA node it is associated with and the IOMMU group it is part of:

# virsh nodedev-dumpxml pci_0000_df_00_0
<device>
  <name>pci_0000_df_00_0</name>
  <path>/sys/devices/pci0000:dc/0000:dc:02.0/0000:df:00.0</path>
  <parent>pci_0000_dc_02_0</parent>
  <driver>
    <name>e1000e</name>
  </driver>
  <capability type='pci'>
    <domain>0</domain>
    <bus>223</bus>
    <slot>0</slot>
    <function>0</function>
    <product id='0x10d3'>82574L Gigabit Network Connection</product>
    <vendor id='0x8086'>Intel Corporation</vendor>
    <iommuGroup number='10'>
      <address domain='0x0000' bus='0xdc' slot='0x02' function='0x0'/>
      <address domain='0x0000' bus='0xdf' slot='0x00' function='0x0'/>
    </iommuGroup>
    <numa node='2'/>
    <pci-express>
      <link validity='cap' port='0' speed='2.5' width='1'/>
      <link validity='sta' speed='2.5' width='1'/>
    </pci-express>
  </capability>
</device>

Finally, libvirt should also be reporting the NUMA topology

# virsh capabilities
...snip...
<topology>
  <cells num='3'>
    <cell id='0'>
      <memory unit='KiB'>4014464</memory>
      <pages unit='KiB' size='4'>1003616</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <pages unit='KiB' size='1048576'>0</pages>
      <distances>
        <sibling id='0' value='10'/>
        <sibling id='1' value='20'/>
        <sibling id='2' value='20'/>
      </distances>
      <cpus num='4'>
        <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
        <cpu id='1' socket_id='1' core_id='0' siblings='1'/>
        <cpu id='2' socket_id='2' core_id='0' siblings='2'/>
        <cpu id='3' socket_id='3' core_id='0' siblings='3'/>
      </cpus>
    </cell>
    <cell id='1'>
      <memory unit='KiB'>2016808</memory>
      <pages unit='KiB' size='4'>504202</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <pages unit='KiB' size='1048576'>0</pages>
      <distances>
        <sibling id='0' value='20'/>
        <sibling id='1' value='10'/>
        <sibling id='2' value='20'/>
      </distances>
      <cpus num='2'>
        <cpu id='4' socket_id='4' core_id='0' siblings='4'/>
        <cpu id='5' socket_id='5' core_id='0' siblings='5'/>
      </cpus>
    </cell>
    <cell id='2'>
      <memory unit='KiB'>2014644</memory>
      <pages unit='KiB' size='4'>503661</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <pages unit='KiB' size='1048576'>0</pages>
      <distances>
        <sibling id='0' value='20'/>
        <sibling id='1' value='20'/>
        <sibling id='2' value='10'/>
      </distances>
      <cpus num='2'>
        <cpu id='6' socket_id='6' core_id='0' siblings='6'/>
        <cpu id='7' socket_id='7' core_id='0' siblings='7'/>
      </cpus>
    </cell>
  </cells>
</topology>
...snip...

Everything should be ready and working at this point, so let’s try and install a nested guest, and assign it one of the e1000e PCI devices. For simplicity we’ll just do the exact same install for the nested guest as we used for the top level guest we’re currently running in. The only difference is that we’ll assign it a PCI device:

# cd /var/lib/libvirt/images
# wget -O f25x86_64-boot.iso https://download.fedoraproject.org/pub/fedora/linux/releases/25/Server/x86_64/os/images/boot.iso
# virt-install --name f25x86_64 --ram 2000 --vcpus 8 \
    --file /var/lib/libvirt/images/f25x86_64.img --file-size 10 \
    --cdrom f25x86_64-boot.iso --os-type fedora23 \
    --hostdev pci_0000_df_00_0 --network none

If everything went well, you should now have a nested guest with an assigned PCI device attached to it.

This turned out to be a rather long blog posting, but this is not surprising as we’re experimenting with some cutting-edge KVM features, trying to emulate quite a complicated hardware setup that deviates from a normal KVM guest setup quite a way. Perhaps in the future virt-install will be able to simplify some of this, but at least for the short-medium term there’ll be a fair bit of work required. The positive thing though is that this has clearly demonstrated that KVM is now advanced enough that you can reasonably expect to do development and testing of features like NUMA and PCI device assignment inside nested guests.

The next step is to convince someone to add QEMU emulation of an Intel SRIOV network device….volunteers please :-)

Install old Skype package into Fedora 25.

Posted by mythcat on February 16, 2017 11:55 AM
This is an old package of Skype that can be downloaded from this link: skype Fedora 16 - 32 bit.
Install it with the dnf command:

[root@localhost Downloads]# dnf install skype-4.3.0.37-fedora.i586.rpm
Last metadata expiration check: 2:47:29 ago on Wed Feb 15 12:56:31 2017.
Dependencies resolved.
================================================================================
 Package              Arch   Version                              Repository
                                                                           Size
================================================================================
Installing:
 alsa-lib             i686   1.1.1-2.fc25                         fedora  411 k
 alsa-plugins-pulseaudio
                      i686   1.1.1-1.fc25                         fedora   45 k
 bzip2-libs           i686   1.0.6-21.fc25                        updates  44 k
 cairo                i686   1.14.8-1.fc25                        updates 750 k
 ...
 xz-libs              i686   5.2.2-2.fc24                         fedora   98 k
 zlib                 i686   1.2.8-10.fc24                        fedora   98 k

Transaction Summary
================================================================================
Install  104 Packages

Total size: 90 M
Total download size: 71 M
Installed size: 264 M
Is this ok [y/N]: y
...
  sni-qt.i686 0.2.6-7.fc24                                                     
  sqlite-libs.i686 3.14.2-1.fc25                                               
  systemd-libs.i686 231-12.fc25                                                
  tcp_wrappers-libs.i686 7.6-83.fc25                                           
  xz-libs.i686 5.2.2-2.fc24                                                    
  zlib.i686 1.2.8-10.fc24                                                      

Complete!

To run Skype, just use the skype command in the Linux shell:
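For example, from a terminal:

[mythcat@localhost ~]$ skype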


ANNOUNCE: libosinfo 1.0.0 release

Posted by Daniel Berrange on February 16, 2017 11:19 AM

NB, this blog post was intended to be published back in November last year, but got forgotten in draft stage. Publishing now in case anyone missed the release…

I am happy to announce a new release of libosinfo, version 1.0.0 is now available, signed with key DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF (4096R). All historical releases are available from the project download page.

Changes in this release include:

  • Update loader to follow new layout for external database
  • Move all database files into separate osinfo-db package
  • Move osinfo-db-validate into osinfo-db-tools package

As promised, this release of libosinfo has completed the separation of the library code from the database files. There are now three independently released artefacts:

  • libosinfo – provides the libosinfo shared library and most associated command line tools
  • osinfo-db – contains only the database XML files and RNG schema, no code at all.
  • osinfo-db-tools – a set of command line tools for managing deployment of osinfo-db archives for vendors & users.

Before installing the 1.0.0 release of libosinfo it is necessary to install osinfo-db-tools, followed by osinfo-db. The download page has instructions for how to deploy the three components. In particular note that ‘osinfo-db’ does NOT contain any traditional build system, as the only files it contains are XML database files. So instead of unpacking the osinfo-db archive, use the osinfo-db-import tool to deploy it.
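For example, deploying a downloaded archive system-wide might look like this – just a sketch, where the archive filename is a placeholder for whichever osinfo-db release you grabbed, and osinfo-db-import also has options for local and per-user deployments:

# osinfo-db-import --system osinfo-db-20161026.tar.xz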

Parsing web server access logs

Posted by Peter Czanik on February 16, 2017 10:18 AM

If you operate web servers, you want to have insight about your traffic. Traditional solutions to process access logs include:

  • scripts to create nightly reports with tools like AWStats
  • run a JavaScript snippet on each page load, like Google Analytics,
  • or combine the two methods, like Piwik.

But if you want to use your logs in operation, you are better off using syslog-ng and message parsing, as it gives you a lot more flexibility.

Access logs have a columnar data format, where Space acts as the delimiter between separate fields in the log message. Each message has the same information: the client address, the authenticated user, the time, and so on.

127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326

Logs without parsing are not really useful in syslog-ng, since you can only forward or store them for subsequent processing. But if you parse your web server logs in real time instead of using daily or hourly reports, you can react to events as they happen.

The apache-access-log parser of syslog-ng creates a new name-value pair for each field of the log message, and does some additional parsing to get more information.

The apache-accesslog-parser()

When you have generic columnar logs (for example, a list of tab-separated or comma-separated values), you can parse those using the CSV parser in syslog-ng. For your Apache access logs (or any other web server that uses the Common or Combined log format) you can use the Apache Access Log Parser. It has been fine-tuned to correctly handle access logs, so you should use this instead of the generic parser to save yourself some time.

Make sure that you are running at least syslog-ng version 3.8, and that the following line is included in your syslog-ng.conf:

@include "scl.conf"

(Scl.conf refers to the syslog-ng configuration library. You can read more about the power of SCL in this blogpost from Balázs Scheidler and on reusing configuration blocks in the documentation.)

Using the apache-accesslog-parser()

Let’s look at the following example. There is an optional parameter, prefix(), which allows you to configure what prefix you would like to use in front of the freshly created name-value pairs. By default it is “.apache.”. The format-json template function replaces the leading dot with an underscore. You can obviously change this if you are forwarding logs to an application where fields beginning with an underscore have a special meaning, for example, in Elasticsearch.

parser parser_name {
    apache-accesslog-parser(
        prefix("apache.")
    );
};

Log sources

Traditionally, access logs arrive in syslog-ng through file sources. Logging to files is the default in both the Apache and Nginx web servers. The drawback of this solution is that log messages are stored twice: once by the web server and once by syslog-ng. You also need to rotate the log files. Fortunately, there are other methods which help you avoid this overhead.

Apache httpd supports writing log messages into a pipe, and syslog-ng can read from pipes. In this case, instead of using an intermediary file, Apache sends the logs directly to syslog-ng through the pipe.

Nginx can use the old BSD syslog protocol to send logs through a UDP connection. It is not state of the art, and can lead to message loss if your web server has high traffic. Still, it can simplify your logging infrastructure considerably.
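A rough sketch of that setup, where the port and address are just examples: on the nginx side, point access_log at syslog, and on the syslog-ng side open a matching UDP source (nginx adds a syslog header itself, so no no-parse flag is needed here):

# nginx.conf: send access logs to a local syslog-ng instance over UDP
access_log syslog:server=127.0.0.1:514 combined;

# syslog-ng.conf: receive them with the legacy BSD syslog UDP driver
source s_nginx {
  udp(port(514));
};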

Note that when you use a file or pipe source, the message arrives without a syslog header. This means that you have to use flags(no-parse) in the source, otherwise syslog-ng tries to interpret it as a syslog message and you will get unexpected results.

source s_access {
  file("/var/log/httpd/access_log" flags(no-parse));
};

Using virtual hosts

The method above works perfectly if you only have a single website. If you have multiple websites (virtual servers) that use the same web server, then there is a problem: the name of the virtual server is not included in the log message. You either need to define many log files both on the web server and in syslog-ng (well, if you are using syslog-ng Premium Edition, then you can simply use wildcards in the source path), or you lose a critical piece of information: the name of the virtual host. Alternatively, you can define your own log format.

In the case of Apache httpd, add “%v” to your log format definition to include the virtual host name in the logs. For details and other possibilities, check the Apache documentation about logging.
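For example, a LogFormat along these lines – a sketch only, where vhost_combined is an arbitrary name and the format is simply the standard combined format with %v prepended; point your existing CustomLog at it:

LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
CustomLog "logs/access_log" vhost_combined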

Obviously, if you have a new field in your log file, you also need to add it to the parser configuration. You can find the Apache parser in the SCL directory. In the case of openSUSE, the file is /usr/share/syslog-ng/include/scl/apache/apache.conf, and it should be similar in other distributions. You need to add the new field name, matching the field order of your Apache log format, to this part of the config:

# field names match those of Logstash
columns("clientip", "ident", "auth",
  "timestamp", "rawrequest", "response",
  "bytes", "referrer", "agent"));

Example configuration

Here is a complete example syslog-ng configuration. This one reads the web server logs from a file, parses them with the apache-accesslog-parser() and sends the results to Elasticsearch. There is also a JSON file destination, commented out in the log path, which can be used for debugging.

# source: apache access log file
source s_access {
  file("/var/log/httpd/access_log" flags(no-parse));
};

# destination: elasticsearch server
destination d_elastic {
  elasticsearch2 (
    cluster("syslog-ng")
    client_mode("http")
    index("syslog-ng")
    type("test")
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
  )
};

# destination: JSON format with same content as to Elasticsearch
destination d_json {
  file("/var/log/test.json"
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)\n\n"));
};

# parser for apache access log
parser p_access {
  apache-accesslog-parser(
    prefix("apache.")
  );
};

# magic happens here: all building blocks connected together
log {
  source(s_access);
  parser(p_access);
  # destination(d_json);
  destination(d_elastic);
};

If you want to try this on your web server, install syslog-ng 3.8.1 or later. If this is not in your distribution, you can download it from here. For further ideas on processing your logs, see some of my earlier posts.

Are you stuck?

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter I am available as @PCzanik.

The post Parsing web server access logs appeared first on Balabit Blog.

Hello, Modern Paste!

Posted by Fedora Magazine on February 16, 2017 08:00 AM

Fedora offers a pastebin service for its users and contributors. A pastebin lets you save text on a website for a length of time. This helps you exchange data easily with other users. For example, you can post error messages for help with a bug or other issue.

If you use Fedora’s fpaste pastebin service, you’re in for some exciting changes. Fedora has switched to Modern Paste for your pastebin needs.

Welcome our new Modern Paste overlord

Modern Paste is an actively maintained, free software pastebin app. It’s written in Python and aims to be “visually pleasing, feature-rich, [and] mobile friendly.” It features a pleasant user interface, built on top of popular JavaScript libraries. For instance, it uses Code Mirror for syntax highlighting.

Demo of Modern Paste with code snippet

The Fedora team will soon integrate Modern Paste with the Fedora Accounts System (FAS). When that happens, contributors will be able to control their pastes, view old pastes, and delete them at will. They’ll also be able to attach small binary files like screenshots to pastes.

Best of all, the fpaste command line tool that comes with Fedora works without interruption. All Fedora users can continue to use fpaste to get help in community support forums.
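For instance, piping output from a command straight to the pastebin works as it always has (the file name in the second example is just an illustration):

dmesg | fpaste
fpaste ~/mock-build.log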

We invite you to check out the new service. We also invite you to report issues to the Fedora Infrastructure team. You can find the team in the #fedora-apps channel on IRC Freenode.

The Fedora team would also like to thank the upstream project for their assistance and collaboration.

Announcing the resultsdb-users mailing list

Posted by Adam Williamson on February 16, 2017 01:28 AM

I’ve been floating an idea around recently to people who are currently using ResultsDB in some sense – either sending reports to it, or consuming reports from it – or plan to do so. The idea was to have a group where we can discuss (and hopefully co-ordinate) use of ResultsDB – a place to talk about result metadata conventions and so forth.

It seemed to get a bit of traction, so I’ve created a new mailing list: resultsdb-users. If you’re interested, please do subscribe, through the web interface, or by sending a mail with ‘subscribe’ in the subject to this address.

If you’re not familiar with ResultsDB – well, it’s a generic storage engine for test results. It’s more or less a database with a REST API and some very minimal rules for what constitutes a ‘test result’. The only requirements really are some kind of test name plus a result, chosen from four options; results can include any other arbitrary key:value pairs you like, and a few have special meaning in the web UI, but that’s about it. This is one of the reasons for the new list: because ResultsDB is so generic, if we want to make it easily and reliably possible to find related groups of results in any given ResultsDB, we need to come up with ways to ensure related results share common metadata values, and that’s one of the things I expect we’ll be talking about on the list.

It began life as Taskotron‘s result storage engine, but it’s pretty independent, and you could certainly get value out of a ResultsDB instance without any of the other bits of Taskotron.

Right now ResultsDB is used in production in Fedora for storing results from Taskotron, openQA and Autocloud, and an instance is also used inside Red Hat for storing results from some RH test systems.

Please note: despite the list being a fedoraproject one, the intent is to co-ordinate with folks from CentOS, Red Hat and maybe even further afield as well; we’re just using an fp.o list as it’s a quick convenient way to get a nice mailman3/hyperkitty list without having to go set up a list server on taskotron.org or something.

IMPORTANT REMINDER: EL 5 is EOL on March 31, 2017

Posted by Stephen Smoogen on February 15, 2017 10:59 PM
This is probably my final reminder on this before April 3rd 2017. As listed at https://access.redhat.com/support/policy/updates/errata and https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Product_life_cycle Red Hat Enterprise Linux will be exiting "Production Phase 3", and CentOS will be archiving off old EL-5 releases.

At that point, all remaining EPEL-5 packages will be archived to /pub/archive/epel/5 for systems to get data from. No new updates or packages will be done after that.

2.4.0

Posted by Bodhi on February 15, 2017 09:56 PM

Bodhi 2.4.0 is a feature and bugfix release.

Features

  • #999: The web interface now displays whether an update has autopush enabled.
  • #1191: Autopush is now disabled on any update that receives authenticated negative karma.
  • #1246: Bodhi now links to Koji builds via TLS instead of plaintext.
  • Some usage examples have been added to the bodhi man page.
  • Bodhi's server package has a new script called bodhi-clean-old-mashes that can recursively delete any folders with names that end in a dash followed by a string that can be interpreted as a float, sparing the newest 10 by lexicographical sorting. This should help release engineers keep the Koji mashing folder clean.
  • There is now a bodhi.client.bindings module provided by the Bodhi client package. It contains Python bindings to Bodhi's REST API.
  • The bodhi CLI now prints autokarma and thresholds when displaying updates.
  • bodhi-push now has a --version flag.
  • There are now man pages for bodhi-push and initialize_bodhi_db.

Bugs

  • #902: Users' e-mail addresses will now be updated when they log in to Bodhi.
  • #908: The masher now tests for repomd.xml instead of the directory that contains it.
  • #1018: Users can now only upvote an update once.
  • #1009: Only comment on non-autokarma updates when they meet testing requirements.
  • #1048: Autokarma can no longer be set to NULL.
  • #1064: Users can now be more fickle than ever about karma.
  • #1065: Critical path updates can now be free of past negative karma ghosts.
  • #1094: Bodhi now comments on non-autokarma updates after enough time has passed.
  • #1107: bodhi-push now does not crash when users abort a push.
  • #1113: bodhi-push now does not print updates when resuming a push.
  • #1146: Bodhi now says "Log in" and "Log out" instead of "Login" and "Logout".
  • #1201: Bodhi now configures the Koji client to retry, which should help make the masher more reliable.
  • #1262: Bodhi is now compatible with Pillow-4.0.0.
  • #1408195: The bodhi cli no longer prints update JSON when setting the request.
  • Bodhi's signed handler now skips builds that were not assigned to a release.
  • The comps file is now cloned into an explicit path during mashing.
  • The buildsystem is now locked during login.

Development improvements

  • A great deal of tests were written for Bodhi. Test coverage is now up to 81% and is enforced by the test suite.
  • Bodhi's server code is now PEP-8 compliant.
  • The docs now contain contribution guidelines.
  • The build system will now fail with a useful Exception if used without being set up.
  • The Vagrantfile is a good bit fancier, with hostname, dnf caching, unsafe but performant disk I/O, and more.
  • The docs now include a database schema image.
  • Bodhi is now run by systemd in the Vagrant guest.
  • The Vagrant environment now has several helpful shell aliases and a helpful MOTD to advertise them to developers.
  • The development environment now uses Fedora 25 by default.
  • The test suite is less chatty, as several unicode warnings have been fixed.

Dependency change

  • Bodhi server now depends on click for bodhi-push.

Release contributors

The following contributors submitted patches for Bodhi 2.4.0:

  • Trishna Guha
  • Patrick Uiterwijk
  • Jeremy Cline
  • Till Mass
  • Josef Sukdol
  • Clement Verna
  • andreas
  • Ankit Raj Ojha
  • Randy Barlow

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on February 15, 2017 08:49 PM
Service 'Fedora pastebin service' now has status: scheduled: We are moving to Modern Paste!

Today I Learned: Searching in Vim

Posted by Lubomír Sedlář on February 15, 2017 06:36 PM

Ok, I lied. Searching in Vim is something I do all the time, so it's not such a new thing. However, there is a feature that I only need every once in a while, and I always forget how to do it.

The goal is to highlight something in the code: you can just search for it, and if hlsearch is on, it will shine with the light of a thousand suns.

But what if you need some context to match the snippet, but only want to highlight part of the match?

Search no more. The magic is done by including \zs or \ze in the search pattern. These snippets do not affect what is matched, but they set the start and end of the match respectively.
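A quick illustration (my own example): the pattern below only matches bar when it appears as foo.bar(, but thanks to \zs and \ze only bar itself is highlighted:

:set hlsearch
/foo\.\zsbar\ze(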

DevConf.cz 2017

Posted by Randy Barlow on February 15, 2017 03:52 PM

I had the pleasure of attending and speaking at DevConf.cz this year. It was also my first visit to the Czech Republic, which was fun for me since my grandfather's family originates from Radnice.

Friday

Friday was the first day of the conference. We got up bright and early (well, maybe not bright…) and headed over to the venue. I spent a fair amount of time on Friday attending talks.

I started with the keynote, presented by a variety of speakers representing a wide range of Red Hat's products. The keynote told a narrative of going from unboxed, racked servers to deploying code live from Eclipse to production on those servers (and all the steps in between).

Next I attended "Generational Core - The Future of Fedora?" by Petr Sabata. Petr presented about Fedora's modular future and how Factory 2.0 fits into the picture.

I then went to see Adam Šamalík and Courtney Pacheco give us another modularity talk. This talk presented more detail about the build system and design of Fedora modules.

I spent the afternoon eating a delicious hamburger provided by the conference. Well, not really the whole afternoon. I snuck in a little bit of hallway track time and worked a bit with Pierre-Yves Chibon on the talk we would give on Saturday.

Saturday

I started Saturday by attending the keynote talk about Fedora, Red Hat Enterprise Linux, CentOS, and how they are all better together. It was a wonderful narrative about how each of the projects/products contributes to one another's success. Well done!

I spent Saturday morning putting the finishing touches on my two talks that I'd give that day. The plans for how we would mirror container images had been in flux, and the slides I had prepared weren't quite accurate anymore so I had to put some last minute corrections into them.

I then gave both of my talks, back to back! It was a rush. I felt really good about both talks. There seemed to be a lot of interest in the Bodhi talk. Pierre demonstrated some neat ideas for integrating Pagure and Bodhi, and we had a pretty lively Q&A session. I had to rush over to my next talk, which was in a different building. This one had a less lively Q&A session, but I think it went well too. You can read more about my talks here.

After my talks were done I had a rush of relief as I no longer had to worry if I'd done enough preparation for two talks (note to self: perhaps only pitch one talk next year…)

I spent the afternoon hacking on ejabberd with the legendary Peter Lemenkov. He had been working on some patches to add Kerberos support to ejabberd and wanted to try them out. We built some boxes in Fedora's OpenStack instance and got started with patching. We quickly realized we were in over our heads as neither of us had much server-side Kerberos knowledge, but after a few SMS messages we were able to enlist the help of Patrick Uiterwijk, a wise security sage. He created us a keytab in Fedora's staging environment and got us moving again. We never got it working on Saturday, but we did make a lot of progress.

There was a neat survey/test being done in one of the conference hallways about the openssl command line tool. They had you perform some tasks and ask you about the tool afterwards. If you were able to perform the tasks to their satisfaction, you were awarded with a hat. I got a nice Red Hat toboggan!

Saturday night was the conference party, which was a lot of fun. I got to spend time with a lot of my coworkers from all over the world in a more social setting.

Sunday

Sunday morning's keynote was a very entertaining argument between Dan Walsh and Steven Pousty about the future world of containers. It was my personal favorite talk of the conference and I highly recommend watching it if you haven't seen it.

I next went to see Till Mass talk about Certificate Transparency. It was a very well delivered talk with very nice slides. There were some unfortunate projector problems, but Till did a great job of navigating the technical difficulties. Certificate Transparency was news to me, but it creates a public database of all issued certificates (from participating Certificate Authorities). This would prevent a compromised authority from issuing a certificate for a CN that has been issued by another authority. It addresses what is currently a significant weakness in our public key infrastructure.

After a break I watched Dennis Gilmore talk about "Moving everyone to rawhide". I've been running Rawhide on a few systems here and there, so I was interested in hearing his perspective. He shared a lot of changes that are coming to Fedora in the future, such as Fedora 27 not having a beta, and Fedora 28 might be the last versioned Fedora release (whoah!).

I then had another hacking session with Peter Lemenkov on ejabberd. It turned out that Patrick Uiterwijk was able to identify some problems on the ejabberd server side and they got it working together. I then worked with Peter to clean up our implementation a bit and do additional testing. It seems to work well. We are considering deploying an ejabberd to Fedora's Cloud (i.e., it would be an unsupported community run service), but there are still a few things we'd need to figure out.

I saw Christian Schaller give a talk about Fedora Workstation. He talked about some of the changes that are coming in the future, and the challenges the workstation team faces. There was a lively Q&A session afterwards.

After that I did a little bit more hallway tracking, and then I went to see the concluding session which was a fun quiz with prizes.

Conclusion

The conference was a wonderful experience, and I look forward to sending a talk for next year's DevConf (though maybe I'll just do one this time ☺)

Nightly and Wayland Builds of Firefox for Flatpak

Posted by Jiri Eischmann on February 15, 2017 12:24 PM

When I announced Firefox Developer Edition for Flatpak over a month ago, I also promised that we would not stop there and bring more options in the future. Now I can proudly announce that we provide two more variants of Firefox – Firefox Nightly and Firefox Nightly for Wayland.

With Nightly, you can closely follow the development of Firefox. Due to Flatpak you can easily install it and keep getting daily updates via our flatpak repo.

As a bonus, we’re also bringing a Firefox build that runs natively on Wayland. We used to provide a Copr repository, but with Flatpak it’s open to users of other distros, too. When running this version, keep in mind it’s still WIP and very experimental. Firefox seems to run just fine on Wayland at first glance, but there is still some basic functionality missing (copy-paste for example) and it’s not so stable either (it crashed the whole Wayland session for me once). But once it’s done, it will be a big improvement in security for Firefox on Linux because Wayland isolates the application on the display server level. Together with other pieces of Flatpak sandboxing, it will provide a full sandbox for the browser in the future.

When adding more Firefox variants to the repo, we first considered using branches, but you have to switch between them manually to start different variants of Firefox which we didn’t find very user friendly. In the end, we’re using one branch and multiple applications with different names in it. This way, you can install and run multiple variants in parallel.


You can find the instructions to install Firefox for Flatpak on the repository webpage. We’re also constantly improving how Firefox runs in Flatpak. If you have any Flatpak-specific problems with Firefox, please report it to our bug tracker on Github. If you hit problems that are not Flatpak-specific, please report them directly to Mozilla.

And again kudos to Jan Hořák from our team who made all this happen!


How to install supplemental wallpapers

Posted by Fedora Magazine on February 15, 2017 08:00 AM

Supplemental wallpapers make each release of Fedora a joy to run. This article explains how to install and select them on your Fedora system.

Backgrounds for everyone

The Fedora team works hard to make each release beautiful. Of course, we start with the desktops created by upstream projects. Then the Fedora Design team creates an elegant official wallpaper. Finally, contributors also submit more background wallpapers for users who like to change their desktop.

By the way, here is that collection for the recent (at this writing) Fedora 25 release. Are you an artist or photographer? Maybe you’d like to contribute to the Fedora 26 set. If so, you can read more here.

These wallpapers are free to use, modify, and redistribute. In fact, you can find all the tools you need in Fedora for this, such as GIMP, Inkscape, and Rawstudio.

Installing the supplemental wallpapers

The supplemental wallpapers in Fedora are easy to install. Use sudo and the dnf tool to install the correct package for your desktop environment. For Fedora 25 Workstation, run this command:

sudo dnf install f25-backgrounds-extras-gnome   # for GNOME or Cinnamon

For other desktops, use one of these commands:

sudo dnf install f25-backgrounds-extras-kde     # for KDE
sudo dnf install f25-backgrounds-extras-mate    # for Mate
sudo dnf install f25-backgrounds-extras-xfce    # for XFCE

Furthermore, you can select additional wallpapers from earlier releases. Each collection comes from Fedora contributors around the world.
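If you're not sure which wallpaper collections are packaged for your release, dnf can list them first; the official and extras packages for current and past releases should show up in the results:

dnf search backgrounds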

Selecting a wallpaper

Next, use your desktop environment’s settings tool to change the background. On most desktops, you can right-click an empty desktop area with your mouse for this setting. For instance, on Fedora Workstation, right-click the desktop. Then choose Change Background….

Next, select either the Background or the Lock Screen to change one of the wallpapers.
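If you prefer the command line on GNOME, gsettings can do the same thing; the file path below is only an example of where the extras packages place images, so substitute one that actually exists on your system:

gsettings set org.gnome.desktop.background picture-uri 'file:///usr/share/backgrounds/f25/extras/example.jpg'
gsettings set org.gnome.desktop.screensaver picture-uri 'file:///usr/share/backgrounds/f25/extras/example.jpg'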

Now you can choose any wallpapers to your liking, and make Fedora your own. Enjoy!

Episode 33 - Everybody who went to the circus is in the circus (RSA 2017)

Posted by Open Source Security Podcast on February 15, 2017 06:22 AM
Josh and Kurt are at the same place at the same time! We discuss our RSA sessions and how things went. Talk of CVE IDs, open source libraries, Wordpress, and early morning sessions.

Download Episode

Show Notes


Trying to get an idea about what packages are used

Posted by Stephen Smoogen on February 14, 2017 09:14 PM

Background

One of the questions I get asked a lot is "You provide various statistics for Fedora, can you show which packages are installed the most?"

To head off a lot of future requests, the answer is no, no I can't. We do not have any sort of popcon-style database which shows which packages are popular. When a user asks the OS to install a package, nothing like "Hey, I am asking for Bob if I can install libfoobar" gets sent to the Fedora servers. What yum, dnf, PackageKit, or Salt does is request the repo data, work out what is needed to satisfy the request, and then ask for whatever packages it has to fetch.

It is from this data that I can glean some sort of idea of the most installed packages.. but I feel it is way past "Lies", "Damned Lies", and "Statistics" into regions like "Political Promises" or "Half-Life 3 confirmed". Looking over an entire month of requests, sorting the data, and ranking the requests, I find that a bunch of packages show up a lot while others fall off into a long tail. One thing that makes this data dirty: if 200 people ask for wordpress, 150 for mediawiki, and 90 for nagios, I will see the various PHP packages that all three want with a higher count, and I can't simply tell if a person wanted that PHP package by itself or wanted wordpress. [I could possibly try to work out a transaction of requested packages and figure out what the nodes and leaves might be, but I found that the tools don't always request everything they want from download.fedoraproject.org because they may already 'know' where something is.]
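For the curious, what I do is not much fancier than counting package names in the proxy access logs. A simplified sketch of the idea (the log path and field position here are illustrative, not our real setup):

awk '{print $7}' /var/log/httpd/download-2017-01*.log \
  | grep '\.rpm$' \
  | sed 's|.*/||; s|-[0-9].*||' \
  | sort | uniq -c | sort -rn | head -20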

In any case, here are the most requested packages to the download website for January.

EPEL-7

  1. epel-release-7-9
  2. python2-pip-8
  3. python2-boto-2
  4. openvpn-2
  5. php-tcpdf-6
  6. php-tcpdf-dejavu-sans-fonts-6
  7. pdc-updater-0
  8. duplicity-0
  9. nagios-plugins-2 *lots of plugins show up here*
  10. ansible-2
  11. libopendkim-2
  12. opendkim-2
  13. cowsay-3
  14. python2-wikitcms-2
  15. pkcs11-helper-1
  16. fedmsg-0
  17. htop-2
  18. munin *lots of munin packages here*
  19. awscli-1
  20. hdf5-1

EPEL-6

  1. nagios-plugins-2 *lots of other nagios removed*
  2. libmcrypt-2
  3. nodejs-0 *lots of other nodejs removed*
  4. python2-boto-2
  5. GeoIP-1 *other GeoIP removed*
  6. geoipupdate-2
  7. nrpe-2
  8. libnet-1
  9. denyhosts-2
  10. eventlog-0
  11. syslog-ng-3
  12. epel-release-6-8
  13. php-pear-Auth-SASL-1
  14. php-pear-Net-SMTP-1
  15. php-pear-Net-Socket-1
  16. perl-Net-IDN-Encode-2
  17. perl-Net-Whois-Raw-2
  18. perl-Regexp-IPv6-0
  19. pwhois-2
  20. v8
EPEL-6 is our most popular distribution, with a ratio of about 12 EPEL-6 : 7 EPEL-7 : 1.5 Fedora 25 : 1 EPEL-5 requests over the month of January.

EPEL-5

  1. R-core-3 *lots of other R packages removed*
  2. globus-gssapi-gsi-devel-12 *lots of other globus removed*
  3. nordugrid-arc-5
  4. xrootd-client-libs-4 *lots of other xrootd removed*
  5. pcp-libs-devel-3
  6. nordugrid-arc-devel-5
  7. libopendkim-2
  8. libopendmarc-1
  9. pcp-libs-3
  10. nordugrid-arc-plugins-globus-5
  11. libopendkim-devel-2
  12. libopendmarc-1
  13. ebtree-6
  14. myproxy-libs-6
  15. mosh-1
  16. lua-cyrussasl-1
  17. drupal7
  18. rear-2
  19. clustershell-1
  20. rsnapshot-1
I found it interesting that R was getting pulled in by a lot of computers on EPEL-5. This OS is almost at end of life, but it looks like systems are still being provisioned with it.

Fedora 25

  1. java-1
  2. vim-minimal-8
  3. kernel-core-4
  4. libX11-1
  5. perl-libs-5
  6. perl-5
  7. perl-IO-1
  8. perl-macros-5
  9. perl-Errno-1
  10. nss-3
  11. gdk-pixbuf2-2
  12. gtk3-3
  13. audit-libs-2
  14. nss-softokn-freebl-3
  15. libX11-common-1
  16. gdk-pixbuf2-modules-2
  17. libnl3-3
  18. gnutls-3
  19. pcre-8
  20. gtk-update-icon-cache-3
As can be seen from the Fedora 25 list, there is another problem with trying to get an idea of package popularity: a package that is installed on a lot of boxes and then gets updated will also show up near the top.

Conclusions

I really don't think any 'real' conclusions can come out of this other than people really want vim on their Fedora 25 desktops (emacs was way down the list). 😑 I also want to say that we should get an opt-in popcon for Fedora :).

[Edited: I forgot this part]

The list of agents used to pull down packages for EPEL and Fedora was rather interesting. I combined all of the yum versions together, as the many different version strings polluted the numbers, but here are the top agents:


  1. yum
  2. Salt
  3. dnf
  4. Artifactory
  5. python-requests
  6. Debian Apt-Cacher-NG
  7. PackageKit-hawkey
  8. Axel 2.4 (Linux)
  9. Wget
  10. libdnf
  11. curl
  12. urlgrabber
The Salt traffic seems to come from a large number of Amazon systems which are installing either epel-release-6 (80% of the time) or epel-release-7 (20% of the time). Nothing else seemed to be 'pulled' from download.fedoraproject.org, so it is probably just a config artifact at boot.

Matching Fedora OSTree Released Content With Each 2 Week Atomic Release

Posted by Dusty Mabe on February 14, 2017 08:43 PM

Cross posted with this Project Atomic Blog post

TL;DR: The default Fedora cadence for updates in the RPM streams is once a day. Until now, the OSTree-based updates cadence has matched this, but we're changing the default OSTree update stream to match the Fedora Atomic Host image release cadence (once every two weeks).

---

In Fedora we release a new Atomic Host approximately every two weeks. In the past this has meant that we bless and ship new ISO, QCOW, and Vagrant images that can then be used to install and/or start a new Atomic Host server. But what if you already have an Atomic Host server up and running?

Servers that are already running are configured to get their updates directly from the OSTree repo that is sitting on Fedora Infrastructure servers. The client will ask "What is the newest commit for my branch/ref?" and the server will kindly reply with the most recent commit. If the client is at an older version then it will start to pull the newer commit and will apply the update.

This is exactly how the client is supposed to behave, but one problem with the way we have been doing things in the past is that we have been updating everyone's branch/ref every night when we do our update runs in Fedora.

This has the side effect that users can get content as soon as it has been created, but it also means that the two-week release process, where we perform testing and validation, really means nothing for these users, as they will get the content before we have ever tested it.

We have decided to slow down the cadence of the fedora-atomic/25/x86_64/docker-host ref within the OSTree repo to match the exact releases that we do for the two week release process. Users will be able to track this ref like they always have, but it will only update when we do a release, approximately every two weeks.

We have also decided to create a new ref that will get updated every night, so that we can still do our testing. This ref will be called fedora-atomic/25/x86_64/updates/docker-host. If you want to keep following the content as soon as it is created you can rebase to this branch/ref at any time using:

# rpm-ostree rebase fedora-atomic/25/x86_64/updates/docker-host

As an example, let's say that we have a Fedora Atomic host which is on the default ref. That ostree will now be updated every two weeks, and only every two weeks:

-bash-4.3# date
Fri Feb 10 21:05:27 UTC 2017

-bash-4.3# rpm-ostree status
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.51 (2017-01-30 20:09:59)
        Commit: f294635a1dc62d9ae52151a5fa897085cac8eaa601c52e9a4bc376e9ecee11dd
        OSName: fedora-atomic

-bash-4.3# rpm-ostree upgrade
Updating from: fedora-atomic:fedora-atomic/25/x86_64/docker-host
1 metadata, 0 content objects fetched; 569 B transferred in 1 seconds
No upgrade available.

If you want the daily ostree update instead, as you previously had, you need to switch to the _updates_ ref:

-bash-4.3# rpm-ostree rebase --reboot fedora-atomic/25/x86_64/updates/docker-host

812 metadata, 3580 content objects fetched; 205114 KiB transferred in 151 seconds                                                                                                                                                           
Copying /etc changes: 24 modified, 0 removed, 54 added
Connection to 192.168.121.128 closed by remote host.
Connection to 192.168.121.128 closed.

[laptop]$ ssh fedora@192.168.121.128
[fedora@cloudhost ~]$ sudo su -
-bash-4.3# rpm-ostree status
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/updates/docker-host
       Version: 25.55 (2017-02-10 13:59:37)
        Commit: 38934958d9654721238947458adf3e44ea1ac1384a5f208b26e37e18b28ec7cf
        OSName: fedora-atomic

  fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.51 (2017-01-30 20:09:59)
        Commit: f294635a1dc62d9ae52151a5fa897085cac8eaa601c52e9a4bc376e9ecee11dd
        OSName: fedora-atomic

We hope you are enjoying using Fedora Atomic Host. Please share your success or horror stories with us on the mailing lists or in IRC: #atomic or #fedora-cloud on Freenode.

Cheers!

The Fedora Atomic Team

Packaging Ampache in Fedora

Posted by Randy Barlow on February 14, 2017 05:56 PM

Hello Fedora Hackers! I've been working together with Remi Collet and Shawn Iwinski to package Ampache and its list of dependencies for Fedora. Ampache is a music server that allows you to listen to your music catalog in your web browser. There are even a variety of mobile applications that allow you to listen to your music on the go, such as DSub.

We've made a fair amount of progress with 25/41 spec files written. If you'd like to get involved in the packaging effort, we've been using Fedora's Wiki to organize the effort. You can put your name next to any dependency you'd like to package. Another way to help is to review packages - any package in that list that has a BZ link that isn't green is one that probably hasn't been reviewed yet.

F25-20170210 Updated Lives released

Posted by Ben Williams on February 14, 2017 04:51 PM

I am happy to announce new F25-20170210 Updated Lives.

(with Kernel 4.9.8)

With F25 we are now using Livemedia-creator to build the updated lives.

Also from now on we will only be releasing updated lives on even kernel releases.

To build your own please look at  https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD
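The wiki page has the full details; the general shape of a livemedia-creator run for a live ISO looks roughly like this (the kickstart file and result directory below are only placeholders):

sudo livemedia-creator --make-iso --no-virt \
    --ks=fedora-live-workstation.ks \
    --resultdir=/var/lmc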

This new build of the F25 Updated Lives will save you 695M of updates after installing Workstation.

As always the isos can be found at http://tinyurl.com/Live-respins2


ABRT client ansible role

Posted by ABRT team on February 14, 2017 11:48 AM

Some of you may not know how to install and enable ABRT. Hence, we created an ABRT Ansible role which does all the required steps. Basically, the role installs all the required packages and enables all the services.

The ABRT client Ansible role is available on Ansible Galaxy and will be maintained by the ABRT team: https://galaxy.ansible.com/abrt/abrt-client-role/

How to install and run the ansible role:

1) install ABRT ansible role

# ansible-galaxy install abrt.abrt-client-role

2) create an Ansible playbook, for example the following

$ cat abrt-cli.yml
---
- name: install and enable abrt client
  hosts: localhost
  roles:
  - abrt.abrt-client-role

3) run ansible playbook

$ ansible-playbook abrt-cli.yml
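The example playbook targets localhost; to roll ABRT out to other machines, change hosts: localhost to the group you want and point ansible-playbook at an inventory (the inventory file name below is just a placeholder; add -b if you need privilege escalation):

$ ansible-playbook -i hosts.ini abrt-cli.yml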

If you want, you can also test the ABRT functionality, for instance as follows:

$ sleep 1000 &
$ kill -SEGV %%
# abrt-cli list
id 205449a7910ff7aa84d0cf0941a3b65c58c40e08
reason:         sleep killed by SIGSEGV
time:           Tue Feb 14 17:20:52 2017
cmdline:        sleep 1000
package:        coreutils-8.22-18.el7
uid:            1000 (mhabrnal)
count:          1
Directory:      /var/spool/abrt/ccpp-2017-02-14-17:20:52-27211
Run 'abrt-cli report /var/spool/abrt/ccpp-2017-02-14-17:20:52-27211' for creating a case in Red Hat Customer Portal

The Autoreporting feature is disabled. Please consider enabling it by issuing
'abrt-auto-reporting enabled' as a user with root privileges

Valentine's

Posted by Nicu Buculei on February 14, 2017 07:08 AM
For Valentine's Day, a thematic selection from my pictures at the ongoing protests in Bucharest (the events are covered more in-depth on my photography blog).
valentine

For the curious, the images are processed with darktable, GIMP and now PhotoCollage.

PS: It looks like a bunch of other Linux geeks were there
linux protest

Getting Started with Taskwarrior

Posted by Fedora Magazine on February 13, 2017 08:00 AM

Taskwarrior is a flexible command-line task management program. In their own words:

Taskwarrior manages your TODO list from your command line. It is flexible, fast, efficient, unobtrusive, does its job then gets out of your way.

Taskwarrior is highly customizable, but can also be used “right out of the box.” In this article, we’ll show you the basic commands to add and complete tasks. Then we’ll cover a couple more advanced commands. And finally, we’ll show you some basic configuration settings to begin customizing your setup.

Installing Taskwarrior

Taskwarrior is available in the Fedora repositories, so installing it is simple:

sudo dnf install task

Once installed, run task. This first run will create a ~/.taskrc file for you.

$ task
A configuration file could not be found in ~

Would you like a sample /home/link/.taskrc created, so Taskwarrior can proceed? (yes/no) yes
[task next]
No matches.

Adding Tasks

Adding tasks is fast and unobtrusive.

$ task add Plant the wheat
Created task 1.

Run task or task list to show upcoming tasks.

$ task list

ID Age Description         Urg 
 1 8s  Plant the wheat        0

1 task

Let’s add a few more tasks to round out the example.

$ task add Tend the wheat
Created task 2.
$ task add Cut the wheat
Created task 3.
$ task add Take the wheat to the mill to be ground into flour
Created task 4.
$ task add Bake a cake
Created task 5.

Run task again to view the list.

[task next]

ID Age  Description                                        Urg 
 1 3min Plant the wheat                                       0
 2 22s  Tend the wheat                                        0
 3 16s  Cut the wheat                                         0
 4 8s   Take the wheat to the mill to be ground into flour    0
 5 2s   Bake a cake                                           0

5 tasks

Completing Tasks

To mark a task as complete, look up its ID and run:

$ task 1 done
Completed task 1 'Plant the wheat'.
Completed 1 task.

You can also mark a task done with its description.

$ task 'Tend the wheat' done
Completed task 1 'Tend the wheat'.
Completed 1 task.

With add, list and done, you’re all ready to get started with Taskwarrior.

Setting Due Dates

Many tasks do not require a due date:

task add Finish the article on Taskwarrior

But sometimes, setting a due date is just the kind of motivation you need to get productive. Use the due modifier when adding a task to set a specific due date.

task add Finish the article on Taskwarrior due:tomorrow

due is highly flexible. It accepts specific dates (“2017-02-02”), or ISO-8601 (“2017-02-02T20:53:00Z”), or even relative time (“8hrs”). See the Date & Time documentation for all the examples.

Dates go beyond due dates too. Taskwarrior has scheduled, wait, and until.

task add Proof the article on Taskwarrior scheduled:thurs

Once the date (Thursday in this example) passes, the task is tagged with the READY virtual tag. It will then show up in the ready report.

$ task ready

ID Age   S  Description                                        Urg 
 1 2s    1d Proof the article on Taskwarrior                      5

To remove a date, modify the task with a blank value:

$ task 1 modify scheduled:

Searching Tasks

No task list is complete without the ability to search with regular expressions, right?

$ task '/.* the wheat/' list

ID Age   Project Description                                            Urg 
 2 42min         Take the wheat to the mill to be ground into flour        0
 1 42min Home    Cut the wheat                                             1

2 tasks

Customizing Taskwarrior

Remember that file we created back in the beginning (~/.taskrc)? Let’s take a look at the defaults:

# [Created by task 2.5.1 2/9/2017 16:39:14]
# Taskwarrior program configuration file.
# For more documentation, see http://taskwarrior.org or try 'man task', 'man task-color',
# 'man task-sync' or 'man taskrc'

# Here is an example of entries that use the default, override and blank values
#   variable=foo   -- By specifying a value, this overrides the default
#   variable=      -- By specifying no value, this means no default
#   #variable=foo  -- By commenting out the line, or deleting it, this uses the default

# Use the command 'task show' to see all defaults and overrides

# Files
data.location=~/.task

# Color theme (uncomment one to use)
#include /usr//usr/share/task/light-16.theme
#include /usr//usr/share/task/light-256.theme
#include /usr//usr/share/task/dark-16.theme
#include /usr//usr/share/task/dark-256.theme
#include /usr//usr/share/task/dark-red-256.theme
#include /usr//usr/share/task/dark-green-256.theme
#include /usr//usr/share/task/dark-blue-256.theme
#include /usr//usr/share/task/dark-violets-256.theme
#include /usr//usr/share/task/dark-yellow-green.theme
#include /usr//usr/share/task/dark-gray-256.theme
#include /usr//usr/share/task/dark-gray-blue-256.theme
#include /usr//usr/share/task/solarized-dark-256.theme
#include /usr//usr/share/task/solarized-light-256.theme
#include /usr//usr/share/task/no-color.theme

The only active option right now is data.location=~/.task. To view active configuration settings (including the built-in defaults), run show.

task show

To change a setting, use config.

$ task config displayweeknumber no
Are you sure you want to add 'displayweeknumber' with a value of 'no'? (yes/no) yes
Config file /home/link/.taskrc modified.

Examples

These are just some of the things you can do with Taskwarrior.

Assign a project to your tasks:

task 'Fix leak in the roof' modify project:Home

Use start to mark what you were working on. This can help you remember what you were working on after the weekend:

task 'Fix bug #141291' start

Use relevant tags:

task add 'Clean gutters' +weekend +house
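Filters also combine with reports, so you can narrow a listing to a project or a set of tags (reusing the project and tags from the examples above):

task project:Home list
task +weekend +house list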

Be sure to read the complete documentation to learn all the ways you can catalog and organize your tasks.

Reality Based Security

Posted by Josh Bressers on February 13, 2017 04:37 AM
If I demand you jump off the roof and fly, and you say no, can I call you a defeatist? What would you think? To a reasonable person it would be insane to associate this attitude with being a defeatist. There are certain expectations that fall within the confines of reality. Expecting things to happen outside of those rules is reckless and can often be dangerous.

Yet in the universe of cybersecurity we do this constantly. Anyone who doesn’t pretend we can fix problems is a defeatist and part of the problem. We just have to work harder and not claim something can’t be done, that’s how we’ll fix everything! After being called a defeatist during a discussion, I decided to write some things down. We spend a lot of time trying to fly off of roofs instead of looking for practical realistic solutions for our security problems.

The way cybersecurity works today someone will say “this is a problem”. Maybe it’s IoT, or ransomware, or antivirus, secure coding, security vulnerabilities; whatever, pick something, there’s plenty to choose from. It’s rarely in a general context though, it will be sort of specific, for example “we have to teach developers how to stop adding security flaws to software”. Someone else will say “we can’t fix that”, then they get called a defeatist for being negative and it’s assumed the defeatists are the problem. The real problem is they’re not wrong. It can’t be fixed. We will never see humans write error free code, there is no amount of training we can give them. Pretending it can is what’s dangerous. Pretending we can fix problems we can’t is lying.

The world isn’t fairy dust and rainbows. We can’t wish for more security and get it. We can’t claim to be working on a problem if we have no clue what it is or how to fix it. I’ll pick on IoT for a moment. How many security IoT “experts” exist now? The number is non trivial. Does anyone have any ideas how to understand the IoT security problems? Talking about how to fix IoT doesn’t make sense today, we don’t even really understand what’s wrong. Is the problem devices that never get updates? What about poor authentication? Maybe managing the devices is the problem? It’s not one thing, it’s a lot of things put together in a martini shaker, shook up, then dumped out in a heap. We can’t fix IoT because we don’t know what it even is in many instances. I’m not a defeatist, I’m trying to live in reality and think about the actual problems. It’s a lot easier to focus on solutions for problems you don’t understand. You will find a solution, those solutions won’t make sense though.

So what do we do now? There isn’t a quick answer, there isn’t an easy answer. The first step is to admit you have a problem though. Defeatists are a real thing, there’s no question about it. The trick is to look at the people who might be claiming something can’t be fixed. Are they giving up, or are they trying to reframe the conversation? If you declare them a defeatist, the conversation is now over, you killed it. On the other side of the coin, pretending things are fine is more dangerous than giving up, you’re living in a fantasy. The only correct solution is reality based security. Have honest and real conversations, don’t be afraid to ask hard questions, don’t be afraid to declare something unfixable. An unfixable problem is really just one that needs new ideas.

You can't fly off the roof, but trampolines are pretty awesome.

I'm @joshbressers on Twitter, talk to me.

Fedora BTRFS+Snapper – The Fedora 25 Edition

Posted by Dusty Mabe on February 13, 2017 12:56 AM

History

I'm back again with the Fedora 25 edition of my Fedora BTRFS+Snapper series. As you know, in the past I have configured my computers to be able to snapshot and rollback the entire system by leveraging BTRFS snapshots, a tool called snapper, and a patched version of Fedora's grub2 package. I have updated the patchset (patches taken from SUSE) for Fedora 25's version of grub and the results are available in this git repo.

This setup is not new. I have fully documented the steps I took in the past for my Fedora 22 systems in two blog posts: part1 and part2. This is a condensed continuation of those posts for Fedora 25.

Setting up System with LUKS + LVM + BTRFS

The manual steps for setting up the system are detailed in the part1 blog post from Fedora 22. This time around I have created a script that will quickly configure the system with LUKS + LVM + BTRFS. The script will need to be run in an Anaconda environment just like the manual steps were done in part1 last time.

You can easily enable ssh access to your Anaconda booted machine by adding inst.sshd to the kernel command line arguments. After booting up you can scp the script over and then execute it to build the system. Please read over the script and modify it to your liking.
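As an example, if the booted Anaconda environment came up at 192.168.122.50 and the script is saved as setup.sh (both placeholders), transferring and running it looks something like:

$ scp setup.sh root@192.168.122.50:/tmp/
$ ssh root@192.168.122.50 'bash /tmp/setup.sh'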

Alternatively, for an automated install I have embedded that same script into a kickstart file that you can use. The kickstart file doesn't really leverage Anaconda at all because it simply runs a %pre script and then reboots the box. It's basically like just telling Anaconda to run a bash script, but allows you to do it in an automated way. None of the kickstart directives at the top of the kickstart file actually get used.

Installing and Configuring Snapper

After the system has booted for the first time, let's configure the system for doing snapshots. I still want to be able to track how much size each snapshot has taken so I'll go ahead and enable quota support on BTRFS. I covered how to do this in a previous post:

[root@localhost ~]# btrfs quota enable /
[root@localhost ~]# btrfs qgroup show /
qgroupid         rfer         excl 
--------         ----         ---- 
0/5         999.80MiB    999.80MiB

Next up is installing/configuring snapper. I am also going to install the dnf plugin for snapper so that rpm transactions will automatically get snapshotted:

[root@localhost ~]# dnf install -y snapper python3-dnf-plugins-extras-snapper
...
Complete!
[root@localhost ~]# snapper --config=root create-config /
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |         
[root@localhost ~]# snapper list-configs
Config | Subvolume
-------+----------
root   | /        
[root@localhost ~]# btrfs subvolume list /
ID 260 gen 44 top level 5 path .snapshots

So we used the snapper command to create a configuration for BTRFS filesystem mounted at /. As part of this process we can see from the btrfs subvolume list / command that snapper also created a .snapshots subvolume. This subvolume will be used to house the COW snapshots that are taken of the system.

Next, we'll add an entry to fstab so that regardless of what subvolume we are actually booted in we will always be able to view the .snapshots subvolume and all nested subvolumes (snapshots):

[root@localhost ~]# echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> /etc/fstab

Taking Snapshots

OK, now that we have snapper installed and the .snapshots subvolume in /etc/fstab we can start creating snapshots:

[root@localhost ~]# btrfs subvolume get-default /
ID 5 (FS_TREE)
[root@localhost ~]# snapper create --description "BigBang"
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description | Userdata
-------+---+-------+---------------------------------+------+---------+-------------+---------
single | 0 |       |                                 | root |         | current     |         
single | 1 |       | Mon 13 Feb 2017 12:50:51 AM UTC | root |         | BigBang     |         
[root@localhost ~]# btrfs subvolume list /
ID 260 gen 48 top level 5 path .snapshots
ID 261 gen 48 top level 260 path .snapshots/1/snapshot
[root@localhost ~]# ls /.snapshots/1/snapshot/
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We made our first snapshot, called BigBang, and then ran btrfs subvolume list / to verify that a new snapshot was actually created. Notice that at the top of this section we also ran btrfs subvolume get-default /, which outputs the currently set default subvolume for the BTRFS filesystem. Right now we are booted into the root subvolume, but that will change as soon as we decide we want to use one of the snapshots for rollback.

Since we took a snapshot let's go ahead and make some changes to the system by updating the kernel:

[root@localhost ~]# dnf update -y kernel
...
Complete!
[root@localhost ~]# rpm -q kernel
kernel-4.8.6-300.fc25.x86_64
kernel-4.9.8-201.fc25.x86_64
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
-------+---+-------+---------------------------------+------+---------+-------------------------------+---------
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Mon 13 Feb 2017 12:50:51 AM UTC | root |         | BigBang                       |         
single | 2 |       | Mon 13 Feb 2017 12:52:38 AM UTC | root | number  | /usr/bin/dnf update -y kernel |

So we updated the kernel and the snapper dnf plugin automatically created a snapshot for us. Let's reboot the system and see if the new kernel boots properly:

[root@localhost ~]# reboot 
...
[dustymabe@media ~]$ ssh root@192.168.122.177
Warning: Permanently added '192.168.122.177' (ECDSA) to the list of known hosts.
root@192.168.122.177's password: 
Last login: Mon Feb 13 00:41:40 2017 from 192.168.122.1
[root@localhost ~]# 
[root@localhost ~]# uname -r
4.9.8-201.fc25.x86_64

Rolling Back

Say we don't like that new kernel. Let's go back to the earlier snapshot we made:

[root@localhost ~]# snapper rollback 1
Creating read-only snapshot of current system. (Snapshot 3.)
Creating read-write snapshot of snapshot 1. (Snapshot 4.)
Setting default subvolume to snapshot 4.
[root@localhost ~]# reboot

snapper created a read-only snapshot of the current system and then a new read-write subvolume based on the snapshot we wanted to go back to. It then set the default subvolume to the newly created read-write subvolume. After a reboot you'll be in that read-write subvolume and exactly back in the state your system was in at the time the snapshot was created.

In our case, after reboot we should now be booted into snapshot 4 as indicated by the output of the snapper rollback command above and we should be able to inspect information about all of the snapshots on the system:

[root@localhost ~]# btrfs subvolume get-default /
ID 264 gen 66 top level 260 path .snapshots/4/snapshot
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description                   | Userdata
-------+---+-------+--------------------------+------+---------+-------------------------------+---------
single | 0 |       |                          | root |         | current                       |         
single | 1 |       | Mon Feb 13 00:50:51 2017 | root |         | BigBang                       |         
single | 2 |       | Mon Feb 13 00:52:38 2017 | root | number  | /usr/bin/dnf update -y kernel |         
single | 3 |       | Mon Feb 13 00:56:13 2017 | root |         |                               |         
single | 4 |       | Mon Feb 13 00:56:13 2017 | root |         |                               |         
[root@localhost ~]# ls /.snapshots/
1  2  3  4
[root@localhost ~]# btrfs subvolume list /
ID 260 gen 67 top level 5 path .snapshots
ID 261 gen 61 top level 260 path .snapshots/1/snapshot
ID 262 gen 53 top level 260 path .snapshots/2/snapshot
ID 263 gen 60 top level 260 path .snapshots/3/snapshot
ID 264 gen 67 top level 260 path .snapshots/4/snapshot

And the big test is to see if the change we made to the system was actually reverted:

[root@localhost ~]# uname -r 
4.8.6-300.fc25.x86_64
[root@localhost ~]# rpm -q kernel
kernel-4.8.6-300.fc25.x86_64

Enjoy!

Dusty

A tale of a bluetooth headset bug…

Posted by Kevin Fenzi on February 13, 2017 12:09 AM

I find tracking down obscure bugs interesting, so I thought I would share another tale of one I just tracked down.

Ever since coming back from the holiday break this year, I found my Bluetooth headsets had stopped working. (I have 2 very different models: a Plantronics M50 that I have had for a long time, and a Bose QuietComfort 35 (yes, the headphones have a headset profile!)).

At first when I noticed the problem I tried the obvious things:

  • Downgraded pulseaudio
  • Downgraded bluez
  • Downgraded and tried various kernels.
  • Looked though tons of upstream pulseaudio and bluez bugs.

No change at all. I added myself to an existing bluez bug and provided the bluez maintainer with a bunch of debugging output, but it all looked pretty normal, there was just no sound. So I gave up and moved on to more urgent things.

Fast forward to today: what better way to spend a Sunday afternoon than more debugging? I repeated my former tests and had the same result. Then I decided to try some LiveUSBs to isolate it. I downloaded and booted an F25 updated Workstation ISO and it still had the issue. So, I thought, aha, it has to be the updated pulseaudio, since that landed in F25 updates. But to be sure, I downloaded and booted the stock F25 release ISO. Still had the issue. Boggling. Then I went looking around in the kernel bugzilla and found a report that talked about issues with Bluetooth and firmware. That was just the hint I was looking for. So, I downgraded the linux-firmware package and tried things and… it still didn’t work! There was one more trick: the firmware only reloads on a cold boot. If it’s already loaded (or misloaded) it doesn’t even try again. So, cold boot, and then everything started working.

The key kernel boot message on the failing firmware: Bluetooth: hci0: Failed to send Intel_Write_DDC (-22) and the working firmware: Bluetooth: hci0: Applying Intel DDC parameters completed
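If you want to check which case you're in after a cold boot, the message shows up in the kernel log, and downgrading the firmware is a one-liner until the fixed linux-firmware lands:

$ dmesg | grep -i hci0
$ sudo dnf downgrade linux-firmware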

For bonus points, of course, the failing firmware file is ibt-11-5.ddc and the working one is ibt-11-5.ddc (yes, that's right, they are changing the file contents around without changing the version at all).

Next of course is to figure out where to report this to get it fixed, but some poking around and testing found that http://git.kernel.org/cgit/linux/kernel/git/firmware/linux-firmware.git/commit/?id=581f24500138f5e410d51ab63b205be9a52f4c77 already had a fix. I just need to wait for an update to linux-firmware in Fedora and everything should be back to normal.

So, troubleshooting lessons for today: Sometimes cold boot matters. When you think you have downgraded everything that could cause a problem, remember firmware. Reading around on bug reports will sometimes give you ideas on how to solve yours.

The future of Fedora QA

Posted by Adam Williamson on February 12, 2017 05:33 PM

Welcome to version 2.0 of this blog post! This space was previously occupied by a whole bunch of longwinded explanation about some changes that are going on in Fedoraland, and are going to be accelerating (I think) in the near future. But it was way too long. So here’s the executive summary!

First of all: if you do nothing else to get up to speed on Stuff That’s Going On, watch Ralph Bean’s Factory 2.0 talk and Adam Samalik’s Modularity talk from Devconf 2017. Stephen Gallagher’s Fedora Server talk and Dennis Gilmore’s ‘moving everyone to Rawhide’ talk are also valuable, but please at least watch Ralph’s. It’s a one-hour overview of all the big stuff that people really want to build for Fedora (and RH) soon.

To put it simply: Fedora (and RH) don’t want to be only in the business of releasing a bunch of RPMs and operating system images every X months (or years) any more. And we’re increasingly moving away from the traditional segmented development process where developers/package maintainers make the bits, then release engineering bundles them all up into ‘things’, and then QA looks at the ‘things’ and says “er, it doesn’t boot, try again”, and we do that for several months until QA is happy, then we release it and start over. There is a big project to completely overhaul the way we build and ship products, using a pipeline that involves true CI, where each proposed change to Fedora produces an immediate feedback loop of testing and the change is blocked if the testing fails. Again, watch Ralph’s talk, because what he basically does is put up a big schematic of this entire system and go into a whole bunch of detail about his vision for how it’s all going to work.

As part of this, some of the folks in RH’s Fedora QA team whose job has been to work on ‘automated testing’ – a concept that is very tied to the traditional model for building and shipping a ‘distribution’, and just means taking some of the tasks assigned to QA/QE in that model and automating them – are now instead going to be part of a new team at Red Hat whose job is to work on the infrastructure that supports this CI pipeline. That doesn’t mean they’re leaving Fedora, or we’re going to throw away all the work we’ve invested in the components of Taskotron and start all over again, but it does mean that some or all of the components of Taskotron are going to be re-envisaged as part of a modernized pipeline for building and shipping whatever it is we want to call Fedora in the future – and also, if things go according to plan, for building and shipping CentOS and Red Hat products, as part of the vision is that as many components of the pipeline as possible will be shared among many projects.

So that’s one thing that’s happening to Fedora QA: the RH team is going to get a bit smaller, but it’s for good and sensible reasons. You’re also not going to see those folks disappear into some kind of internal RH wormhole, they’ll still be right here working on Fedora, just in a somewhat different context.

Of course, all of this change has other implications for Fedora QA as well, and I reckon this is a good time for those of us still wearing ‘Fedora QA’ hats – whether we’re paid by Red Hat or not – to be reconsidering exactly what our goals and priorities ought to be. Much like with Taskotron, we really haven’t sat down and done that for several years. I’ve been thinking about it myself for a while, and I wouldn’t say I have it all figured out, but I do have some thoughts.

For a start I think we should be looking ahead to the time when we’re no longer on what the anaconda team used to call ‘the blocker treadmill’, where a large portion of our working time is eaten up by a more or less constant cycle of waking up, finding out what broke in Rawhide or Branched today, and trying to get it fixed. If the plans above come about, that should happen a lot less for a couple of reasons: firstly Fedora won’t just be a project which releases a bunch of OS images every six months any more, and secondly, distribution-level CI ought to mean that things aren’t broken all the damn time any more. In an ideal scenario, a lot of the basic fundamental breakage that, right now, is still mostly caught by QA – and that we spend a lot of our cycles on dealing with – will just no longer be our problem. In a proper CI system, it becomes truly the developers’ responsibility: developers don’t get to throw in a change that breaks everything and then wait for QA to notice and tell them about it. If they try and send a change that breaks everything, it gets rejected, and hopefully, the breakage never really ‘happens’.

Sadly (or happily, given I still have a mortgage to pay off) this probably doesn’t mean Project Colada will finally be reality and we all get to sit on the beach drinking cocktails for the rest of our lives. CI is a great process for ensuring your project basically works all the time, but ‘basically works’ is a long way from ‘perfect’. Software is still software, after all, and a CI process is never going to catch all of the bugs. Freeing QA from the blocker treadmill lets us look up and think, well, what else can we do?

To be clear, I think we’re still going to need ‘release validation’. In fact, if the bits of the plan about having more release streams than just ‘all the bits, every six months’ come off, we’ll need more release validation. But hopefully there’ll be a lot more “well, this doesn’t quite work right in this quite involved real-world scenario” and less “it doesn’t boot and I think it ate my cat” involved. For the near future, we’re going to have to keep up the treadmill: bar a few proofs of concept and stuff, Fedora 26 is still an ‘all the bits, every six months’ release, and there’s still an awful lot of “it doesn’t boot” involved. (Right now, Rawhide doesn’t even compose, let alone boot!) But it’s not too early to start thinking about how we might want to revise the ‘release validation’ concept for a world where the wheels don’t fall off the bus every five minutes. It might be a good idea to go back to the teams responsible for all the Fedora products – Server, Workstation, Atomic et. al – and see if we need to take another good look at the documents that define what those products should deliver, and the test processes we have in place to try and determine whether they deliver them.

We’re also still going to be doing ‘updates testing’ and ‘test days’, I think. In fact, the biggest consequence of a world where the CI stuff works out might be that we are free to do more of those. There may be some change in what ‘updates’ are – it may not just be RPM packages any more – but whatever interesting forms of ‘update’ we wind up shipping out to people, we’re still going to need to make sure they work properly, and manual testing is always going to be able to find things that automated tests miss there.

I think the question of to what extent we still have a role in ‘automated testing’ and what it should be is also a really interesting one. One of the angles of the ‘more collaboration between RH and Fedora’ bit here is that RH is now very interested in ‘upstreaming’ a bunch of its internal tests that it previously considered to be sort of ‘RH secret sauce’. Specifically, there’s a set of tests from RH’s ‘Platform QE’ team which currently run through a pipeline using RH’s Beaker test platform which we’d really like to have at least a subset of running on Fedora. So there’s an open question about whether and to what extent Fedora QA would have a role in adapting those tests to Fedora and overseeing their operation. The nuts and bolts of ‘make sure Fedora has the necessary systems in place to be able to run the tests at all’ is going to be the job of the new ‘infrastructure’ team, but we may well wind up being involved in the work of adapting the tests themselves to Fedora and deciding which ones we want to run and for what purposes. In general, there is likely still going to be a requirement for ‘automated testing’ that isn’t CI – it’s still going to be necessary to test the things we build at a higher level. I don’t think we can yet know exactly what requirements we’ll have there, but it’s something to think about and figure out as we move forward, and I think it’s definitely going to be part of our job.

We may also need to reconsider how Fedora QA, and indeed Fedora as a whole, decides what is really important. Right now, there’s a pretty solid process for this, but it’s quite tied to the ‘all the things, every six months’ release cycle. For each release we decide which Fedora products are ‘release blocking’, and we care about those, and the bits that go into them and the tools for building them, an awful lot more than we care about anything else. This works pretty well to focus our limited resources on what’s really important. But if we’re going to be moving to having more and more varied ‘Fedora’ products with different release streams, the binary ‘is it release blocking?’ question doesn’t really work any more. Fedora as a whole might need a better way of doing that, and QA should have a role to play in figuring that out and making sure we work out our priorities properly from it.

So there we go! I hope that was useful and thought-provoking. We’ve got a QA meeting coming up tomorrow (2017-02-13) at 1600 UTC where I’m hoping we can chew these topics over a bit, just to serve as an opportunity to get people thinking. Hope to see you there, or on the mailing list!

Python GTK+ 3 workshop on FEDORA #LinuXatPUCP

Posted by Julita Inca Chiroque on February 12, 2017 04:50 PM

Yesterday we held a very nice workshop. Students from universities such as UNTELS, UPIG, UTP, UNMSM, UNI, UIGV and PUCP gathered together to follow the Python GTK+3 tutorial.

Our journey started at 9:00 am, meeting people and gathering them in Lab V205 at PUCP. Here are the first ones to arrive (thanks for being on time).

Then we had the talk by Neville from FEDORA. Neville introduced the attendees to the ICARO project and recommended that we actively send mails to the lists, activate our FAS accounts, and show the students' work to the different FEDORA groups 🙂

Then we did the workshop virtually on FEDORA 25. Thanks to Martin and Felipe for supporting students with issues during the workshop. Sheyla helped us with the git accounts while Fabian worked on bug #709865.

We shared lunch and took two group photos until we had finally completed the first modules.

Special thanks to the authorities of PUCP (Felipe Solari and Walter Segama) for their unconditional help and for letting us use the lab. I also want to highlight the work of Neville from FEDORA Nicaragua and Fabio from GNOME Chile, great masters and mentors of both projects!

* This is the step-by-step of the experience from my point of view (a short summary of the terminal commands appears after the list):

1.- Cloning jhbuild into the development directory:

2.- Then you must install the packages that are missing after running autogen:

3.- In my case the automake package was missing:

4.- The gettext package also needed its devel part:

5.- Once there are no errors, we can run make:

6.- Start the installation with make install

7.- We are going to set the local commands in our ~/.bashrc:

8.- The sanity check lets us know which packages are still missing for jhbuild:

9.- The python-dbus package was crucial during this work:

10.- In my case I also needed to install the libtool package

11.- Installing all the dependency packages of jhbuild

12.- This message should appear after working through the missing packages on FEDORA:

13.- Another package that must be installed to build pygobject is redhat-rpm-config

14.- Now we are able to install pygobject

15.- The SUCCESS message has finally appeared!

16.- The installation of GTK3 is a dream come true:

17.- Then the terminal lets us know that we made it!

18.- Inside the development directory, we are going to create our sample-jhbuild-gtk directory

19.- Running the first Python code, main.py, using GTK

20.- The work of the participants:

Final picture of our effort! Thanks FEDORA and GNOME ❤
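To recap the terminal side of those steps, the rough sequence we followed looks like this; treat it as a sketch from memory rather than the exact commands (module names can differ between jhbuild modulesets):

# after cloning jhbuild into the development directory
cd jhbuild
./autogen.sh
make
make install

# with ~/.local/bin added to PATH in ~/.bashrc
jhbuild sanitycheck

# build the modules needed for the tutorial
jhbuild build pygobject gtk+-3

# run the first example inside the jhbuild environment
jhbuild run python3 main.py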

If you wish, you can see more pictures here 🙂



Recipes by mail

Posted by Matthias Clasen on February 12, 2017 08:58 AM

Since I last wrote about GNOME recipes, we’ve mainly focused on completing our feature set for 3.24.

Today's 0.12.0 release brings us very close to covering all the goals we set ourselves when we started working on recipes: we have a fullscreen cooking mode

and recipes and shopping lists can be shared by email

Since the Share button has replaced the Export button, contributing your recipes is now easier too: Just choose the “Contribute” option and send the recipe to the new recipes-list@gnome.org mailing list.

While working on this, it suddenly dawned on me why I may have seen some recipe contributions in bugzilla that were missing attachments: bugzilla has a limit on the size of attachments it allows, and recipes with photos may hit this limit.

So, if you’ve tried to contributed a recipe via bugzilla, and ran into this problem, please send your recipe to recipes-list@gnome.org instead.

Reverse Proxying to Docker Containers with Nginx

Posted by Devan Goodwin on February 11, 2017 08:01 PM

On my personal VPS I host a handful of websites accessed from a variety of domains and sub-domains, as well as a few more involved webapps such as tt-rss. Historically applications that cross multiple programming languages and databases have been a terrible pain to deploy and keep running on a private server, but since containers have arrived this has become a lot easier.

On my server, I wanted to have a web server listening on the standard http/https ports proxying traffic for a variety of sites and applications, based on the domain/sub-domain in the request. Some of these applications would be hosted by containers running on other ports. The following post outlines how to do this with CentOS 7, Nginx, and Docker. I also wanted to be able to connect securely to these so in the examples below, you will see references to my LetsEncrypt certificates being used for various sub-domains.

Assumptions

You will need to install nginx and docker.

yum install -y docker nginx
systemctl enable docker && systemctl start docker
systemctl enable nginx && systemctl start nginx

This post also assumes you have DNS for your domain(s) pointing to your server, and optionally have familiarized yourself with LetsEncrypt and generated relevant certificates.
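If you haven't generated certificates yet, one common route (not necessarily the one I used) is certbot in webroot mode, pointing at the directory nginx already serves; the domains below are the ones used throughout this post:

yum install -y epel-release
yum install -y certbot
certbot certonly --webroot -w /var/www/sites/rm-rf.ca -d rm-rf.ca -d www.rm-rf.ca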

Static Web Content

Several of the sites I host are just static web content, most notably this blog, which I recently started writing with Hugo (an immense relief after years of Drupal, PHP, and databases). For this kind of content we don’t really need Docker; nginx is perfectly capable of hosting it easily.

Your main /etc/nginx/nginx.conf should look something like this:

user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}
http {
    default_type  application/octet-stream;
    include       /etc/nginx/mime.types;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    keepalive_timeout  65;
    include /etc/nginx/conf.d/*.conf;
}

After this you can drop new config files into /etc/nginx/conf.d/ for each new site/sub-domain you want to host.

My configuration for this blog, including SSL using LetsEncrypt certs looks like this:

server {
    listen 80;
    listen 443 default_server ssl;
    server_name rm-rf.ca www.rm-rf.ca;
    access_log /var/log/nginx/rm-rf-access.log;
    error_log /var/log/nginx/rm-rf-error.log;

    ssl_certificate      /etc/letsencrypt/live/rm-rf.ca/fullchain.pem;
    ssl_certificate_key  /etc/letsencrypt/live/rm-rf.ca/privkey.pem;

    root /var/www/sites/rm-rf.ca;
    index index.html;

    location / {
        root /var/www/sites/rm-rf.ca;
    }
}

Running Web Apps as Containers

For the most part getting your applications running in containers will need to be an exercise for the reader. For tt-rss I simply used the Docker setup from the clue/ttrss image.

docker run -d --name ttrssdb nornagon/postgres
docker run -d --link ttrssdb:db -p 3001:80 clue/ttrss

You should now have tt-rss running on port 3001; however, you do not need to open this port to the world, since nginx is just going to proxy to it over localhost.
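A quick sanity check that the container answers locally before wiring up the proxy:

curl -I http://localhost:3001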

You can now add nginx config to forward traffic to it based on the domain in the request, in this example https://ttrss.rm-rf.ca.

server {
    listen 443 ssl;
    server_name ttrss.rm-rf.ca;
    access_log /var/log/nginx/ttrss-access.log;
    error_log /var/log/nginx/ttrss-error.log;

    ssl_certificate      /etc/letsencrypt/live/rm-rf.ca/fullchain.pem;
    ssl_certificate_key  /etc/letsencrypt/live/rm-rf.ca/privkey.pem;

    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  5m;

    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers   on;

    location / {
            proxy_pass http://localhost:3001;
            proxy_redirect  http://localhost:3001 /;
            proxy_read_timeout 60s;

            proxy_set_header          Host            $host;
            proxy_set_header          X-Real-IP       $remote_addr;
            proxy_set_header          X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Result

You can now run any open source webapp of your choosing on your VPS, and expose it securely over https without resorting to ports in your URLs. Just get your container running on a local port, and drop a new nginx config in place.

Running OpenShift using Minishift

Posted by Kushal Das on February 11, 2017 06:46 AM

You may have already heard about Kubernetes, or you may be using it right now. OpenShift Origin is a distribution of Kubernetes which is optimized for continuous development and multi-tenant deployment. It also powers Red Hat OpenShift.

Minishift is an upcoming tool which enables you to run OpenShift locally on your computer, as a single-node OpenShift cluster inside a VM. I am using it on a Fedora 25 laptop, with the help of KVM. It can also be used on Windows or OS X. For KVM, I first had to install docker-machine-driver-kvm. Then I downloaded the latest minishift from the releases page, unzipped it, and put the binary in my path.

$ ./minishift start
Starting local OpenShift cluster using 'kvm' hypervisor...
E0209 20:42:29.927281    4638 start.go:135] Error starting the VM: Error creating the VM. Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.42.243:2376": tls: DialWithDialer timed out
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
. Retrying.
Provisioning OpenShift via '/home/kdas/.minishift/cache/oc/v1.4.1/oc [cluster up --use-existing-config --host-config-dir /var/lib/minishift/openshift.local.config --host-data-dir /var/lib/minishift/hostdata]'
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.4.1 image ... 
   Pulling image openshift/origin:v1.4.1
   Pulled 0/3 layers, 3% complete
   Pulled 0/3 layers, 24% complete
   Pulled 0/3 layers, 45% complete
   Pulled 1/3 layers, 63% complete
   Pulled 2/3 layers, 81% complete
   Pulled 2/3 layers, 92% complete
   Pulled 3/3 layers, 100% complete
   Extracting
   Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ... 
   Using Docker shared volumes for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ... 
   Using 192.168.42.243 as the server IP
-- Starting OpenShift container ... 
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
   OpenShift server started
-- Adding default OAuthClient redirect URIs ... OK
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Removing temporary directory ... OK
-- Server Information ... 
   OpenShift server started.
   The server is accessible via web console at:
       https://192.168.42.243:8443

   You are logged in as:
       User:     developer
       Password: developer

   To login as administrator:
       oc login -u system:admin

The oc binary is in the ~/.minishift/cache/oc/v1.4.1/ directory, so you can add that to your PATH. If you open the above-mentioned URL in your browser, you will find that your OpenShift cluster is up and running.
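For the current shell, that can be as simple as the following (the server address and the developer credentials come from the output above):

$ export PATH=$PATH:~/.minishift/cache/oc/v1.4.1
$ oc login -u developer -p developer https://192.168.42.243:8443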

Now you can start reading the Using Minishift documentation to start using your brand new OpenShift cluster.

Fedora Goes to FOSDEM

Posted by Brian "bex" Exelbierd on February 11, 2017 05:00 AM

I had the pleasure of going to FOSDEM this year and the annual spectacular didn't cease to deliver. During this year's conference, my second FOSDEM, I worked with Brian Stinson of CentOS fame to produce the Distributions Devroom.

FOSDEM gets busier every year and the Distributions Devroom was no different. For almost the entire day, the room was filled and we were routinely turning people away for lack of seats. The few times there was a dip in attendance seemed tied to the topic and not the time. This leads us to believe that the program was well balanced and represented the current thoughts and interests around distributions.

Read more over at the Red Hat Community Blog where this was originally posted.

Hacking on Pagure CI

Posted by farhaan on February 11, 2017 04:35 AM

“Aha!” I had a lot of aha moments while hacking on Pagure CI. Pagure CI's initial draft was laid down by lsedlar, which I have blogged about, and the work was then followed up by me and Pingou. Pingou has done really amazing work on the flow and on refactoring the code to make beautiful API calls.

I had a great time hacking on it and learned a lot. A few of the lessons are:

  1. Do the minimal work needed to set up the development environment; mock everything that is available for testing.
  2. Think deeply about something when your mentor points it out to you.

The issue I was working on is a long-pending one: attaching the build ID to every Jenkins build that Pagure was getting. Attaching build IDs is necessary to distinguish between different builds and to make the link to Jenkins a bit more specific, for example to tell which build failed.

The first mistake I made was setting up Jenkins on my machine. I had it installed previously, but since my machine went down with a kernel panic I lost all data related to Jenkins, and Fedora 25 currently has some packaging issues when installing Jenkins directly. Anyhow, the Jenkins site describes a way to set it up, and that worked for me. In the meantime Pingou kept pointing out that I did not actually need a Jenkins instance, but I failed to take his point, and I really feel bad about it.

After setting up Jenkins, the next task was to configure it, which was really easy because I had done it before and because it is well documented. For a normal setup the documentation is fine, but for hacking on the CI you need a little less work.

Step 1

Set up Redis on your machine. You can do that by installing Redis with sudo dnf install redis, enabling the service with sudo systemctl enable redis, and then starting it with sudo systemctl start redis. Along with this, you need to add the Redis configuration to default_config.py, or whichever config file you pass to the server using --config. The options are well documented in pagure.cfg.sample.
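A minimal sketch of what that configuration might contain; the variable names below are assumptions based on pagure.cfg.sample, so double-check them against your copy:

# Redis settings used by the Pagure CI worker (names assumed from
# pagure.cfg.sample -- verify against your version)
REDIS_HOST = '127.0.0.1'
REDIS_PORT = 6379
REDIS_DB = 0

# enable the Jenkins-based CI integration (assumed setting name)
PAGURE_CI_SERVICES = ['jenkins']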

Step 2

Copy pagure-ci-server from the pagure-ci directory into the parent directory. This step is necessary because it is the service that runs Pagure CI. Then start it with python pagure-ci-server.py. Once it has started, your service will be up and running.
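Run from the top of the Pagure checkout, that boils down to roughly the following (the file name comes from the command above; the directory layout is an assumption):

$ cp pagure-ci/pagure-ci-server.py .
$ python pagure-ci-server.py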

Step 3

Now fire up your Pagure instance, create a project with two branches, and open a PR from one branch to the other. If you get an authentication error, it is most probably because you have not given users the right permissions in Jenkins; it is not recommended, but you can turn Jenkins security off entirely while you are just testing something.

If you have done everything correct you will see the Jenkins flag being attached to the Pull Request.

VERY IMPORTANT NOTE:

All of this effort could have been saved if I had just used python-jenkins to fetch a job from the Fedora Jenkins instance and send it as a flag to my PR. Thank you, Pingou, for telling me about this hack.
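A minimal sketch of that shortcut with python-jenkins might look like the following; the Jenkins URL and job name are assumptions, and posting the flag back to Pagure's API is left out:

import jenkins

# assumed Jenkins instance and job name -- adjust for your setup
server = jenkins.Jenkins('https://jenkins.fedorainfracloud.org')
job = server.get_job_info('pagure-ci-test')
build_number = job['lastCompletedBuild']['number']
build = server.get_build_info('pagure-ci-test', build_number)

# build['result'] ('SUCCESS', 'FAILURE', ...) and build['url'] are what
# you would send to Pagure as a flag on the pull request
print(build_number, build['result'], build['url'])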

Happy Hacking!