Fedora People

I have not disappeared!

Posted by Suzanne Hillman (Outreachy) on February 23, 2017 10:27 PM

I promise. Got a nasty cold, so wasn’t making as much progress, but still here.

Brief catchup, since I’m in the middle of trying to get a bunch of wrap-up stuff done.

Usability tests with mockups

As I said a few posts ago, before I dove into CSS, I needed to do some usability tests with my mockups. I was unable to get any of my original set of interviewees to do this, and due to sickness on both my own and Mo’s part, plus interfering weather, I was unable to do any in-person usability testing.

I did get 5 people using my prototype, with a good spread among the tasks I had available.

As mentioned previously, my tasks included what we identified as the most immediately relevant aspects of the project, and the mockups I made for those.

The first page of each of my prototypes and their associated tasks are shown below, with a link to the prototypes themselves in the short description below each mockup.

Prototypes and tasks

<figure><figcaption>The Events Prototype</figcaption></figure>
  • You heard that there were going to be events in your local region (Southern California) in the next few months. Using this interface, find one of those upcoming events and show me how you would interact with the interface to find out when the event is, where it’s located, and who to contact about it, and tell me what you are thinking as you do it.
  • You recently attended an event, and are wondering if anyone has put anything interesting on the event page. Using the prototype, find a past event and visit the page, and tell me what you are thinking as you do it.
<figure><figcaption>The People Prototype</figcaption></figure>
  • You are going to be traveling to Berlin, Germany on a business trip and have a couple of extra days on the tail end of your journey to explore. You wonder if there is a Fedora community of locals that you could meet up with during the trip. Use the prototype to find Fedora folks near Berlin, and tell me what you are thinking as you do it.
  • FLOCK Los Angeles is tomorrow, but you cannot find the address of the venue or directions on how to get there. You need to figure it out before tomorrow so that you can arrange for a ride there. Find a Fedora community member in the Los Angeles area who is online right now to help, and tell me what you are thinking as you do it.
<figure><figcaption>Join us or Sign Up</figcaption></figure>
  • You live near Boston, MA, USA, and someone sent you a link to the Greater Boston Hub. You’ve never used Fedora Hubs before. You want to join the group to keep up to date with what they are doing. Using this prototype, join the group and tell me what you are thinking as you do it.
  • Create a new account on Fedora Hubs using the prototype, and tell me what you are thinking as you do it.
<figure><figcaption>Event Notifications</figcaption></figure>
  • We have a few different notifications relating to regional hubs and events. These would appear in your stream of information called “My Stream”. I would like you to take a look at these and tell me what you think of them. What do you think you can do here? What do you think they are for? Just look around and give a little narrative.
  • Now, please respond to the first event in the list, either ‘going’ or ‘maybe’. Talk to me about what you expect to be happening here and what you are doing.
  • Please return to the first page using the back button, and select the other option from the first event.

Initial reactions

After two usability sessions, it became pretty clear that any one individual should do one, not both, of the two tasks in the People, Events, and Join or Create prototypes. The paired tasks were much too similar, and doing both in a single session caused confusion.

Similarly, in the initial prototypes, the top-most bar looked too realistic, having been taken from a screenshot of a more visually designed page. To better determine the source of confusion caused by multiple search bars on the same page, I replaced that search bar with one from Balsamiq.

Some small issues with Balsamiq came up. First, MyBalsamiq did not show what items were linked on the prototypes my users saw. If I looked at them myself, I saw the appropriate markings.

Second, I was unable to have an entire line be clickable, which added some unnecessary confusion. As far as I could tell, this is simply not supported.

I strongly suspect that this experience would have been greatly improved by a note-taker. It’s taking a lot of time to go through the sessions after the fact, identify and gather the relevant information, and come up with a good way to summarize what I found. I also appreciate the experience and viewpoints of others when collecting and interpreting information.

Once I’m finished collecting the information from the usability sessions, I will discuss what I found with Mo, likely do more affinity analysis, and create some sort of summary of the results and of the entire experience. I’m not yet clear on what all of that will involve, and I suspect it’s not likely to be complete by the 6th. Frustrating, but that does happen.

Closing Activities

My internship will be coming to a close on March 6th. I would like to leave things in as clear a state as I can, both to allow others to continue my work, and to make it easier for me to pick it back up when I’m no longer able to work full-time on it.

In addition to the collating, analysis, and summary of the usability testing, I will be finishing up a number of other things. This includes summarizing what events/event planning needs to include and what ambassadors tend to do as resources, and making sure all the raw data (transcripts and recordings) are available to Mo.

I’m still pushing people to take the survey, and it looks like some work that Mo and I recently did improved our numbers significantly (from 28 responses to 121!). I’m not sure that I’ll have time, but I’m hoping to do some analysis of that, as well.

I haven’t had much chance to really understand how one goes from prototype to visual design, which is unfortunate. That is one area where I definitely need more experience! I may see about working more on that post-Outreachy.

xchat is being dropped from Fedora and replaced by hexchat

Posted by Fedora-Blog.de on February 23, 2017 07:01 PM

Debarshi Ray proposed today on the desktop mailing list to remove xchat from Fedora and replace it with hexchat.

Among the reasons he gives for this step is that the xchat project is apparently dead, since the last release was 7 years ago. In hexchat's favor, from his point of view, are the facts that hexchat has already been part of Fedora for 4 years, that its developer is part of both the Fedora and GNOME communities, and that Fedora's MATE spin already installs hexchat by default.

Unless there are good reasons for xchat to stay in the Fedora repositories, the days of the xchat packages there are probably numbered.

Some stats about our dist-git usage

Posted by pingou on February 23, 2017 06:52 PM

You may have heard that there are discussions going on around integrating continuous integration (CI) into our packaging work in Fedora.

With the question of how many resources we would need to offer CI in Fedora in mind, I tried to gather some stats about our dist-git usage.

Querying datagrepper was, as always, the way to go, although the amount of data in datagrepper is such that it is starting to be hard to query some topics (such as koji builds) or to go back very far in history.

Anyway, I went on and retrieved 87600 messages from datagrepper, covering 158 days.
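The kind of query involved looks roughly like this (a minimal sketch; the exact topic name and the paging and time-window parameters are assumptions on my part, see the datagrepper API documentation):

curl -s "https://apps.fedoraproject.org/datagrepper/raw?topic=org.fedoraproject.prod.git.receive&delta=1209600&rows_per_page=100&page=1" | python -m json.tool | head

Here delta is a time window in seconds, and the resulting JSON can be walked page by page with the page parameter.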

Here is the output:

Over 158 days (from 2016-09-19 to 2017-02-23)
   There was an average of 554.430379747 commits per day
   The median is of 418.0 commits per day
   The minimum number of commits was 51
   The maximum number of commits was 10029
Over 158 days (from 2016-09-19 to 2017-02-23)
   There was an average of 254.151898734 packages updated per day
   The median is of 119.5 package updated per day
   The minimum number of package updated was 20
   The maximum number of package updated was 9612

To be honest, I was expecting a little more. I'll try re-generating this data, maybe in another way, to see if that changes anything, but this gives us a first clue.

Factory 2, Sprint 10

Posted by Ralph Bean on February 23, 2017 05:19 PM

The Factory 2.0 team is back from Brno and DevConf. We had two talks to look out for: one on current Factory 2.0 work, and another, done in conjunction with the Modularity team, on Modularity itself. Since returning, we've been working with other teams to set our plans for F27 while simultaneously getting the Module Build Service ready for production for F26.

For the MBS we have all the pieces in staging, and we're now working with Patrick Uiterwijk (the Fedora Infra Security Officer) on an audit of the code. At the time of this writing, we have answers and patches to all of the issues. We'll be working with Patrick in the coming days to finish this out.

The broad strokes of our plans for F27 are described in the devconf talk. We have a draft of a more focused, bullet-list of subprojects slated for F27, which we'll be publishing in about a month after sorting out some CI details with Fedora Infrastructure, Fedora QA, and the Atomic folks.

mbs-reuse-component-builds, by mprahl

This demo shows a feature for the Module Build Service which reuses component builds from previous builds of the module if the component and the buildroot haven't changed.

<video autobuffer="autobuffer" controls="controls" height="350" width="600"> <source src="https://fedorapeople.org/groups/factory2/sprint-010//mprahl-mbs-reuse-component-builds.mp4"> </video>

module-lint-on-commit, by threebean

In this demo, I show the check_modulemd[1] check (developed by the base-runtime team) being automatically run in the online taskotron environment[2].

A commit to a module in dist-git is pushed and linting errors are produced in resultsdb (which in turn can be consumed by other systems).

<video autobuffer="autobuffer" controls="controls" height="350" width="600"> <source src="https://fedorapeople.org/groups/factory2/sprint-010//threebean-module-lint-on-commit.ogv"> </video>

pdc-upgrade, by threebean

Here we show the latest upgraded instance of the Product Definition Center in Fedora[1] with the new /unreleasedvariants/ endpoint[2] for the Module Build Service[3].

[1] - https://pdc.fedoraproject.org/ [2] - https://pdc.fedoraproject.org/rest_api/v1/unreleasedvariants/ [3] - https://fedoraproject.org/wiki/Changes/ModuleBuildService

<video autobuffer="autobuffer" controls="controls" height="350" width="600"> <source src="https://fedorapeople.org/groups/factory2/sprint-010//threebean-pdc-upgrade.ogv"> </video>

Additional PHP packages for RHSCL

Posted by Remi Collet on February 23, 2017 12:38 PM

Here is the current situation regarding official repositories providing PHP Software Collections for RHEL and CentOS users.

Since RHSCL 2.3 was released, RHEL users can install PHP 5.6 or PHP 7.0 without altering the base system, using the appropriate channel.

These packages are also available for CentOS users in the SCL repositories, managed by the SCLo SIG:

# yum --enablerepo=extras install centos-release-scl

So the CentOS project provides the infrastructure and hosting for 4 repositories:

  • centos-sclo-rh : same content as upstream RHSCL
  • centos-sclo-sclo : additional collections maintained by the community
  • centos-sclo-rh-testing : packages to be tested (RHSCL beta versions)
  • centos-sclo-sclo-testing : packages to be tested, maintained by the community

RHSCL users wishing to use these additional packages can configure the centos-sclo-sclo repository by using the  centos-release-scl Copr repository:

# cd /etc/yum.repos.d/
# wget https://copr.fedorainfracloud.org/coprs/rhscl/centos-release-scl/repo/epel-7/rhscl-centos-release-scl-epel-7.repo
# yum install centos-release-scl
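For example, once the repository is configured, installing and loading one of the extensions listed below could look like this (an illustrative sketch, assuming the rh-php70 collection is already installed):

# yum install sclo-php70-php-pecl-apcu
# scl enable rh-php70 'php -m' | grep -i apcu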

If you are interested in these packages, I recommend following the sclorg@redhat.com mailing list, where possible issues are discussed and changes are announced. Thanks for using this list for your feedback (on testing packages) or any other request. New contributors are also welcome.

Information and documentation on https://www.softwarecollections.org/.

Development is tracked on sclorg-distgit.

Here is the list of additional packages available in centos-sclo-sclo as of Feb 23rd, 2017 (where two versions are listed, the first is for the php56 collection and the second for the php70 collection):

Package name | Version | Distribution | Comments
sclo-php56-php-pecl-apcu / sclo-php70-php-pecl-apcu | 4.0.10 / 5.18 | 6, 7 |
sclo-php70-php-pecl-apcu-bc | 1.0.3 | 6, 7 |
sclo-php56-php-pecl-apfd / sclo-php70-php-pecl-apfd | 1.0.1 | 6, 7 |
sclo-php56-php-pecl-geoip / sclo-php70-php-pecl-geoip | 1.1.1 | 7 |
sclo-php56-php-pecl-http / sclo-php70-php-pecl-http | 2.5.6 / 3.0.1 | 6, 7 |
sclo-php56-php-pecl-igbinary / sclo-php70-php-pecl-igbinary | 2.0.1 | 6, 7 |
sclo-php56-php-pecl-imagick / sclo-php70-php-pecl-imagick | 3.4.3 | 6, 7 |
sclo-php56-php-pecl-lzf / sclo-php70-php-pecl-lzf | 1.6.5 | 6, 7 |
sclo-php70-php-pecl-memcached | 3.0.3 | 7 | testing
sclo-php56-php-pecl-mongodb / sclo-php70-php-pecl-mongodb | 1.1.10 | 6, 7 |
sclo-php70-php-pecl-msgpack | 2.0.2 | 6, 7 | testing
sclo-php56-php-pecl-propro / sclo-php70-php-pecl-propro | 1.0.2 / 2.0.1 | 6, 7 |
sclo-php56-php-pecl-raphf / sclo-php70-php-pecl-raphf | 1.1.2 / 2.0.0 | 6, 7 |
sclo-php56-php-pecl-redis / sclo-php70-php-pecl-redis | 3.1.1 | 6, 7 | testing
sclo-php56-php-pecl-selinux / sclo-php70-php-pecl-selinux | 0.4.1 | 6, 7 |
sclo-php56-php-pecl-solr2 / sclo-php70-php-pecl-solr2 | 2.4.0 | 6, 7 |
sclo-php56-php-pecl-uploadprogress / sclo-php70-php-pecl-uploadprogress | 1.0.3.1 | 6, 7 |
sclo-php56-php-pecl-uuid / sclo-php70-php-pecl-uuid | 1.0.4 | 6, 7 |
sclo-php56-php-pecl-xattr / sclo-php70-php-pecl-xattr | 1.3.0 | 6, 7 |
sclo-php70-php-pecl-xdebug | 2.4.1 | 6, 7 |

For now, all dependencies must be available in the base repository (EPEL is excluded), which explains why some extensions cannot be added.

3 mind mapping tools in Fedora

Posted by Fedora Magazine on February 23, 2017 08:00 AM

In a previous Magazine article, we covered tracking your time and tasks. In that article we mentioned some mind mapping tools. Now we’ll cover three mind mapping apps you can use in Fedora. You can use these tools to generate and manipulate maps that show your thoughts. Mind maps can help you to improve your creativity and effectiveness. You can use them for time management, to organize tasks, to overview complex contexts, to sort your ideas, and more.

Labyrinth


Selecting Labyrinth in GNOME Shell

Labyrinth may not be intuitive at first. However, it is intended to be lightweight. It is also well-integrated with GNOME and runs smoothly. After you become familiar with the way it works, you’ll be able to create simple mind maps and save them as maps or images.


labyrinth map

Installation is easy using dnf along with the sudo command:

$ sudo dnf install labyrinth

When you start Labyrinth, the first screen is not the map itself, but rather a project manager:


Labyrinth welcome screen

Then, click New and start drawing a diagram of what’s on your mind. Features in Labyrinth include:

  • Scaling and scrollable canvas (infinite sized maps!)
  • Support for text attributes (bold, italics, underline and font selection)
  • Arrow navigation of thoughts
  • Foreground and background colouring of nodes
  • Import and export labyrinth files for maps in the form of tarballs
  • SVG export
  • PDF export
  • Save browser window state across instances (UNIX/Linux build only)
  • Selection using bounding box
  • Searching in the browser window

Labyrinth is made with Python, GTK+, and Cairo so it works smoothly in GTK-based desktops like GNOME, MATE, and Cinnamon. It’s licensed under the GPLv2.

View Your Mind (VYM)

VYM is another useful mind mapping tool packaged in Fedora. It’s a mature application, with a lot of features included. It’s easy to use and intuitive. Furthermore, the export tool is pretty powerful, and allows you to export to numerous formats, including HTML or LibreOffice.


Selecting VYM in the GNOME Shell

The main screen shows the first map, so you can start work immediately. VYM has keyboard shortcuts that make your work easier. It also includes icons and signs to make your mind map more expressive.


VYM welcome screen

Another interesting thing about VYM is that the project website is entirely made in VYM itself. Check it out here. The list of features is quite long, and includes:

  • Import of Freemind maps
  • Function to export from tomboy to vym
  • Export to CSV spreadsheet
  • Autosave
  • Editor for scripts
  • Syntax highlighting for editor
  • Export of map to HTML or XML

A quick example:


Example VYM mindmap

To install it, use this command:

$ sudo dnf install vym

VYM is written in C++ using the Qt framework. It is licensed under the GPL, with an exception allowing it to be ported to Microsoft Windows systems.

FreeMind

FreeMind is a premier free mind-mapping application written in Java. It aims to be a high productivity tool. Its main features include:

  • Ability to follow HTML links stored in the nodes
  • Folding, an essential property of FreeMind
  • Fast one-click navigation
  • Smart Drag and Drop, including copying nodes or node styles, dragging and dropping of multiple selected nodes, and dropping texts or a list of files from external sources
  • Smart copying and pasting from and into the application
  • Export to HTML
  • Find facility, which shows found items one by one as you do Find next, and the map is unfolded only for the current item
  • Editing of long multiline nodes
  • Decorating nodes with built-in icons, colors and different fonts

Selecting FreeMind in GNOME Shell

The main screen allows you to work immediately. Like the other tools featured here, it’s easy, has keyboard shortcuts to speed your work, and exports to numerous formats.


Example freemind map

To install it, run this command:

$ sudo dnf install freemind

FreeMind is written in Java and licensed under the GPLv3.

Start mapping your mind

The tools you need to organize, plan, and get clarity on your thoughts are in Fedora. So what are you waiting for? Start mind mapping!

gparted 0.28.1

Posted by nonamedotc on February 23, 2017 02:12 AM
A new version of gparted was released recently and I have updated the Fedora package to the latest version - 0.28.1.

This version brings a rather exciting (at least, to me) update - ability to copy and resize already open LUKS filesystems.

For full details, see gparted release notes of both 0.28.0 and 0.28.1

0.28.0 - release notes (0.28.0)

0.28.1 - release notes (0.28.1)


This update is, at the moment, only pushed for Fedora 25. I will test this and submit an update for EPEL-7 in the next few days.


Obligatory screenshots (images omitted).

Rawhide notes from the trail, 2017-02-22

Posted by Kevin Fenzi on February 23, 2017 12:02 AM

Some recent gotchas in rawhide:

  • firefox doesn’t seem to compile correctly with gcc7 (and the optimization levels Fedora uses by default). The current rawhide version will install and run fine, but looks horrible. As a workaround you can install a flatpak, install a binary version from upstream, or downgrade to firefox-51.0.1-2.fc26 from koji (a sketch of one way to do that follows this list). This is tracked in: https://bugzilla.redhat.com/show_bug.cgi?id=1422532
  • A few weeks ago, python2-jinja2 was updated to 1.9.4, then 1.9.5. Unfortunately 1.9.4 broke ansible templating, and not all of the problems were fixed by 1.9.5. As a consequence, ansible hasn’t been runnable for the last few weeks. Until today: I pushed ansible-2.2.2.0-rc1 into rawhide. This isn’t the final 2.2.2.0, but it hopefully fixes the jinja compatibility issues and gets rawhide ansible users back on track.
  • After my last post about composes failing due to an unsigned package, we ran into 3 other issues we had to fix to actually get a compose working. First, an nss update caused lorax to break; it turns out the mock chroot used didn’t have /dev/urandom, and nss now requires that to be there. Next was the rdma-core package breaking things. It was untagged a while back to fix that, but then the mass rebuild rebuilt it and it got pushed out again. Finally, policycoreutils-python started pulling in a ton more deps, which broke the minimal installer chroot.
  • Even now that we have composes, livecds are not working due to an anaconda bug: https://bugzilla.redhat.com/show_bug.cgi?id=1425827 Hopefully we will have a fix for that soon and livemedia will be back.
  • There was a pungi bug preventing the ostree composes from working; there’s a proposed fix for that one.
  • i386 images and media are all failing due to a weird dep solving issue where it says “will not install src.rpm”. https://bugzilla.redhat.com/show_bug.cgi?id=1416699
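For those who want to try the firefox downgrade mentioned in the first item, one way to do it might look like this (a rough sketch; adjust the arch and exact NVR as needed):

koji download-build --arch=x86_64 firefox-51.0.1-2.fc26
sudo rpm -Uvh --oldpackage firefox-51.0.1-2.fc26.x86_64.rpm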

Lots of bugs, but we are moving forward on quashing them now. I am hoping we have some good composes later this week and then we can try and keep them that way.

Pine64 – ordered

Posted by Richard W.M. Jones on February 22, 2017 10:29 PM

I ordered the 2 GB Pine64 64 bit ARM board. It’s extremely constrained compared to the normal 64 bit ARM boards I use, but it’s good that there’s one which may be supported by upstream Linux in the near future.

Total cost for the board + the wifi accessory + postage to the UK is $50.98 (£42.36).

Let’s see how it goes …


They also have this strange SO-DIMM form-factor co-processor. I’m not sure what to make of it.


Episode 34 - Bathing in Ebola Virus

Posted by Open Source Security Podcast on February 22, 2017 09:26 PM
Josh and Kurt discuss RSA, the cryptographer's panel and of course, AI.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/309062655&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes


A logo for cri-o

Posted by Máirín Duffy on February 22, 2017 08:09 PM

Dan Walsh recently asked me if I could come up with a logo for a project he is involved with – cri-o.

The “cri” of cri-o stands for Container Runtime Interface. The CRI is a different project – the CRI is an API between Kubernetes (container orchestration) and various container runtimes. cri-o is a runtime – like rkt or Docker – that can run containers that are compliant with OCI (Open Containers Initiative) specification. (Some more info on this is here.)

Dan and Antonio suggested a couple of ideas at the outset:

  • Since the project means to connect to Kubernetes via the CRI, it might be neat to have some kind of nod to Kubernetes. Kubernetes’ logo is a nautical one (the wheel of a ship, with 7 spokes.)
  • If you say cri-o out loud, it kind of sounds like cryo, e.g., icy-cool like Mr. Freeze from Batman!
  • If we want to go for a mascot, a mammoth might be a neat one (from an icy time.)

So I had two initial ideas, riffing off of those:

  1. I tried to think of something nautical and frozen that might relate to Kubernetes in a reasonable way given what cri-o actually does. I kept coming back to icebergs, but they don’t relate to ships’ steering in the same way, and worse, I think they could have a bad connotation whether it’s around stranding polar bears, melting and drowning us all, or the Titanic.
  2. Better idea – not nautical, yet it related to the Kubernetes logo in a way. I was thinking a snowflake might be an interesting representation – it could have 7 spokes like the Kubernetes wheel. It relates a bit in that snowflakes are a composition of a lot of little ice crystals (containers), and kubernetes would place them on the runtime (cri-o) in a formation that made the most sense, forming something beautiful 🙂 (the snowflake.)

I abandoned the iceberg idea and went through a lot of iterations of different snowflake shapes – there are so many ways to make a snowflake! I used the cloning feature in Inkscape to set up the first spoke, then cloned it to the other 6 spokes. I was able to tweak the shapes in the first spoke and have it affect all spokes simultaneously. (It’s a neat trick I should video blog one of these days.)

This is what I came up with:

3 versions of the crio logo with different color treatments - one on a white background, one on a flat blue background, one on a blue gradient background. on the left is a 7-spoke snowflake constructed from thin lines and surrounded by a 7-sided polygon, on the right is the logotype 'cri-o'

I ended up on a pretty simple snowflake – I think it needs to be readable at small sizes, and while you can come up with some beautiful snowflake compositions in Inkscape, it’s easy to make snowflakes that are too elaborate and detailed to work well at a small size. The challenge was clarity at a small size as well as readability as a snowflake. The narrow-line drawing style seems to be pretty popular these days too.

The snowflake shape is encased in a 7-sided polygon (similar to the Kubernetes logo); my thinking being that the shape and the narrowness of the line kind of make it look like the snowflake is encased in ice (along the lines of the initial cryo idea).

The dark blue color is a nod to the nautical theme; the bright blue highlight color is a nod to the cryo idea.

Completely symbolic, and maybe not in a clear / rational way, but I colored a little piece of each snowflake spoke using a blue highlight color, trying to make it look like those are individual pieces of the snowflake structure (eg the crystals == containers idea) getting deployed to create the larger snowflake.

Anyway! That is an idea for the cri-o logo. What do you think? Does it work for you? Do you have other, better ideas?

Install and configure DKIM with Postfix on RHEL7

Posted by Luc de Louw on February 22, 2017 08:30 AM

Introduction: DKIM (Domain Keys Identified Mail) is a measure against email spoofing, phishing, and spam. It's easy to implement, as you will learn in this article. DKIM signs emails on the outgoing SMTP server; the receiving SMTP server can verify the signature by looking up the mail._domainkey TXT DNS record of the respective domain […]

The post Install and configure DKIM with Postfix on RHEL7 appeared first on Luc de Louw's Blog.

F25-20170221 Updated ISOs available!!

Posted by Corey ' Linuxmodder' Sheldon on February 22, 2017 12:46 AM

It is with great pleasure that I announce that the community-run respin team has yet another updated ISO round. This round carries the 4.9.10-200 kernel along with 761 MB of updates (on average; some desktop environments more, some less) since the Gold release.

Torrent files are available at the same link as usual, alongside the .iso files.

Below are the contents of both CHECKSUM512-20170221 and HASHSUM512-20170221 (the latter contains torrent hashes):

cat CHECKSUM512-20170221

(Clearsigned with 0x36BC4987EE525B60)

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

1ed1b0468e0fc146bb7b92d90d09fa995a23acc515f111025c514317875add4a096795edaae7458b37f2e015faa2fdd8a8003d985afc7957a6c76fde110e461a F25-CINN-x86_64-20170221.iso
0408455ddb5c1d18e10c4bc2ebbb4887795ee1c6083a3c4e7bd5d6229930bf393a55bab0f19f4226b612aebe4586d728ec2ee5dfc8b49bc53a5667e03ccefe67 F25-KDE-x86_64-20170221.iso
a97ab34bed3b1f93b92a45fd65188640a5ce4809535c05cb1d30b86ece8adfad7dad83b0f4e8710698803c6cf3e7a9fd524890eafc1a59b14780dc0071d062bd F25-LXDE-x86_64-20170221.iso
2f296829e648da98201da8a6c6cdf3362753d22194764b05e1ef4750906f052ecfbf487f2411e1652fba00f48c0ddedaddf01d1144cc02ab3cbd233dc92a5b47 F25-MATE-x86_64-20170221.iso
03c795c09af05a00b771ddf0fbba85240a898197388ef49998a400833f33f16cd83202147a12990ba1617adde87e2c012e445ed9218a516f24a79c225b5a7ee8 F25-SOAS-x86_64-20170221.iso
3efa81950b7647af736fafc8cf9a02559dc99cd94fc130250286c7712310a279faeedbf353ec813a82bef80193502160d36fb50ad7dc8ce86fe0eadec68667d9 F25-source-20170221.iso
b11ca8005273251eff6853c01ffcf04c84abe44aef3716c5ec6355247b972f5813323d10cfb70911a05c7b0abdd00497937cd52008b205ff81f1186bcd56feef F25-WORK-x86_64-20170221.iso
52ef6db22c6c17df14e19eab50dc76459084d24f65f47e9ad7fc9de9cf6f6c97f5e47822a46f79354eb8328476084fc8ddb37ea9062a373ca4654327cef3fdbd F25-XFCE-x86_64-20170221.iso
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQKbBAEBCgCFBQJYrLnYXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w
ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXQzMjI0QkQyODVERUEwNEE3QzcyRTY4QTcz
NkJDNDk4N0VFNTI1QjYwHhxsaW51eG1vZGRlckBmZWRvcmFwcm9qZWN0Lm9yZwAK
CRA2vEmH7lJbYCQfD/9Rj8bMkhRaO1RyutMzLaeKvB1w3J6q51InpJ5TXJqKfbrD
U54p1BIhMQiQYTr6NgNnC4J3EcHPrGxPM6cOmnoeXTiBwaeTHh8kVqeMYRfxK9qO
mf7PR5LZpjF5dBB7zefRsOUTK3cd+iEj/RKuYKwbzjRducvU4sB/dZNqZ9avTweo
X5J3M0aX2ZpM9Qgq56+VzC0oIWo27uZEy75WZ8rlRnzN1Ex8Fk/8wTxC7UjscOa8
y6w3LlQQYeAVqI6npEHf9lM3Q6X52LLNapowcenDcPeF80Nf4AyiJmZRnV3L69DO
TEIijrP6cAGU7qz7hi8Jw3r8TaaIqK7rSWEOmMfUS467sXRSYYkZaRHXMzMKLyqf
xrdoxwS0M7xV4Mm92RJ+MdcruqdTF644vz1R1KR13cuW6QwGGTNdgovbC+2JmsHh
bj2hCVMlwfM3eqxWwsXFX2jRT1VCsNXJmeceTDDQlbEW5mKTjgN3ILVwrWbLUKMK
S1nJ1tXhap+6s/mRWDr4gAcCqJQekqtqmL2o8iTjMRwts7cgK1gmYcQtnwry48bb
YteBQ5o8BQwzqY11Nn1tXX30wrieanwLqt8Vf0eUiUTdEK5Bb8bNgAQcMH1M5qpT
L5eQzEQCmegCfEWV6B8QNKwdkXAvAtsygSWU6Mwo86fH1maLufUWPC1jHbWYZw==
=chRL
-----END PGP SIGNATURE-----

cat HASHSUM512-20170221

(Clearsigned with 0x36BC4987EE525B60)

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

7496221ac06af3728a86661870feff124a028a30 – F25-CINN-x86_64-20170221.iso
5d0d3352288388505643e129ce6316300abe588c – F25-KDE-x86_64-20170221.iso
822642bfc94131a318008e5ae56ff05d7231b969 – F25-LXDE-x86_64-20170221.iso
2219277e39190cf32736c28ce5016a5bb36e33d3 – F25-MATE-x86_64-20170221.iso
152a11df6332acdaa5165b009dc6f737c39251d3 – F25-WORK-x86_64-20170221.iso
c14dc9b17c30c5d21de93c206be8e90e2ccf110a – F25-SOAS-x86_64-20170221.iso
8b9b0b4c69fc83d50a53ca5930939fcfb8d9c435 – F25-XFCE-x86_64-20170221.iso
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQKbBAEBCgCFBQJYrLk/XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w
ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXQzMjI0QkQyODVERUEwNEE3QzcyRTY4QTcz
NkJDNDk4N0VFNTI1QjYwHhxsaW51eG1vZGRlckBmZWRvcmFwcm9qZWN0Lm9yZwAK
CRA2vEmH7lJbYNwTD/9r9D0F3MtFuIHzkG3EM1e6xoXNGoZntcen5fHRzwPYp7X0
TzN3CfeH5xpGSOy+/IOCsZH6OC45CsnlmDkv6piLDLhU1eivDME7aNN5BNi+nYHU
fCmB4ifH6MK4Ks3ldpXyugoPBKAnc4d175D9Gf2fp7WU4k5Paql5vREF/3SR8KxK
T8o+LvNFoPr2eJddhzGKqg4Kh+TC+f/DZvyFScM0JmQjQ0O33Apz2VxNjk53J3/C
b5+SedoVk46ee+YQfkdhL3YYNDpoHkG7v/TmtvnMluK2GC+50rjZclrmKe+HrDt/
bSe/Xpwj1rXeh3rWlI70DTXUJgqh8XdoMUCbsrlM0GUDaagnVdIpkWuKDP6G+4/8
SV/wtpB85NJj3M1g1Sin2tFMKx6+e1zzWN3O6q0F7OXFFrNXTZz5BBxjXXc6NAvt
Y2Fv/7kcKSRf9ImhsO2u2ildZWHGS2qlgsPuSc8uO1mzE8eJfxb12XQFD0fKvesI
JGTMw2QOv8WszZsvixF4nbw0r+xJJkZbPhuizS6NiFLhGi0GkLU98+8Ghp3XzRk0
J33MeSs33Y2qXmr7ZvhhBZxrTT3NJz4kRDwepEItO1lc8Fu48TH8nVFPAghjOs95
X5/AetH+/1zQ1fVlpXpkit3J8ywxDHPPidYSYnjIa7iDtyaZTuBTGRaELplvYA==
=6UGR
-----END PGP SIGNATURE-----


Filed under: Community, F25, F25 Torrents, Fedora, Volunteer

Our Bootloader Problem

Posted by Nathaniel McCallum on February 21, 2017 11:05 PM

GRUB, it is time we broke up. It’s not you, it’s me. Okay, it’s you. The last 15+ years have some great (read: painful) memories. But it is time to call it quits.

Red Hat Linux (not RHEL) deprecated LILO for version 9 (PDF; hat tip: Spot). This means that Fedora has used GRUB as its bootloader since the very first release: Fedora Core 1.

GRUB was designed for a world where bootloaders had to locate a Linux kernel on a filesystem. This meant it needed support for all the filesystems anyone might conceivably use. It was also built for a world where dual-booting meant having a bootloader implemented menu to choose between operating systems.

The UEFI world we live in today looks nothing like this. UEFI requires support for a standard filesystem. This filesystem, which for all intents and purposes duplicates the contents of /boot, is required on every Linux system which boots UEFI. So UEFI loads the bootloader from the UEFI partition and then the bootloader loads the kernel from the /boot partition.

Did you know that UEFI can just boot the kernel directly? It can!

The situation, however, is much worse than just duplicated effort. With the exception of Apple hardware, practically all UEFI implementations ship with Secure Boot and a TPM enabled by default. Only appropriately signed UEFI code will be run. This means we now introduce a shim which is signed. This, in turn, loads GRUB from the UEFI partition.

This means that our boot process now looks like this:

  • UEFI filesystem
    1. shim
    2. GRUB
  • /boot filesystem
    1. Linux

It gets worse. Microsoft OEMs are now enabling BitLocker by default. BitLocker seals (encrypts) the Windows partition to the TPM PCRs. This means that if the boot process changes (and you have no backup of the key), you can’t decrypt your data. So remember that great boot menu that GRUB provided so we can dual-boot with Windows? It can never work, cryptographically.

The user experience of this process is particularly painful. Users who manage to get Fedora installed will see a nice GRUB menu entry for Windows. But if they select it, they are immediately greeted with a terrifying message telling them that the boot configuration has changed and their encrypted data is inaccessible.

To recap, where Secure Boot is enabled (pretty much all Intel hardware), we must use the boot menu provided by UEFI. If we don’t, the PCRs of the TPM have unknown hashes and anything sealed to the boot state will fail to decrypt.

The good news is that Intel provides a reference implementation of UEFI, and it includes pretty much everything we’d ever need. This means that most vendors get it pretty much correct as well. OEMs are even using these facilities for their own (hidden) recovery partitions.

So why not just have UEFI boot the kernel directly? There are still some drawbacks to this approach.

First, it requires signing every build of the kernel. This is definitely undesirable since kernels are updated pretty regularly.

Second, every kernel upgrade would mean a write to UEFI NVRAM. There are some concerns about the longevity of the hardware under such frequent UEFI writes.

Third, it exposes kernels as a menu option in UEFI. This menu typically contains operating systems, not individual kernels, which results in a poor user experience. Most users don’t need to care about what kernel they boot. There should be a bootloader which loads the most recently installed kernel and falls back to older kernels if the new kernels fail to boot. All of this can be done without a menu (unless the user presses a key).

Fortunately, systemd already implements precisely such a bootloader. Previously, this bootloader was called gummiboot. But it has since been merged into the systemd repository as systemd-boot.

With systemd-boot, our boot process can look like this:

  • UEFI filesystem
    1. shim
    2. systemd-boot
    3. Linux

It would even be possible (though, not necessarily desirable) to sign systemd-boot directly and get rid of the shim.

In short, we need to stop trying to make GRUB work in our current context and switch to something designed specifically for the needs of our modern systems. We already ship this code in systemd. Further, systemd already ships a tool for managing the bootloader. We just need to enable it in Anaconda and test it.

Who’s with me!?

P.S. - It would be very helpful if we could get some good documentation on manually migrating from GRUB to systemd-boot. This would at least enable the testing of this setup by brave users.
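As a very rough starting point for such documentation, the manual steps might look something like this (a sketch only; the paths, kernel version, and kernel options are illustrative, and systemd-boot expects the kernel and initramfs to live on the ESP/$BOOT partition):

# install systemd-boot into the ESP and register it with the firmware
sudo bootctl install

# then create a Boot Loader Specification entry, e.g. loader/entries/fedora.conf on the ESP:
title   Fedora
linux   /vmlinuz-4.9.10-200.fc25.x86_64
initrd  /initramfs-4.9.10-200.fc25.x86_64.img
options root=UUID=<your-root-uuid> ro rhgb quiet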

Help Fedora Hubs by taking this survey

Posted by Máirín Duffy on February 21, 2017 10:10 PM

Here’s a quick and easy way to help Fedora Hubs!

Our Outreachy intern, Suzanne Hillman, has put together a survey about Fedora contributors’ usage of social media to help us prioritize potential future integration with various social media platforms with Fedora Hubs. If you’d like your social media hangouts of choice to be considered for integration, please take the survey!

Take the survey now!

F25-20170221 Updated Lives Released

Posted by Ben Williams on February 21, 2017 08:27 PM

I am happy to announce new F25-20170221 Updated Lives.

(with Kernel 4.9.10)

With F25 we are now using Livemedia-creator to build the updated lives.

Also from now on we will only be releasing updated lives on even kernel releases.

To build your own please look at  https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

This new build of the F25 Updated Lives will save you 695M of updates after install on Workstation.

As always the isos can be found at http://tinyurl.com/Live-respins2


GNOME hackathon in Brno

Posted by Jiri Eischmann on February 21, 2017 01:06 PM

Last week, we had a presentation on Google Summer of Code and Outreachy at Brno University of Technology. Around 80 students attended, which was a pretty good success considering it was not part of any course. It was a surprise for the uni people as well, because the room they booked was only for 60 people.

The main reason why we did the presentation is that there have been very few students in Brno who participated in such programs, even though the open source community is pretty big at local universities due to the presence of Red Hat. When we asked students who had heard of Google Summer of Code or Outreachy before, only two raised their hands. That was even fewer than we expected.

Shortly before the presentation, we discovered that the money reward for successfully finishing Google Summer of Code is no longer the same globally. For the Czech Republic, it’s now $3600 instead of $5500: considerably less, but still fairly attractive to local students.

As a follow-up to this presentation, we organized a GNOME hackathon in the Red Hat lab at BUT. Carlos Soriano was in charge of it, with me, Felipe Borges, and Debarshi Ray helping him. Carlos prepared images for VirtualBox and KVM with a ready-made development environment that every student was supposed to download. People had to work in a virtual machine, but they didn’t have to spend time configuring and compiling everything, and it ensured that everyone had the same environment.

Around 12 students showed up, which I think was a good turnout. 3 of them were women, which is definitely a higher percentage than the average at the uni. First Carlos told them to read the GNOME Newcomers guide and pick an app they’d like to contribute to. Then he created a dummy bug and showed students the whole process of fixing it, from searching the code to the patch review. Then they were supposed to find some easy bug in the app of their choice and fix it.

Almost all students picked apps written in C, which is not so surprising because that’s the language they learn primarily at the university. Only one picked GNOME Music, written in Python. The hackathon lasted for 5 hours; all students were busy the whole time, and almost everyone submitted some fix in the end.

Carlos is planning to do a follow-up with those who want to continue, probably before our (ir)regular Linux Desktop Meetup next week. Let’s see if some of them will make it to Google Summer of Code or Outreachy and even become long-term contributors to GNOME later on. It was the first time we actually got students to dip their fingers into the code. At all the events before, we had presentations on how they can contribute and pointed them to the docs to study at home, but the response was minimal. Maybe such a hackathon, where you help students in person to make the first steps, is the right approach to break through the barrier.

I’m pretty sure Carlos will also blog about his findings, and it will be much more insightful, since he spent a lot of time preparing the hackathon and was the one who talked to the students the most.

<figure class="wp-caption alignnone" data-shortcode="caption" id="attachment_1288" style="width: 3840px">img_-dhuozj<figcaption class="wp-caption-text">Carlos showing students how to fix a bug in GNOME</figcaption></figure>

 


Fedora 25: The perf linux tool.

Posted by mythcat on February 21, 2017 11:58 AM
If you want a good tool to test performance under the Fedora 25 distro, or Linux in general, then the perf tool is a great choice.
You can read a full tutorial on the perf wiki, which will give you a good impression of this utility.
The main problem comes when you need to understand why we have to use this utility on Linux.

Intro

A trivial use of the top command will show you the necessary information about your Linux system.
If you look closely you will notice something like: load average: 0.09, 0.05, 0.01
The three numbers represent averages over progressively longer periods of time (one, five, and fifteen minute averages). This means that lower numbers are better and higher numbers represent a problem or an overloaded machine. As for multicore and multiprocessor machines, the rule is simple: the total number of cores is what matters, regardless of how many physical processors those cores are spread across.

Let's use this command. First I will record some data about my CPU:
[mythcat@localhost ~]$ perf record -e cpu-clock -ag 
Error:
You may not have permission to collect system-wide stats.

Consider tweaking /proc/sys/kernel/perf_event_paranoid,
which controls use of the performance events system by
unprivileged users (without CAP_SYS_ADMIN).

The current value is 2:

-1: Allow use of (almost) all events by all users
>= 0: Disallow raw tracepoint access by users without CAP_IOC_LOCK
>= 1: Disallow CPU event access by users without CAP_SYS_ADMIN
>= 2: Disallow kernel profiling by users without CAP_SYS_ADMIN
[mythcat@localhost ~]$ su
Password:
[root@localhost mythcat]# perf record -e cpu-clock -ag
^C[ perf record: Woken up 17 times to write data ]
[ perf record: Captured and wrote 5.409 MB perf.data (38518 samples) ]

[root@localhost mythcat]# ls -l perf.data
-rw-------. 1 mythcat mythcat 5683180 Feb 21 13:24 perf.data
You can see that the perf tool works with the root account and the resulting file is owned by the default user. Let's show this data using the default user (mythcat) and the perf tool:
[mythcat@localhost ~]$ perf report
The result is displayed as an interactive report. You can see the full list of available events by using this command:
[mythcat@localhost ~]$ perf list 

List of pre-defined events (to be used in -e):

branch-instructions OR branches [Hardware event]
branch-misses [Hardware event]
bus-cycles [Hardware event]
cache-misses [Hardware event]
cache-references [Hardware event]
cpu-cycles OR cycles [Hardware event]
instructions [Hardware event]
ref-cycles [Hardware event]

alignment-faults [Software event]
bpf-output [Software event]
context-switches OR cs [Software event]
cpu-clock [Software event]
cpu-migrations OR migrations [Software event]
dummy [Software event]
emulation-faults [Software event]
major-faults [Software event]
minor-faults [Software event]
page-faults OR faults [Software event]
task-clock [Software event]
Let's look at one event from this list that will tell us how Fedora is working:
[root@localhost mythcat]# perf top -e minor-faults -ns comm
I used comm as the sort key (available keys: pid, comm, dso, symbol, parent, cpu, socket, srcline, weight, local_weight); for the -ns arguments, see the perf manual. This is the simplest way to see how pids start and stop and how they interact with the operating system in real time. Another way to use the perf command is to analyze scheduler properties from within 'perf sched' alone, which currently has five sub-commands:

perf sched record # low-overhead recording of arbitrary workloads
perf sched latency # output per task latency metrics
perf sched map # show summary/map of context-switching
perf sched trace # output finegrained trace
perf sched replay # replay a captured workload using simulated threads
Try this example to capture a trace and then check latencies (the latency report analyzes the trace in the perf.data record file):
perf sched record sleep 10     # record full system activity for 10 seconds
perf sched latency --sort max # report latencies sorted by max
You can also make a map of scheduling events by using this command:
[root@localhost mythcat]# perf sched record 
This tutorial shows you only a small fraction of the ways to use the perf command.

Using Wallabag to manage your online reading list

Posted by Ankur Sinha "FranciscoD" on February 20, 2017 08:52 PM

I always have quite a bit on my pending list to read - academic papers, blogs, planets, and the sort. Usually, when I go through the planets, such as the Fedora, GNOME or the two neuroscience planets I use - neuroscience, neuroscientists, I don't have the time to read all the articles right then. I used to either bookmark links, or note them down somewhere to read later. One day, though, I ran into Pocket, which lets you save the article to read later and makes it available to you on multiple devices. It's extremely convenient.

Of course, the one issue with Pocket is that it isn't Free software. So, like I do, I went looking for an alternative. After a few hours, I ran into Wallabag on Github. It's written in PHP, and is licensed under the MIT license. It's quite easy to deploy, and there's a Gitter channel where you can get some help too.

You either enter the URL in the Wallabag page manually, or you can use the Firefox/Chrome/Opera addon - it lets you right-click a page or a link and "Wallabag.it!". There's also a bookmarklet, which you can use with Pentadactyl, for example:

# Add to ~/.pentadactylrc
command! wallabagit -description "Add to wallabag" open javascript:(function(){var%20url=location.href||url;var%20wllbg=window.open('https://app.wallabag.it/bookmarklet?url='%20+%20encodeURI(url),'_blank');})();

Wallabag fetches the text of the page and stores it for you so that you can read it later. You can even organise your saved pages with tags and the sort.

Screenshot showing the Wallabag main page

Here's a page that I'm trying to read later, for example:

Screenshot showing an article in Wallabag

I played with a deployment, but decided not to deploy and maintain an instance myself. Instead, I signed up for the instance Wallabag have here - Wallabag.it. It's quite cheap - they have an offer going too at the moment - only 9€ for an entire year!

Wallabag uses the FiveFilters Full Text RSS tool to extract the text and other data from web pages. Some websites require special instructions to tell the tool what information needs to be extracted - this tends to happen with a few academic websites. There's a repository of such config files here. So, if you do run into a website that isn't rendering properly, you can troubleshoot the issue and submit a config file too.
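To give an idea of what these look like, here is a hypothetical site config for an imaginary example.org (the directive names are the standard FiveFilters ones; the XPath expressions and URL are made up):

title: //h1[@class='article-title']
body: //div[@id='article-content']
strip: //div[@class='related-links']
test_url: http://example.org/some-article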

Whether you decide to deploy an instance yourself, or use Wallabag.it, I think it's a really useful tool to have. Of course, don't forget to install the app on your phone too!

Happy reading!

Running amdgpu driver on AMD hybrid laptop

Posted by Luya Tshimbalanga on February 20, 2017 08:18 PM
I am running the ASUS X550ZE laptop on the latest Linux 4.9 series kernel from the Mystro256 Copr repository, which is based on AMD contributor Alexander Deucher's freedesktop branch, within Fedora Design Suite 25.

Hybrid support has improved now that the dedicated graphics card, the AMD R5 M230 Jet Pro (aka Hainan), is functional with the amdgpu module enabled for both Sea Islands (CIK) and Southern Islands (SI) video cards, thanks to the hard work of the AMD developers. The latter was important in order to support as many GCN (Graphics Core Next) chipsets as possible and to allow a future run on the open source version of Vulkan, RADV (short for Radeon Vulkan).

The power manager is functional, and further optimization will be required in terms of parity with the Microsoft Windows version. According to the Freedesktop Radeon feature list, almost all features are implemented in the driver; hybrid graphics cards run fine, and only OpenGL needs more work. As the desktop ecosystem in the freedesktop environment modernizes to support the Wayland protocol, applications remain the main issue.

I thank Mystro256 and the AMD contributors for their hard work in both Linux kernel and Mesa.


Atom Installer

Posted by Tummala Dhanvi on February 20, 2017 07:40 PM

tl;dr: a tricky/hacky way of installing Atom and updating it automatically with system updates

Hi Guys,

One thing that I miss about using Ubuntu is PPAs. There are lots of PPAs in Ubuntu, and you can hack around and install all types of software required for your usage.

On the Fedora side of the world there are Copr repos, but they don't have as many repos as Ubuntu, and you can't build non-free software there (don't get me wrong here, I love FREEdom software, but I couldn't resist using some beautiful non-free applications such as Sublime). I am creating a workaround for this by using shell scripts which are open source (CC0) but which, when executed, install non-free software on your system.

As a first step, I have created a simple script for Atom and packaged it on Copr: https://copr.fedorainfracloud.org/coprs/dhanvi/atom-installer/

Enable the Copr repo and install atom-installer:

sudo dnf copr enable dhanvi/atom-installer

sudo dnf install atom-installer

Troubleshooting: Give it some time depending on your bandwidth, as it needs to download the Atom RPM in the background and install it. If you still can't see Atom after, say, 30 minutes or an hour, just run the command below once and it should be installed:

atom-installer
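For the curious, the general idea behind a wrapper like this is straightforward. Below is a hypothetical sketch of what such an installer script could do; it is not the actual script shipped in the Copr package, and the release URL is purely illustrative:

#!/bin/bash
# Hypothetical sketch of an "atom-installer" style wrapper:
# fetch the non-free Atom RPM from upstream and install it with dnf.
# The URL below is illustrative only -- check the upstream releases page.
set -euo pipefail

ATOM_RPM_URL="https://github.com/atom/atom/releases/download/v1.14.3/atom.x86_64.rpm"
TMP_RPM="$(mktemp --suffix=.rpm)"

curl -L -o "$TMP_RPM" "$ATOM_RPM_URL"   # download the RPM
sudo dnf install -y "$TMP_RPM"          # install it like any local package
rm -f "$TMP_RPM"                        # clean up the temporary file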

 

Removing Atom and my repo:

sudo dnf remove atom atom-installer

sudo dnf copr disable dhanvi/atom-installer

 

This is just a workaround for the non-availability of Atom in the official Fedora repos, and is not exactly the best way of installing Atom, but it works fine for now!

I will update the repo https://github.com/dhanvi/atom-installer and also add the spec file! My next targets are Sublime and Oracle Java.


Filed under: fedora, FOSS

Maintainerati

Posted by Laura Abbott on February 20, 2017 07:00 PM

I spent last Wednesday hanging out in San Francisco for the first annual maintainerati event. The idea was that there are a lot of open source maintainers out there but events are usually separated by technology areas. Javascript framework maintainers may never meet programming language maintainers even if their problems are similar. The idea with this event was to give open source maintainers a chance to vent and problem solve with others.

The event was structured as an 'unconference'. I describe it as a slightly more structured hallway track. We started the morning doing 'dot voting' on topics people wanted to talk about and then broke into groups to discuss the topics that got the highest votes. I chose to go for the discussion about recruiting newcomers and maintainers. We started with some discussion about what counts as a contribution, and the pros and cons of structuring the contribution process and eventually getting committer rights. There's no hard and fast rule about when people can/should get commit rights, and it mostly comes down to relationships; you need to build relationships with existing maintainers, and existing maintainers need to build relationships and mentor new committers. This led to quite a bit of discussion about free vs. paid and company vs. non-company contributors. It's a lot easier to build relationships if you can set up a meeting with a maintainer in your company, but that doesn't work for outside contributors. There's also the question of trying to recruit volunteers for your sponsored project. Doing work 'for exposure' is problematic and exploitative, yet open source has this idea of doing work for the inherent joy of open source and learning. Promoting unpaid contributions needs to be done very carefully if it is done at all. We ended up running out of time and I think the discussion could have certainly gone longer.

There was a second session in the afternoon about problematic communities. This one is unfortunately near and dear to my heart. We started out defining what makes a community toxic. A big point was that bad behavior prevents the community from making progress. Many of the discussion points were not just about open source but about other communities that tend to have overlap. Codes of conduct are a necessity to make dealing with toxic behavior possible. There was some discussion about how specific these guidelines should be, and interestingly it was pointed out that having slightly less specific guidelines (but not too much) may help to avoid people trying to purposely hang out at the edge of acceptable behavior. If your larger community is problematic, it can be helpful to work on making a smaller subset welcoming and let that influence the larger group. I appreciated everyone who took the time to contribute to the discussion.

Outside structured conversations, I spent time talking about empathy. Several attendees either were or had been in first line customer support positions. To succeed in this type of work, you need to have (or quickly build) empathy skills to keep customers satisfied. Developers are not well known for having large amounts of empathy skills. I'm guilty of this myself; empathy without emotionally draining myself is something I'm constantly working on. Figuring out how to teach empathy skills to others is a challenge. One of the ideas that came up was the need to be outside your comfort bubble. Travel and moving were a common way people cited to force yourself to have new experiences. Traditional developer mind set also tends to be very black and white (hi guilty here too). Most important was the desire to keep improving this skill and not write it off as unnecessary.

There were plenty of other conversations I'm sure I've forgotten about. Notes are available on the github and will be added as people get around to it. I really hope to see this conference happen again. It's filling a space to have important conversations about non-technical topics that tend to get sidelined elsewhere. I met so many cool people and left with a lot to think about. My biggest thanks to the organizers.

Fedora macbook pro testers++

Posted by Kevin Fenzi on February 20, 2017 02:31 PM

In the final run-up to the Fedora 25 release, we slipped a week because there was a bug in installs on apple osx (now macos again) hardware. This was (and is) a use case the Workstation working group cares about, as they would love for folks with apple hardware to install Fedora and use it on that hardware. Sadly, we don’t have too many testers with this hardware to help our testing cycles, and many community members with this hardware also are using it day to day and cannot afford to reinstall and test at the drop of a hat.

I use my personal yoga 900 for my main machine, so my work laptop has always been for me a test machine or a backup in case of failure. There are a number of reasons for this: I prefer to pick and use laptops that aren’t standard corporate offerings, I like to know that I can do anything I choose with the laptop, and if (god forbid) I moved to a new job I wouldn’t have to give my primary laptop back.

So, when I was up for laptop refresh and I saw that a macbook pro 13″ model (12,1 apparently) was available, I decided to choose that and help out with Fedora testing efforts on this hardware. I do feel a bit bad about this as I am not a big fan of Apple and this does mean giving them money, but on the other hand, hopefully I can test Fedora and help avoid slips due to lack of hardware. Also, I can run rawhide on it and see if I can get everything working fully.

The macbook arrived the other day and yesterday I unpacked it and did some initial testing:

First, the hardware: Seems nice enough, but I am not sure why anyone would get one of these over a Dell XPS 13 or a yoga 900/910. It's got a lower resolution screen than those, 8GB of memory instead of 16, and only an i5 CPU instead of an i7. The feel is very solid, but that just makes it seem too heavy to me. Otherwise the screen is bright and nice, the keyboard backlight is nice (the yoga 900 only has off, dim, and bright for the keyboard backlight, but on this macbook you can set it to whatever you like), and the power connector is neat in that it has a light on it telling you if it's charging (orange/amber) or fully charged (green). The keyboard and trackpad seem fine.

I decided I would go through the normal macos setup first and then try and setup Fedora to dual boot, as I imagine that would be a common setup. Part of the setup instructions that came from my corporate overlords was mention of enabling File Vault full disk encryption. So, I did that and got everything installed and seemed to be working fine (at least as far as I could tell, not being a macos user normally).

Still following what I would think would be the more traveled path, I went to https://getfedora.org and downloaded the Fedora Media Writer. Download went fine and it was no trouble to run it, but it did give me a warning that this was something I had “DOWNLOADED” from the internet. I don’t guess we have much way around that warning as we are signing the binary fine and it’s not saying it’s unsigned, just that it was downloaded and are you sure you want to run it. Download and burn to usb went just fine, no problems at all with FMW. It might be nice if there is an option to download Rawhide images if you really wanted them.

Hold down the option key and power on and you get the boot selector thing. Choose FedoraMedia and there’s the Fedora Live USB. Everything booted up nicely and I poked around to see what was working or not working. Turns out so far the only thing that doesn’t appear to be working out of the box is the webcam. It seems to be a broadcom model that some folks are working on reverse engineering, but haven’t gotten that done enough to merge into the mainline kernel ( https://github.com/patjak/bcwc_pcie ). I might try that out sometime down the road. The power connector activity seems a bit odd as well: when you plug or unplug the power it seems to take a minute or two before it notices and starts updating.

Next I pulled up the Anaconda installer and looked to install Fedora alongside the existing OS. I needed some space to install in, so I selected the largest partition, which I knew was the main macos volume (but which oddly was showing up as Unknown), and told anaconda to shrink it. That seemed to work fine, and I installed Fedora in the newly freed space. The install finished fine and I rebooted.

On reboot, I now got grub and Fedora booted and worked fine. The macos entries however, errored and didn’t work at all. Turns out this is a long standing, known bug: https://bugzilla.redhat.com/show_bug.cgi?id=893179 I tried various things suggested in the bug to chainload, but with no luck. Luckily you can still hold down option and get the native bootloader.

So, I did that and tried to boot the macos install, but it would think for a while and then error with a picture that was a circle and crossbar and fail to boot. After poking around it seems I now hit: https://bugzilla.redhat.com/show_bug.cgi?id=1033778 ( https://fedoraproject.org/wiki/Common_F24_bugs#apple-core-storage-wipe ). So, my macos partition was unknown to anaconda, so it let me shrink it, but it messed it up completely and nuked all data that was on it. Had I not encrypted things it would have worked, but since I did it was gone.

Luckily apple has the advantage of controlling the hardware platform in this case, so I just needed to boot with option command r to get a network rescue. This let me wipe the drive, repartition it, reinstall macos, then boot the Fedora USB again and install Fedora in the space I left for it. So, it wasn't the end of the world, but it would sure have been annoying if I had data there.

After all that installing, I moved the Fedora install from F25 to rawhide. No issues there, everything seems to work and be fine after that.

So, if we could somehow fix the two bugs I ran into for f26 I think it would help macbook folks a fair bit. If anyone needs me to install or test anything on this platform, just let me know. I plan to keep the macbook ready to (re)install most anytime and hopefully can provide more test coverage for upcoming cycles.

Fedora Modularity Documentation

Posted by Adam Samalik on February 20, 2017 11:39 AM

Wiki pages are great for collaboration. But they are not that great at getting people's attention. They can also become pretty messy and hard to navigate through when using multiple pages that are related to each other – like documentation – which was what we had there. We needed something better. Something that would make it easy to go through multiple pages of documentation. Something that would have a simple landing page explaining what we do. And having a simple way to review the changes people make before publishing them would also be great.

I knew we wanted something better, but I didn’t know what exactly. I also didn’t want to invent yet another way to build docs. So I looked around, and found the Fedora Release Engineering documentation. It’s hosted in Pagure Docs, it’s built with Python Sphinx, and it also used to be a wiki. And I got inspired!

So I created some drafts, made a proposal, and convinced people from the Modularity Working Group that we need a cool website. And then I just built it. Using the same tech as the Fedora Release Engineering team – Python Sphinx to build, Pagure Docs to host.
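For anyone who wants to try the same approach, the generic Sphinx workflow looks roughly like this (a sketch of standard Sphinx usage, not the exact commands or layout of the Modularity repository):

pip install --user sphinx                    # install the documentation generator
sphinx-quickstart docs                       # one-time scaffolding of conf.py and index.rst
sphinx-build -b html docs docs/_build/html   # render the HTML that gets published (e.g. to Pagure Docs)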

But because I’m a wanna-be designer, I also created a logo, and wrote a custom Sphinx template including a simple landing page that helps people quickly understand what Modularity is about.

And I’m happy to announce that the new Fedora Modularity documentation website has been published today!

To edit the documentation, please send pull-requests against our source repository. And maybe get in touch if you like the project!

Automate Building your Own Atomic Host

Posted by Trishna Guha on February 20, 2017 08:10 AM

Project Atomic hosts are built from standard RPM packages which have been composed into filesystem trees using rpm-ostree. This post provides a method for automating the build of an Atomic host (creating new trees).

Requirements

Process

Clone the Build-Atomic-Host Git repo on your working machine:

$ git clone https://github.com/trishnaguha/build-atomic-host.git
$ cd build-atomic-host

Create VM from the QCOW2 Image

The following creates a VM from the QCOW2 image, where the username is atomic-user and the password is atomic. Here atomic-node is the instance name.

$ sudo sh create-vm.sh atomic-node /path/to/fedora-atomic25.qcow2
# For example: /var/lib/libvirt/images/Fedora-Atomic-25-20170131.0.x86_64.qcow2

Start HTTP Server

The tree is made available via a web server. The following playbook creates the directory structure, initializes the OSTree repository, and starts the HTTP server.

$ ansible-playbook httpserver.yml --ask-sudo-pass

Use ip addr to check the IP address of the HTTP server.
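With the default libvirt network, the host side usually sits on the virbr0 bridge; 192.168.122.1 is the common default, but verify it on your own machine:

$ ip -4 addr show virbr0   # host address on the default libvirt bridge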

Give OSTree a name and add HTTP Server IP Address

Replace the variables given in vars/atomic.yml with OSTree name and HTTP Server IP Address.

For Instance:

# Variables for Atomic host
atomicname: my-atomic
httpserver: 192.168.122.1

Here my-atomic is the OSTree name and 192.168.122.1 is the HTTP server IP address.

Run Main Playbook

The following playbook installs the requirements, starts the HTTP server, composes the OSTree, performs the SSH setup, and rebases onto the created tree.

$ ansible-playbook main.yml --ask-sudo-pass

Check IP Address of the Atomic instance

The following command returns the IP address of the running Atomic instance:

$ sudo virsh domifaddr atomic-node

Reboot

Now SSH to the Atomic host and reboot it so that it boots into the created OSTree:

$ ssh atomic-user@<atomic-hostIP>
$ sudo systemctl reboot

Verify: SSH to the Atomic Host

Wait for about 10 minutes; you may want to go for a coffee now.

$ ssh atomic-user@192.168.122.221
[atomic-user@atomic-node ~]$ sudo rpm-ostree status
State: idle
Deployments:
● my-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.1 (2017-02-07 05:34:46)
        Commit: 15b70198b8ec7fd54271f9672578544ff03d1f61df8d7f0fa262ff7519438eb6
        OSName: fedora-atomic

  fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.51 (2017-01-30 20:09:59)
        Commit: f294635a1dc62d9ae52151a5fa897085cac8eaa601c52e9a4bc376e9ecee11dd
        OSName: fedora-atomic

Now you have the Updated Tree.

Shout-out to the following folks:

My future post will cover customizing packages (including additions and deletions) for the OSTree.


How to install WordPress on Fedora

Posted by Fedora Magazine on February 20, 2017 08:00 AM

WordPress started as a simple blogging system, but has evolved into a reputable content management system. It’s also one of the most popular open source projects. Furthermore, it’s easy to set up WordPress on your Fedora system.

Install the packages

Fedora provides a set of pre-packaged software to make installation easy. Open a terminal, and at the command prompt, use sudo to install the following packages.

sudo dnf install @"Web Server" wordpress php-mysqlnd mariadb-server

This example assumes you’ll run the web and database servers on the same machine. This is often the case for students and developers alike.

Enable the web and database services to start at boot time, then start them immediately:

sudo systemctl enable httpd.service mariadb.service
sudo systemctl start httpd.service mariadb.service

Set up the database server

If this is your first use of MariaDB, you should create a password for your root user. Store it somewhere secure and safe, in case you forget it. Don’t use the system’s own root (administrator) password.

sudo mysqladmin -u root password

Next, create a database. You can host more than one WordPress site on a machine. Therefore, you may want to choose a distinctive name for yours. For instance, this example uses mywpsite. The -p switch prompts you for a password. You’ll need that, since you’ve added a password for root.

sudo mysqladmin create mywpsite -u root -p

Next, set up a special privileged user and password for the database. The web app uses these credentials to run. Use the standard mysql client program for this step. The -D mysql option attaches to the built-in mysql database where privileges are stored.

Your input is shown in boldface in the example below. Make sure to use a strong password and not password itself.

$ sudo mysql -D mysql -u root -p
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 10.1.18-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [mysql]> GRANT ALL PRIVILEGES ON mywpsite.* TO 'sqluser'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)

MariaDB [mysql]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)

MariaDB [mysql]> QUIT;
Bye
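Optionally, before moving on, confirm that the new credentials work. This assumes the example names used above (database mywpsite, user sqluser); on a fresh database the command simply prints an empty table list:

mysql -u sqluser -p mywpsite -e 'SHOW TABLES;'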

Set up the web server

Next, tune the SELinux parameters so the web server can perform necessary functions.

sudo setsebool -P httpd_can_network_connect_db=1
sudo setsebool -P httpd_can_sendmail=1

Next, edit the configuration file for the web server to allow connections. The file to edit is /etc/httpd/conf.d/wordpress.conf. Change the following line:

Require local

Instead, edit it as follows:

Require all granted

Next, configure your firewall so it allows traffic on port 80 (HTTP):

sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload

 

Configure WordPress

Next, edit the /etc/wordpress/wp-config.php file. Provide the database settings needed so WordPress can use the database you provided. Here are the lines to change. Search for each and edit the required setting:

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'database_name_here');

/** MySQL database username */
define('DB_USER', 'username_here');

/** MySQL database password */
define('DB_PASSWORD', 'password_here');

/** MySQL hostname */
define('DB_HOST', 'localhost');

The DB_HOST setting should stay localhost if you’re serving the database on the same system as the web server.

Finally, restart the web server:

sudo systemctl restart httpd

Visit the WordPress site

Next, you’re ready to configure the web app itself. Open a web browser on the system, or a connected system. Then browse to the IP address of your WordPress host, followed by /wordpress. For instance, your URL might be http://192.168.122.210/wordpress. If you’re on the same box, you can use http://localhost/wordpress. This step begins the setup process:

Fill out the information required. Remember to use a strong password for this account, since it has administrator access to the entire WordPress blog. Once done, select Install WordPress at the bottom.

A login screen appears so you can verify the WordPress username and password you just entered. Login, and the following screen appears:

You’re now ready to create content. There are thousands of themes and plugins available to customize your site. For more information on how to proceed, visit the WordPress website.

Wildcard certificates in FreeIPA

Posted by Fraser Tweedale on February 20, 2017 04:55 AM

The FreeIPA team sometimes gets asked about wildcard certificate support. A wildcard certificate is an X.509 certificate where the DNS-ID has a wildcard in it (typically as the most specific domain component, e.g. *.cloudapps.example.com). Most TLS libraries match wildcard domains in the obvious way.

In this blog post we will discuss the state of wildcard certificates in FreeIPA, but before proceeding it is fitting to point out that wildcard certificates are deprecated, and for good reason. While the compromise of any TLS private key is a serious matter, the attacker can only impersonate the entities whose names appear on the certificate (typically one or a handful of DNS addresses). But a wildcard certificate can impersonate any host whose name happens to match the wildcard value.

In time, validation of wildcard domains will be disabled by default and (hopefully) eventually removed from TLS libraries. The emergence of protocols like ACME that allow automated domain validation and certificate issuance mean that there is no real need for wildcard certificates anymore, but a lot of programs are yet to implement ACME or similar; therefore there is still a perceived need for wildcard certificates. In my opinion some of this boils down to lack of awareness of novel solutions like ACME, but there can also be a lack of willingness to spend the time and money to implement them, or a desire to avoid changing deployed systems, or taking a "wait and see" approach when it comes to new, security-related protocols or technologies. So for the time being, some organisations have good reasons to want wildcard certificates.

FreeIPA currently has no special support for wildcard certificates, but with support for custom certificate profiles, we can create and use a profile for issuing wildcard certificates.

Creating a wildcard certificate profile in FreeIPA

First, kinit admin and export an existing service certificate profile configuration to a file:

ftweedal% ipa certprofile-show caIPAserviceCert --out wildcard.cfg
---------------------------------------------------
Profile configuration stored in file 'wildcard.cfg'
---------------------------------------------------
  Profile ID: caIPAserviceCert
  Profile description: Standard profile for network services
  Store issued certificates: TRUE

Modify the profile; the minimal diff is:

--- wildcard.cfg.bak
+++ wildcard.cfg
@@ -19 +19 @@
-policyset.serverCertSet.1.default.params.name=CN=$request.req_subject_name.cn$, o=EXAMPLE.COM
+policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM
@@ -108 +108 @@
-profileId=caIPAserviceCert
+profileId=wildcard

Now import the modified configuration as a new profile called wildcard:

ftweedal% ipa certprofile-import wildcard \
    --file wildcard.cfg \
    --desc 'Wildcard certificates' \
    --store 1
---------------------------
Imported profile "wildcard"
---------------------------
  Profile ID: wildcard
  Profile description: Wildcard certificates
  Store issued certificates: TRUE

Next, set up a CA ACL to allow the wildcard profile to be used with the cloudapps.example.com host:

ftweedal% ipa caacl-add wildcard-hosts
-----------------------------
Added CA ACL "wildcard-hosts"
-----------------------------
  ACL name: wildcard-hosts
  Enabled: TRUE

ftweedal% ipa caacl-add-ca wildcard-hosts --cas ipa
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
-------------------------
Number of members added 1
-------------------------

ftweedal% ipa caacl-add-profile wildcard-hosts --certprofiles wildcard
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
  Profiles: wildcard
-------------------------
Number of members added 1
-------------------------

ftweedal% ipa caacl-add-host wildcard-hosts --hosts cloudapps.example.com
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
  Profiles: wildcard
  Hosts: cloudapps.example.com
-------------------------
Number of members added 1
-------------------------

Then create a CSR with subject CN=cloudapps.example.com; the post omits the details of generating it, but one possible approach is sketched below.
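The following OpenSSL invocation is only an illustration (not necessarily how the request was generated originally); the private key file name is arbitrary, and my.csr matches the file used in the next command:

openssl req -new -newkey rsa:2048 -nodes \
    -subj '/CN=cloudapps.example.com' \
    -keyout cloudapps.key -out my.csr

With the CSR in hand, request the certificate: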

ftweedal% ipa cert-request my.csr \
    --principal host/cloudapps.example.com \
    --profile wildcard
  Issuing CA: ipa
  Certificate: MIIEJzCCAw+gAwIBAgIBCzANBgkqhkiG9w0BAQsFADBBMR8...
  Subject: CN=*.cloudapps.example.com,O=EXAMPLE.COM
  Issuer: CN=Certificate Authority,O=EXAMPLE.COM
  Not Before: Mon Feb 20 04:21:41 2017 UTC
  Not After: Thu Feb 21 04:21:41 2019 UTC
  Serial number: 11
  Serial number (hex): 0xB

Discussion

Observe that the subject common name (CN) in the CSR does not contain the wildcard. FreeIPA requires naming information in the CSR to perfectly match the subject principal. As mentioned in the introduction, FreeIPA has no specific support for wildcard certificates, so if a wildcard were included in the CSR, it would not match the subject principal and the request would be rejected.

When constructing the certificate, Dogtag performs a variable substitution into a subject name string. That string contains the literal wildcard and the period to its right, and the common name (CN) from the CSR gets substituted in after that. The relevant line in the profile configuration is:

policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM

When it comes to wildcards in Subject Alternative Name DNS-IDs, it might be possible to configure a Dogtag profile to add this in a similar way to the above, but I do not recommend it, nor am I motivated to work out a reliable way to do this, given that wildcard certificates are deprecated. (By the time TLS libraries eventually remove support for treating the subject CN as a DNS-ID, I will have little sympathy for organisations that still haven't moved away from wildcard certs.)

In conclusion: you shouldn't use wildcard certificates, and FreeIPA has no special support for them, but if you really want to, you can do it with a custom certificate profile.

Fedora 25: running Geekbench.

Posted by mythcat on February 19, 2017 12:24 PM
You can test your CPU with this software and see the report online.
The official website says this about the tool:
Geekbench 4 measures your system's power and tells you whether your computer is ready to roar. How strong is your mobile device or desktop computer? How will it perform when push comes to crunch? These are the questions that Geekbench can answer.
You can use it for free or buy a license, and you can get it from here.
Let's see how it works and what is tested:
[mythcat@localhost Geekbench-4.0.4-Linux]$ ls
geekbench4 geekbench.plar geekbench_x86_32 geekbench_x86_64
[mythcat@localhost Geekbench-4.0.4-Linux]$ ./geekbench4
[0219/140337:INFO:src/base/archive_file.cpp(43)] Found archive at
/home/mythcat/build.pulse/dist/Geekbench-4.0.4-Linux/geekbench.plar
Geekbench 4.0.4 Tryout : http://www.geekbench.com/

Geekbench 4 is in tryout mode.

Geekbench 4 requires an active Internet connection when in tryout mode, and
automatically uploads test results to the Geekbench Browser. Other features
are unavailable in tryout mode.

Buy a Geekbench 4 license to enable offline use and remove the limitations of
tryout mode.

If you would like to purchase Geekbench you can do so online:

https://store.primatelabs.com/v4

If you have already purchased Geekbench, enter your email address and license
key from your email receipt with the following command line:

./geekbench4 -r <email address> <license key>

Running Gathering system information
System Information
Operating System Linux 4.9.9-200.fc25.x86_64 x86_64
Model Gigabyte Technology Co., Ltd. B85-HD3
Motherboard Gigabyte Technology Co., Ltd. B85-HD3
Processor Intel Core i5-4460 @ 3.40 GHz
1 Processor, 4 Cores, 4 Threads
Processor ID GenuineIntel Family 6 Model 60 Stepping 3
L1 Instruction Cache 32.0 KB x 2
L1 Data Cache 32.0 KB x 2
L2 Cache 256 KB x 2
L3 Cache 6.00 MB
Memory 7.26 GB
BIOS American Megatrends Inc. F2
Compiler Clang 3.8.0 (tags/RELEASE_380/final)

Single-Core
Running AES
Running LZMA
Running JPEG
Running Canny
Running Lua
Running Dijkstra
Running SQLite
Running HTML5 Parse
Running HTML5 DOM
Running Histogram Equalization
Running PDF Rendering
Running LLVM
Running Camera
Running SGEMM
Running SFFT
Running N-Body Physics
Running Ray Tracing
Running Rigid Body Physics
Running HDR
Running Gaussian Blur
Running Speech Recognition
Running Face Detection
Running Memory Copy
Running Memory Latency
Running Memory Bandwidth

Multi-Core
Running AES
Running LZMA
Running JPEG
Running Canny
Running Lua
Running Dijkstra
Running SQLite
Running HTML5 Parse
Running HTML5 DOM
Running Histogram Equalization
Running PDF Rendering
Running LLVM
Running Camera
Running SGEMM
Running SFFT
Running N-Body Physics
Running Ray Tracing
Running Rigid Body Physics
Running HDR
Running Gaussian Blur
Running Speech Recognition
Running Face Detection
Running Memory Copy
Running Memory Latency
Running Memory Bandwidth


Uploading results to the Geekbench Browser. This could take a minute or two
depending on the speed of your internet connection.

Upload succeeded. Visit the following link and view your results online:

Integrate Dovecot IMAP with (Free)IPA using Kerberos SSO

Posted by Luc de Louw on February 19, 2017 10:59 AM

Dovecot can make use of Kerberos authentication, letting you enjoy single sign-on when checking email via IMAP. This post shows you how to enable this feature. With IPA it's rather simple to do so. First enroll your mail server in the IPA domain with ipa-client-install, as described in various previously posted articles. Creating a Kerberos Service Principal […]

The post Integrate Dovecot IMAP with (Free)IPA using Kerberos SSO appeared first on Luc de Louw's Blog.

Gitlab, Pelican and Let’s Encrypt for a secure blog

Posted by Fedora Magazine on February 19, 2017 02:58 AM

The Fedora Community is considering requiring HTTPS for blogs to be published on fedoraplanet.org. While it is currently possible to host an SSL blog on both GitHub and GitLab pages, only GitLab supports SSL for custom domains. This article is a tutorial on how to use Pelican and Let's Encrypt to produce a blog hosted on GitLab pages.

The first step is to create the directory structure to support the verification process used by Let's Encrypt. This process involves serving a page from a hidden directory. To create the directory:

mkdir -p .well-known/acme-challenge

At this point you need to install certbot so you can request a certificate from your computer.

sudo dnf install certbot

After the install is complete, issue the following command to generate a certificate for a remote site.

certbot certonly -a manual -d yoursite.com --config-dir ~/letsencrypt/config --work-dir ~/letsencrypt/work --logs-dir ~/letsencrypt/logs

Replace ‘yoursite.com’ with your chosen site. The results will be as follows; the challenge string used for the file name and contents will be different.

Make sure your web server displays the following content at
 http://yoursite.com/.well-known/acme-challenge/uF2HODXEnO98ZRBLhDwFR0yOpGkyg0UyP4QZHImDfd1 before continuing:

uF2HODXEnO98ZRBLhDwFR0yOpGkyg0UyP4QZHImJ8qY.imp4JScFS23eaYWG4tF5e9TSRfGwDuFMmkQTiqN73t8

If you don't have HTTP server configured, you can run the following
 command on the target server (as root):

mkdir -p /tmp/certbot/public_html/.well-known/acme-challenge
 cd /tmp/certbot/public_html
 printf "%s" uF2HODXEnO98ZRBLhDwFR0yOpGkyg0UyP4QZHImJ8qY.imp4JScFS23eaYWG4tF5e9TSRfGwDuFMmkQTiqN73t8 > .well-known/acme-challenge/uF2HODXEnO98ZRBLhDwFR0yOpGkyg0UyP4QZHImDfd1
 # run only once per server:
 $(command -v python2 || command -v python2.7 || command -v python2.6) -c \
 "import BaseHTTPServer, SimpleHTTPServer; \
 s = BaseHTTPServer.HTTPServer(('', 80), SimpleHTTPServer.SimpleHTTPRequestHandler); \
 s.serve_forever()"

Press ENTER to continue


IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
 /home/cprofitt/letsencrypt/config/live/hub.cprofitt.com/fullchain.pem.
 Your cert will expire on 2017-05-19. To obtain a new or tweaked
 version of this certificate in the future, simply run certbot
 again. To non-interactively renew *all* of your certificates, run
 "certbot renew"
 - If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
 Donating to EFF:                    https://eff.org/donate-le
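Because the certificate was requested with custom directories, renewal needs to point at the same paths. A hedged example, reusing the directories from the earlier command:

certbot renew --config-dir ~/letsencrypt/config --work-dir ~/letsencrypt/work --logs-dir ~/letsencrypt/logs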


Anaconda Install Banners get a Makeover!

Posted by Mary Shakshober on February 19, 2017 01:02 AM

A redesign/update for the Anaconda install banners has been an ongoing project for me since the summer and has recently, in the past month or so, had a fair amount of conversation on its Pagure ticket. I have done multiple series of iterations for these banners, and in the past couple of weeks have established a design that represents the Fedora vibe. There are three, sort of, sub-categories for the banners: Common Banners, Server-specific Banners, and Desktop-specific Banners. At this point I have completed drafts of the Common banners (available on all editions) and the Desktop-specific banners (available in addition to Common for Desktop editions).

If you’d like to follow the ticket and help give feedback on the incoming iterations, take a look at https://pagure.io/design/issue/438

Here’s a sneak peek of what’s to come for Anaconda!

<figure><figcaption>COMMON BANNER series</figcaption></figure>
<figure><figcaption>DESKTOP-SPECIFIC series</figcaption></figure>

Rawhide notes from the trail: 2017-02-18 edition

Posted by Kevin Fenzi on February 19, 2017 12:27 AM

Greetings everyone, let's dive right into the latest changes in the rawhide world:

  • The Fedora 26 mass rebuild ran and finished last weekend. 16,352 successful builds happened, along with around 1,000 that failed to build. Now we have a few weeks until we branch f26 off to fix things up.
  • The mass rebuild did disrupt signing of normal updates. Perhaps next mass rebuild we should look at standing up another set of signing servers to just sign the mass rebuild.
  • Composes for the last few days have failed. Turns out it's due to an unsigned package. But how could that happen? We passed all the builds through the regular signing process. Turns out when builds were tagged in, there were a few builds that overrode newer versions already in rawhide, so releng ran a custom script to retag newer builds back in. However, there was a package where the maintainer built a new version, decided for some reason it was unusable, and untagged it. That's fine, but the custom script mistakenly tagged this “newer” build in, and it was long enough ago that its signature was removed. Just a short note here about “newer”: koji has no concept of package versions. To koji, if you ask for all the ‘newest’ builds in a tag, it will give you the most recently tagged ones. This importantly has nothing at all to do with the package epoch-version-release; that's just not a level koji knows or cares about.
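If you want to see what koji itself considers the latest build in a tag (purely by tag time, as described above), you can ask it directly; the package name here is only a placeholder:

koji list-tagged f26 some-package --latest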

Finally checking out FlatPak

Posted by Ankur Sinha "FranciscoD" on February 18, 2017 07:14 PM

I've been reading about FlatPak for a while now in various places (Planet Fedora) but I hadn't given it a try yet. I saw Jiri's post on the planet earlier today and finally decided to install the Firefox Nightlies using FlatPak. Of course, it works really well. I've gone ahead and installed the Telegram nightly from the FlatPak website too.

The instructions are all there in the documentation here. It's really quite simple. On Fedora, first, you must have flatpak installed:

sudo dnf install flatpak

Then, you go to the FlatPak website and click on an app that you want to install. This opens up the Gnome Software centre that installs the application for you. The application then shows up in the list in the activities menu on Gnome. For Firefox, you can follow the instructions here. For example, I now have the Firefox nightly installed:

Screenshot showing Firefox nightly FlatPak application
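Flatpak also has a command-line side; for example, you can check what ended up installed (generic usage, not specific to the nightly remotes mentioned above):

flatpak list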

I now intend to make some time to learn more about FlatPak - I've read bits and pieces here and there about some of the great features it brings - sandboxing and so on - and it looks quite cool!

Project Idea: PI Sw1tch

Posted by Mo Morsi on February 18, 2017 05:02 PM

While gaming is not high on my agenda anymore (... or rather at all), I have recently been mulling buying a new console, to act as much as a home entertainment center as a gaming system.

Having owned several generations of PlayStation and Sega products, a few new consoles caught my eye. While the most "open" solution, the Steambox, sort of fizzled out, Nintendo's latest console, the Switch, does seem to stand out from the crowd. The balance between power and portability looks like a good fit, and given Nintendo's previous successes, it wouldn't be surprising if it became a hit.

In addition to the separate home and mobile gaming markets, new entertainment mechanisms need to provide seamless integration between the two environments, as well as offer comprehensive data and information access capabilities. After all, what'd be the point of a gaming tablet if you couldn't watch YouTube on it! Neal Stephenson recently touched on this at his latest TechCrunch talk, expressing a vision of technology that is more integrated/synergized with our immediate environment. While mobile solutions these days offer a lot in terms of processing power, nothing quite offers the comfort or immersion that a console / home entertainment solution provides (not to mention mobile phones being horrendous interfaces for gaming purposes!)

Being the geek that I am, this naturally led me to thinking about developing a hybrid mechanism of my own, based on open / existing solutions, so that it could be prototyped and demonstrated quickly. Having recently bought a Raspberry Pi (after putting my Arduino to use in my last microcontroller project), and a few other odds and ends, I whipped up the following:

The idea is simple: the Raspberry Pi would act as the 'console', with a plethora of games and 'apps' available (via open repositories, Steam, emulators, and many more... not to mention Nethack!). It would be anchorable to the wall, desk, or any other surface by using a 3D-printed mount, and made portable via a cheap wireless controller / LCD display / battery pack setup (tied together through another custom 3D-printed bracket). The entire rig would be quick to assemble and easy to use: simply snap the Pi into the wall mount to play on your TV; remove it and snap it into the controller bracket to take it on the go.

I suspect the power component is going to be the most difficult to nail down, finding an affordable USB power source that is lightweight but offers sufficient juice to drive the Raspberry PI w/ LCD might be tricky. But if this is done correctly, all components will be interchangeable, and one can easily plug in a lower-power microcontroller and/or custom hardware component for a tailored experience.

If there is any interest, let me know via email. If 3 or so people commit, this could be done in a weekend! (stay tuned for updates!)

read more

modulemd 1.1.0

Posted by Petr Šabata on February 18, 2017 10:14 AM

This is a little belated announcement but let it be known that I released a new version of the module metadata library, modulemd-1.1.0, earlier this week!

This version changes the default behavior of the xmd block a little; it now defaults to being an empty dictionary rather than null. We’re also a lot smarter when it comes to loading the build and runtime dependencies blocks, reading the whole structures rather than assuming they are correct. Last but not least, it now also installs its test suite properly under modulemd.tests. That was a dumb bug. Sorry about that.

All systems go

Posted by Fedora Infrastructure Status on February 18, 2017 09:42 AM
Service 'Fedora pastebin service' now has status: good: Everything seems to be working.

#LinuxPlaya Preparation

Posted by Julita Inca Chiroque on February 18, 2017 04:05 AM

As #LinuxPlaya draws near, we've been preparing things for the event. We first did a workshop to help others finish the GTK+Python tutorial for developers, while some other students from different universities in Lima wrote posts to prove that they use Linux (FEDORA+GNOME). You can see in the following list the various areas where they have worked: design, robotics, and education, using technologies such as Docker and a Snake GTK game.

linuxplaya

Thanks to GNOME and FEDORA we are going to the Santa Maria beach on March 4th, and we are going to do more than social networking. In the morning we are going to present our projects, and then we are going to encourage everyone to apply to the GSoC program. Lunchtime and afternoon games are also planned for this occasion. This is the summer merchandise we are going to offer to our guests.

It’s a pleasure to have Damian Nohales from GNOME Argentina as our international guest


Most of the participants are also leaders at their universities, and they are going to replicate these meetings in their own communities. This is the case of Leyla Marcelo, who is an entrepreneurial leader at her university, UPN, and our designer for the last Linux events I organised in Lima, Peru. Special thanks to Softbutterfly for the Internet support that day!

leyla


Filed under: FEDORA, GNOME Tagged: #LinuxPlaya, evento Linux, fedora, GNOME, Julita, Julita Inca, Lima, linux, Linux Playa, Perú, Softbuttterfly

New releases in XFCE

Posted by Robert Antoni Buj Gelonch on February 17, 2017 08:52 PM

generator: stats.sh

Date Package Version
2017-02-16 xfce4-weather-plugin 0.8.9
2017-02-13 xfce4-notifyd 0.3.5
2017-02-13 Thunar 1.6.11
2017-02-12 xfce4-taskmanager 1.2.0
2017-02-10 xfce4-systemload-plugin 1.2.1
2017-02-10 xfce4-netload-plugin 1.3.1
2017-02-06 xfce4-terminal 0.8.4
2017-02-03 xfce4-whiskermenu-plugin 1.7.0-src
2017-02-01 ristretto 0.8.2
2017-01-28 xfce4-mount-plugin 1.1.0
2016-11-28 xfce4-clipman-plugin 1.4.1
2016-11-12 xfce4-time-out-plugin 1.0.2
2016-11-11 xfce4-verve-plugin 1.1.0
2016-11-05 xfce4-wavelan-plugin 0.6.0
2016-11-05 xfce4-smartbookmark-plugin 0.5.0
2016-11-05 xfce4-mpc-plugin 0.5.0
2016-11-05 xfce4-fsguard-plugin 1.1.0
2016-11-05 xfce4-diskperf-plugin 2.6.0
2016-11-05 xfce4-datetime-plugin 0.7.0
2016-11-05 xfce4-battery-plugin 1.1.0
2016-10-25 xfce4-panel 4.12.1
2016-09-15 xfce4-settings 4.12.1
2016-09-08 xfdashboard 0.7.0
2016-07-20 xfce4-hardware-monitor-plugin 1.5.0
2016-07-07 thunar-vcs-plugin 0.1.5
2016-04-26 xfce4-eyes-plugin 4.4.5
2016-04-26 xfce4-dict 0.7.2
2016-04-26 xfce4-cpufreq-plugin 1.1.3
2016-03-19 xfce4-power-manager 1.6.0
2015-10-16 parole 0.8.1
2015-09-15 exo 0.10.7
2015-07-24 xfce4-embed-plugin 1.6.0
2015-07-20 xfdesktop 4.12.3
2015-06-25 xfce4-notes-plugin 1.8.1
2015-05-17 xfburn 0.5.4
2015-05-16 xfwm4 4.12.3
2015-04-10 orage 4.12.1
2015-04-05 garcon 0.5.0
2015-03-29 xfce4-xkb-plugin 0.7.1
2015-03-29 xfce4-timer-plugin 1.6.0
2015-03-17 thunar-volman 0.8.1
2015-03-16 xfce4-session 4.12.1
2015-03-15 libxfce4ui 4.12.1
2015-03-09 xfce4-places-plugin 1.7.0
2015-03-01 xfce4-appfinder 4.12.0
2015-03-01 mousepad 0.4.0
2015-02-28 tumbler 0.1.31
2015-02-28 libxfce4util 4.12.1
2015-01-25 xfce4-screenshooter 1.8.2
2014-01-09 gigolo 0.4.2
2013-10-25 xfce4-mailwatch-plugin 1.2.0
2013-05-11 thunar-media-tags-plugin 0.2.1
2013-05-11 thunar-archive-plugin 0.3.1
2012-10-11 xfce4-mixer 4.10.0
2012-07-10 xfce4-cpugraph-plugin 1.0.5
2012-05-12 xfce4-genmon-plugin 3.4.0
2011-10-23 xfmpc 0.2.2

Filed under: Fedora

Wrapping your head around SSH tunnels

Posted by Sachin Kamath on February 17, 2017 02:13 PM

This post is for educational purposes only. VPNs might be illegal in some countries. If you are not sure of the consequences of tunnelling over a network/using a VPN, please do not attempt to do so. You have been warned.

This is my first post in the Tunnelling and OpenVPN series. More coming up soon :)

It's been really long since I blogged, so here goes a pretty long-ish, detailed post about SSH tunnels. I have been playing around with VPNs for quite some time now and have learned a lot about networking, tunnelling, and other awesome things about creating stable networks. OpenVPN is a free and open source application that implements the features of a Virtual Private Network (VPN) to create a point-to-point secure connection. You can check out the features of OpenVPN here. The possibilities are endless with OpenVPN. Using it, you can build everything ranging from a simple proxy server to a completely anonymous and secure private network of people.

I started digging into the features of OpenVPN when my university started tightening the campus network by only allowing traffic through port 80 and 443. (Yes! 22 was blocked). Initially, I thought it was the end of git over SSH until I found out I could SSH over the HTTPS port on Github. Take a look at the article here.

Before we get carried away, let's get back to VPN talk. One of the solutions to "port blocks" is SSH tunnelling.

"If we see light at the end of the tunnel, it is the light of the oncoming train" ~ Robert Lowell.

SSH tunnelling, also known as "Poor Man's VPN", is a very powerful feature of SSH which creates a secure connection between a local computer and a remote machine through which services can be relayed.

Let us try to understand SSH tunnelling first. Creating an SSH tunnel is simple. Let us assume Mr. FooMan has a cloud server in Singapore with the SSH daemon running on port 22 (the default port), and he wants to redirect all his traffic via the tunnel rather than sending it directly. Now, all he will do is ssh into his box using the -D directive:

ssh -D 27015 fooman@hissingaporeserver.com -p 22

Quoting the man page of SSH:

-D [bind_address:]port

Specifies a local “dynamic” application-level port forwarding. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. Currently the SOCKS4 and SOCKS5 protocols are supported, and ssh will act as a SOCKS server. Only root can forward privileged ports. Dynamic port forwardings can also be specified in the configuration file.

As always, if you need to use a port below 1024 (a privileged port), you can, but you will have to be root. To verify that the tunnel is up, go ahead and run netstat -tlpn on the local machine. If everything goes well, you should see something like this:


Fig 1 : Port 27015 being used by the SSH process

This means that the SSH process is now listening on port 27015 for any connections. You can now use this port for redirecting all your browser traffic or set it as a SOCKS proxy on any application that supports proxified traffic.
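A quick way to check the tunnel before touching any system settings is to push a single request through it with curl (assuming curl is installed; ifconfig.me is just one of many what-is-my-IP services):

curl --socks5-hostname localhost:27015 https://ifconfig.me   # should report the Singapore server's address
curl https://ifconfig.me                                      # your normal public address, for comparison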

Let us set a system-wide proxy on Linux. For this, fire up Network Settings, select Proxy and choose the method as Manual. Now set the SOCKS proxy to be localhost and the port as 27015 (or the port that followed your -D directive).


Once you are done, check your IP address. Voila! You have successfully proxified your entire system. Make sure you disable the proxy when you are done using it, or you won't be able to access the internet.

You can also configure just your Web Browser to use the proxy. I use FoxyProxy to achieve this. The configuration is pretty much the same except it acts as a plugin for your browser.

There are a lot of limitations in this case. SSH tunnelling will only work if your university/office allows outgoing traffic on 22 (most probably blocked in most universities). If that is the case, you will have to take extra steps to work around the block.

I will be covering about OpenVPN in my upcoming posts. So, stay tuned! If you've anything in your mind and want to share, do drop a comment below :)

News: OpenSSL Security Advisory [16 Feb 2017]

Posted by mythcat on February 17, 2017 01:02 PM
According to this website:  www.openssl.org/news


OpenSSL Security Advisory [16 Feb 2017]
========================================

Encrypt-Then-Mac renegotiation crash (CVE-2017-3733)
====================================================

Severity: High

During a renegotiation handshake if the Encrypt-Then-Mac extension is
negotiated where it was not in the original handshake (or vice-versa) then this
can cause OpenSSL to crash (dependent on ciphersuite). Both clients and servers
are affected.

OpenSSL 1.1.0 users should upgrade to 1.1.0e

This issue does not affect OpenSSL version 1.0.2.

This issue was reported to OpenSSL on 31st January 2017 by Joe Orton (Red Hat).
The fix was developed by Matt Caswell of the OpenSSL development team.

Note
====

Support for version 1.0.1 ended on 31st December 2016. Support for versions
0.9.8 and 1.0.0 ended on 31st December 2015. Those versions are no longer
receiving security updates.

References
==========

URL for this Security Advisory:
https://www.openssl.org/news/secadv/20170216.txt

Note: the online version of the advisory may be updated with additional details
over time.

For details of OpenSSL severity classifications please see:
https://www.openssl.org/policies/secpolicy.html

2016 – My Year in Review

Posted by Justin W. Flory on February 17, 2017 08:30 AM

Before looking too far ahead to the future, it’s important to spend time reflecting over the past year’s events, identifying successes and failures, and devising ways to improve. Finding the right words to describe my 2016 is a challenge. This post continues a habit I started last year with my 2015 Year in Review. One thing I discover nearly every day is that I’m always learning new things from various people and circumstances. Even though 2017 is already getting started, I want to reflect back on some of these experiences and opportunities of the past year.

Preface

When I started writing this in January, I read freenode‘s “Happy New Year!” announcement. Even though their recollection of the year began as a negative reflection, the freenode team did not fail to find some of the positives of this year as well. The attitude reflected in their blog post is reflective of the attitude of many others today. 2016 has brought more than its share of sadness, fear, and a bleak unknown, but the colors of radiance, happiness, and hope have not faded either. Even though some of us celebrated the end of 2016 and its tragedies, two thoughts stay in my mind.

One, it is fundamentally important for all of us to stay vigilant and aware of what is happening in the world around us. The changing political atmosphere of the world has brought a shroud of unknowing, and the changing of a number does not and will not signify the end of these doubts and fears. 2017 brings its own series of unexpected events. I don’t consider this a negative, but in order for it not to become a negative, we must constantly remain active and aware.

Secondly, despite the more bleak moments of this year, there has never been a more important time to embrace the positives of the past year. For every hardship faced, there is an equal and opposite reaction. Love is all around us and sometimes where we least expect it. Spend extra time this new year remembering the things that brought you happiness in the past year. Hold them close, but share that light of happiness with others too. You might not know how much it’s needed.

First year of university: complete!

Many things changed since I decided to pack up my life and go to a school a thousand miles away from my hometown. In May, I officially finished my first year at the Rochester Institute of Technology, finishing the full year on dean’s list. Even though it was only a single year, the changes from my decision to make the move are incomparable. Rochester exposed me to amazing, brilliant people. I’m connected to organizations and groups based on my interests like I never imagined. My courses are challenging, but interesting. If there is anything I am appreciative of in 2016, it is for the opportunities that have presented themselves to me in Rochester.

Adventures into FOSS@MAGIC


On 2016 Dec. 10th, the “FOSS Family” went to dinner at a local restaurant to celebrate the semester

My involvement with the Free and Open Source Software (FOSS) community at RIT has grown exponentially since I began participating in 2015. I took my first course in the FOSS minor, Humanitarian Free and Open Source Software Development, in spring 2016. In the following fall 2016 semester, I became the teaching assistant for the course. I helped show our community’s projects at Imagine RIT. I helped carry the RIT FOSS flag in California (more on that later). The FOSS@MAGIC initiative was an influencing factor in my decision to attend RIT and continues to have an impact on my life as a student.

I eagerly look forward to future opportunities for the FOSS projects and initiatives at RIT to grow and expand. Bringing open source into more students’ hands excites me!

I <3 WiC

With a new schedule, the fall 2016 semester marked the beginning of my active involvement with the Women in Computing (WiC) program at RIT, as part of the Allies committee. Together with other members of the RIT community, we work to find issues in our community, discuss them and share experiences, and find ways to grow the WiC mission: to promote the success and advancement of women in their academic and professional careers.

WiCHacks 2016 Opening Ceremony

In spring 2016, I participated as a volunteer for WiCHacks, the annual all-female hackathon hosted at RIT. My first experience with WiCHacks left me impressed by all the hard work by the organizers and the entire atmosphere and environment of the event. After participating as a volunteer, I knew I wanted to become more involved with the organization. Fortunately, fall 2016 enabled me to become more active and engaged with the community. Even though I will be unable to attend WiCHacks 2017, I hope to help support the event in any way I can.

Also, hey! If you’re a female high school or university student in the Rochester area (or willing to do some travel), you should seriously check this out!

Google Summer of Code

Google Summer of Code, abbreviated to GSoC, is an annual program run by Google. Google works with open source projects to offer stipends so they can pay students to work on projects over the summer. After a last-minute decision to apply, I was accepted as a contributing student to the Fedora Project. My proposal was to work within the Fedora Infrastructure team to help automate the WordPress platforms with Ansible. My mentor, Patrick Uiterwijk, provided much of the motivation for the proposal and worked with me throughout the summer as I learned Ansible for the first time. Over the course of the summer, that knowledge began to turn into practical experience.

It would be unfair for a reflection to count successes but not failures. GSoC was one of the most challenging and stressful activities I’ve ever participated in. It was a complete learning experience for me. One area I noted that I needed to improve on was communication. My failing point was not regularly communicating what I was working through or stuck on with my mentor and the rest of the Fedora GSoC community. GSoC taught me the value of asking questions often when you’re stuck, especially in an online contribution format.

On the positive side, GSoC helped formally introduce me to Ansible, and to a lesser extent, the value of automation in operations work. My work in GSoC helped enable me to become a sponsored sysadmin of Fedora, where I mostly focus my time contributing to the Badges site. Additionally, my experience in GSoC helped me when interviewing for summer internships (also more on this later).

Google Summer of Code came with many ups and downs. But I made it and passed the program. I’m happy and fortunate to have received this opportunity from the Fedora Project and Google. I learned several valuable lessons that have already shaped me and will continue to do so as I move forward in my career. I look forward to participating as a mentor or organizer for GSoC 2017 with the Fedora Project this year.

Flock 2016

Group photo of all Flock 2016 attendees outside of the conference venue (Photo courtesy of Joe Brockmeier)

Towards the end of summer, in the beginning of August, I was accepted as a speaker to the annual Fedora Project contributor conference, Flock. As a speaker, my travel and accommodation were sponsored to the event venue in Kraków, Poland.

Months after Flock, I am still incredibly grateful for receiving the opportunity to attend the conference. I am appreciative and thankful to Red Hat for helping cover my costs to attend, which is something I would never be able to do on my own. Outside of the real work and productivity that happened during the conference, I am happy to have mapped names to faces. I met incredible people from all corners of the world and have made new lifelong friends (who I was fortunate to see again in 2017)! Flock introduced me in-person to the diverse and brilliant community behind the Fedora Project. It is an experience that will stay with me forever.

To read a more in-depth analysis of my time in Poland, you can read my full write-up of Flock 2016.

On a bus to the Kraków city center with Bee Padalkar, Amita Sharma, Jona Azizaj, and Giannis Konstantinidis (left to right).

Maryland (Bitcamp), Massachusetts (HackMIT), California (MINECON)

The Fedora Ambassadors at Bitcamp 2016. Left to right: Chaoyi Zha (cydrobolt), Justin W. Flory (jflory7), Mike DePaulo (mikedep333), Corey Sheldon (linuxmodder)

2016 provided me the opportunity to explore various parts of my country. Throughout the year, I attended various conferences to represent the Fedora Project, the SpigotMC project, and the RIT open source community.

Three distinct events stand out in my memory. For the first time, I visited the University of Maryland for Bitcamp as a Fedora Ambassador, which also gave me the chance to see my nation’s capital for the first time. I visited Boston for the first time this year as well, for HackMIT, MIT’s annual hackathon, where I again participated as a Fedora Ambassador and met brilliant students from around the country (and even the world, with one student flying in from India for the weekend).

"Team Ubuntu" shows off their project to Charles Profitt before the project deadline for HackMIT 2016

“Team Ubuntu” shows off their project to Charles Profitt before the project deadline for HackMIT 2016

Lastly, I also took my first journey to the US west coast for MINECON 2016, the annual Minecraft convention. I attended as a staff member of the SpigotMC project and a representative of the open source community at RIT.

All three of these events have their own event reports to go with them. More info and plenty of pictures are in the full reports.

Vermont 2016 with Matt

Shortly after I arrived, Matt took me around to see the sights and find coffee.

Some trips happen without prior arrangements and planning. Sometimes, the best memories are made by not saying no. I remember the phone call with one of my closest friends, Matt Coutu, at some point in October. On a sudden whim, we planned my first visit to Vermont to visit him. Some of the things he told me to expect made me excited to explore Vermont! And then in the pre-dawn hours of November 4th, I made the trek out to Vermont to see him.

50 feet up into the air atop Spruce Mountain was colder than we expected.

The instant I crossed the state border, I knew this was one of the most beautiful states I had ever visited. During the weekend, the two of us did things that I think only the two of us would enjoy. We climbed a snowy mountain to reach an abandoned fire watchtower, where we endured a mini blizzard. We walked through a city without a specific destination in mind, going wherever the moment took us.

We visited a quiet dirt road that led to a meditation house and cavern maintained by monks, where we meditated and drank in the experience. I wouldn’t classify the trip as a high-energy or engaging trip, but for me, it was one of the most enjoyable trips I’ve embarked on yet. There are many things from that weekend that I still hold on to for remembering and reflecting back on.

A big shout-out to Matt for always supporting me with everything I do and always being there when we need each other.

Martin Bridge may not be one of your top places to visit in Vermont, but if you keep going, you’ll find a one-of-a-kind view.

Finally seeing NYC with Nolski

Mike Nolan and I venture through New York City early on a Sunday evening

Not long after the Vermont trip, I purchased tickets to see my favorite band, El Ten Eleven, in New York City on November 12th. What started as a one-day trip to see the band turned into an all-weekend trip to see the band, see New York City, and spend some time catching up with two of my favorite people, Mike Nolan (nolski) and Remy DeCausemaker (decause). During the weekend, I saw the World Trade Center memorial site for the first time, tried some amazing bagels, explored virtual reality in Samsung’s HQ, and got an exclusive inside look at the Giphy office.

This was my third time in New York City, but my first time to explore the city. Another shout-out goes to Mike for letting me crash on his couch and stealing his Sunday to walk through his metaphorical backyard. Hopefully it isn’t my last time to visit the city either!

Finalizing study abroad

This may be cheating since it was taken in 2017, but this is one of my favorite photos from Dubrovnik, Croatia so far. You can find more like this on my 500px gallery!

At the end of 2016, I finalized a plan that was more than a year in the making. I applied and was accepted to study abroad at the Rochester Institute of Technology campus in Dubrovnik, Croatia. RIT has a few satellite campuses across the world: two in Croatia (Zagreb and Dubrovnik) and one in Dubai, UAE. In addition to being accepted, the university provided me a grant to further my education abroad. I am fortunate to have received this opportunity and can’t wait to spend the next few months of my life in Croatia. I am studying in Dubrovnik from January until the end of May.

During my time here, I will be taking 12 credit hours of courses. I am taking ISTE-230 (Introduction to Database and Data Modeling), ENGL-361 (Technical Writing), ENVS-150 (Ecology of the Dalmatian Coast), and lastly, FOOD-161 (Wines of the World). The last one was a fun one that I took for myself to try broadening my experiences while abroad.

Additionally, one of my personal goals for 2017 is to practice my photography skills. During my time abroad, I have created a gallery on 500px where I upload my top photos from every week. I welcome feedback and opinions about my pictures, and if you have criticism for how I can improve, I’d love to hear about it!

Accepting my first co-op

The last big break that I had in 2016 was accepting my first co-op position. Starting in June, I will be a Production Engineering Intern at Jump Trading, LLC. I started interviewing with Jump Trading in October and even had an on-site interview that brought me to their headquarters in Chicago at the beginning of December. After meeting the people and getting a feel for the culture of the company, I was happy to accept a place on the team. I look forward to learning from some of the best in the industry and hope to contribute to some of the fascinating projects going on there.

From June until late August, I will be starting full-time at their Chicago office. If you are in the area or ever want to say hello, let me know and I’d be happy to grab coffee, once I figure out where all the best coffee shops in Chicago are!

In summary

2015 felt like a difficult year to follow, but 2016 exceeded my expectations. I acknowledge and I’m grateful for the opportunities this year presented to me. Most importantly, I am thankful for the people who have touched my life in a unique way. I met many new people and strengthened my friendships and bonds with many old faces too. All of the great things from the past year would not be possible without the influence, mentorship, guidance, friendship, and comradery these people have given me. My mission is to always pay it forward to others in any way that I can, so that others are able to experience the same opportunities (or better).

2017 is starting off hot and moving quickly, so I hope I can keep up! I can’t wait to see what this year brings and hope that I have the chance to meet more amazing people, and also meet many of my old friends again, wherever that may be.

Keep the FOSS flag high.

The post 2016 – My Year in Review appeared first on Justin W. Flory's Blog.

North America and Fedora: Year in Review

Posted by Fedora Community Blog on February 17, 2017 08:15 AM

The past year has proven to be both challenging and demanding for our Ambassadors. Over the past year, a lot of new ideas were proposed and more events were sought out in an attempt to expand our base. Many of these ventures have been hack-a-thons in several states, which was a relatively new venture for us in those areas. Once we became involved in these types of events, we quickly discovered that Fedora and its associated spins were a new tool for most of the individuals attending and participating. It was a surprise to the community that these young and impressionable individuals seemed to be using Windows more than any other available operating system. Since those first few events we (Fedora) attended, open source software use at these types of events has increased across the board, a total and undeniable success.

Looking back at North America events

Fedora Project Leader Matthew Miller and Ambassador Mike DePaulo at the Fedora table during LISA 16

Covering the larger events such as Linux Fest Northwest, SCaLE 14x, OSCON, Texas Linux Fest, Southeast Linux Fest, Ohio Linux Fest, SeaGL, and LISA 2016 was a large and diverse group of North America Ambassadors, each with their own specialties and wide range of knowledge, all hoping to show off the best open source software out today. As usual, the first event of the year was SCaLE 14x, a large event that showcases various operating systems and open source software, and we always have great attendance there. This past year we added another Ambassador to the group, Perry Rivera (lajuggler), who has brought new ideas and vision, alongside our regulars Brian Monroe (ParadoxGuitarist), Scott Williams (VWBUSGUY), and Alejandro, who bridges the gap with our Spanish-speaking customers.

Texas events

OSCON and Texas Linux Fest also proved to be noteworthy. Since OSCON events are usually outside of our price range for sponsorship, we were entirely grateful to Red Hat for allowing us to share their booth. Both events were headed by Jon Disnard (parasense) for Fedora, and we are also lucky to have Adam Miller (maxamillion) in the area, who helped on short notice. Both events were successful in explaining the importance of open source software and how Fedora plays a vital role as a leader in technical and inventive ideas that feed right back into the software. These two events were the only ones in the Midwest region with (somewhat) local Ambassadors available.

Texas Linux Fest will be held in the fall of 2017. We will see what that brings for attendance, hopefully more, since the local college was in summer recess during the last event in 2016.

US northwest events

Our local Northwest Ambassador headed up two events this past year, Linux Fest Northwest and SeaGL, both in the host city of Seattle, Washington. Both events were extremely effective and drew large attendance. Jeff Sandys (jsandys), who has been with the program for eight years, is Fedora’s Seattle-area local Ambassador. He has been attending and planning events in the area for some time now. Although we do have some others in the area, Jeff is our active Ambassador for the Pacific Northwest. Thanks to Laura Abbott (labbott) for also helping us with some of our events in Seattle this year on short notice.

US east coast

Fedora Ambassadors attend Bitcamp 2016 at the University of Maryland (left to right: Chaoyi Zha, Justin W. Flory, Mike DePaulo, Corey Sheldon)

The East coast is always busy with events during the year. Our major events include Southeast Linux Fest, Ohio Linux Fest, LISA, Software Freedom Day, and some of the smaller events such as BrickHack, Bitcamp, HackMIT, NASA Space Apps, and FOSSCON. Some key individuals in the planning and event attendance are Ben Williams (kk4ewt), Corey Sheldon (linuxmodder), Justin W. Flory (jflory7), Nick Bebout (nb), Chaoyi Zha (cydrobolt), Dan Mossor (danofsatx) and Michael DePaulo (mikedep333).

Southeast Linux Fest usually has the largest showing of Ambassadors from the Midwest and southeast corners of the country. This past year, we had seven Ambassadors in attendance, which gave us the flexibility to make ourselves available for other activities during the event. As usual, the Amateur Radio exam was administered by Ben Williams and Nick Bebout, along with other smaller activities.

Fedora Ambassadors invite BrickHack attendees to join them at the “hacker table” to spend time hanging out with the Fedora community

Ohio Linux Fest is another event we normally attend, in the greater Columbus, OH area. It usually draws from surrounding states such as Indiana as well, since there has been no event in the Indianapolis area for the past few years. Sadly, that absence follows the loss of one of our own Ambassadors, Matthew Williams (Lord Drachenblut), who lost his battle with cancer… you will be missed.

Some of the smaller events on the east coast (headed up by Corey Sheldon and Justin W. Flory) were also successful in delivering a powerful message to the Free and Open Source community. Even though they did not draw attendance on the scale of Southeast Linux Fest or Linux Fest Northwest, the delivery of Fedora was there. We weren’t handing out hundreds of media discs or stickers in volume, but the small, sustainable word of what we are about spreads quickly from a small event through the local Linux Users Groups. The feedback we received was nothing short of wonderful. You will always get the hard-liner folks who use only X or Y and will never consider Z; when asked why they are not willing to try a new or different experience, they can never give an answer supporting what they use in an operating system. Maybe it’s a knowledge factor, or a specific equipment or hardware configuration, who knows. But those individuals will still take swag and media from our table; maybe they are actually Fedora users but don’t want their hard-lined friends to know.

Reflecting back

Our Ambassadors are the keys to our success. Without the outstanding group we currently have, I do not think we would be where we are today. Several new Ambassadors joined the group during the year.

We hope the next year will bring us more Ambassadors and more ideas to the table for the best operating system in the open source category.

Event reports

Here are a few links to event reports.

LFNW

LISA 2016

SELF 2016

Ohio Linux Fest

SeaGL

BrickHack 2016

Bitcamp 2016

HackMIT 2016


Image courtesy of Travis Torres – originally posted to Unsplash as “Untitled“. Modifications by Justin W. Flory.

The post North America and Fedora: Year in Review appeared first on Fedora Community Blog.

How to install and run Genymotion emulator on Fedora 25

Posted by Luca Ciavatta on February 17, 2017 08:00 AM

Genymotion Android emulator for all

Genymotion is an Android emulator based on VirtualBox, but it doesn’t require you to install VirtualBox separately, since it is bundled with the installer. It can emulate specific devices and lets you install, run, and test apps on them, which makes it great for showcasing apps on your PC, projecting them on a wide screen, and making smooth, full-speed screencasts. Genymotion is a very fast Android emulator, and since VirtualBox is cross-platform, Genymotion works on Windows, Mac, and Linux too.

Genymotion download

Genymotion is free for personal use, so you can download it from the official site and start playing with it. Note for Fedora users: the site provides no RPM downloads, no repositories, nothing for Fedora-based distros. Fortunately, the file available on the site is a generic .bin installer rather than a Debian-specific package.

How to install and run it on Fedora 25

Judging from the comments on my previous article, How to install Genymotion emulator on Fedora, a lot of people have had trouble installing Genymotion on the latest version of Fedora. So I decided to do a fresh install of Fedora 25 in a virtual machine and try installing the emulator on it.

To get started, go to the Genymotion website and register a user account. You will be able to download the files once registration is complete.

Go to the Genymotion registration page, fill in all the fields with your personal information, and sign up. You will then receive a validation e-mail; click on the validation link to finish the registration process. Once that is done, you can sign in to the Genymotion website, go to the download page, and download the appropriate file for your architecture. It’s a generic Linux .bin file, so you can also use it on Fedora systems (64-bit only).

Genymotion launch

Next, we need a few tweaks (they may not be necessary on your system). Install the Virtualization group, dkms, and dependencies:

  sudo dnf groupinstall "Virtualization"
  sudo dnf install dkms kernel-devel kernel-headers

Install VirtualBox from the Oracle website: download the right version from the Download VirtualBox for Linux Hosts page, click the Fedora 25 link, and press Install in GNOME Software. After that, remember to download and install the VirtualBox Extension Pack; just double-click it after the download. Easy!
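If you prefer the command line, here is a minimal sketch of the same steps, assuming you downloaded both files from virtualbox.org into ~/Downloads (the exact file names depend on the VirtualBox version):

cd ~/Downloads
# Install the Fedora 25 RPM downloaded from virtualbox.org
sudo dnf install ./VirtualBox-*.rpm
# Register the matching Extension Pack (file name varies with the version)
sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-*.vbox-extpack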

Genymotion emulator

Now we can properly install Genymotion:

cd ~/Downloads
sudo mv genymotion-version-ubuntu16_x64.bin /opt    # file name depends on the version you downloaded
cd /opt
sudo chmod +x genymotion-version-ubuntu16_x64.bin   # make the installer executable
sudo ./genymotion-version-ubuntu16_x64.bin          # installs for all users; run without 'sudo' to install only for the current user

Make a couple of tweaks for Fedora 25:

cd /opt/genymobile/genymotion/
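# Move the bundled libxcb/libdrm aside so Genymotion falls back to Fedora's system copies
# (the bundled libraries are assumed to be what keeps the emulator from starting on Fedora 25)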
sudo mv libxcb.so.1 libxcb.so.1.bak
sudo mv libdrm.so.2 libdrm.so.2.bak

Follow the installer’s instructions and, finally, start Genymotion and play with it:

cd /opt/genymobile/genymotion/   # only needed if you changed directory
./genymotion

You can also create a .desktop file so you can launch Genymotion from your application menu with an icon.
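Here is a minimal sketch of such a launcher, for example saved as ~/.local/share/applications/genymotion.desktop. The Exec path assumes the default /opt/genymobile/genymotion install location used above, and the Icon path is an assumption that may differ between Genymotion releases:

[Desktop Entry]
Type=Application
Name=Genymotion
Comment=Android emulator based on VirtualBox
# Assumes the default install prefix used earlier in this article
Exec=/opt/genymobile/genymotion/genymotion
# Icon path is a guess; point it at any icon you prefer
Icon=/opt/genymobile/genymotion/icons/icon.png
Terminal=false
Categories=Development;Emulator;

After saving the file, the launcher should appear in your desktop’s application overview.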

Saving laptop power with powertop

Posted by Fedora Magazine on February 17, 2017 08:00 AM

If there’s one thing you want from a laptop, it’s long battery life. You want every drop of power you can get to work, read, or just be entertained on a long jaunt. So it’s good to know where your power is going.

You can use the powertop utility to see what’s drawing power when your system’s not plugged in. The utility runs only in the terminal, so you’ll need to open a terminal to use it. Then run this command to install it:

sudo dnf install powertop

powertop needs access to hardware to measure power usage. So you have to run it with special privileges too:

sudo powertop

The powertop display looks similar to this screenshot. Power usage on your system will likely be different:

The utility has several screens. You can switch between them using the Tab and Shift+Tab keys. To quit, hit the Esc key. The shortcuts are also listed at the bottom of the screen for your convenience.

The utility shows you power usage for various hardware and drivers. But it also displays interesting numbers like how many times your system wakes up each second. (Processors are so fast that they often sleep for the majority of a second of uptime.)

If you want to maximize battery power, you want to minimize wakeups. One way to do this is to use powertop‘s Tunables page. “Bad” indicates a setting that’s not saving power, although it might be good for performance. “Good” indicates a power saving setting is in effect. You can hit Enter on any tunable to switch it to the other setting.

The powertop package also provides a service that automatically sets all tunables to “Good” for optimal power saving. To use it, run this command:

sudo systemctl start powertop.service

If you’d like the service to run automatically when you boot, run this command:

sudo systemctl enable powertop.service

Caveat about this service and tunables: Certain tunables may risk your data, or (on some odd hardware) may cause your system to behave erratically. For instance, the “VM writeback timeout” setting affects how long the system waits before writing changed data to storage. This means a power saving setting trades off data security. If the system loses all power for some reason, you could lose up to 15 seconds of changed data, rather than the default 5. However, for most laptop users this isn’t an issue, since your system should warn you about low battery.
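If you’d rather not enable the service, recent powertop versions can also apply all the tunables once per boot, or write an HTML report you can study or share. A quick sketch (the report file name is just an example):

sudo powertop --auto-tune                     # apply all "Good" settings once, without the service
sudo powertop --html=powertop-report.html    # write a detailed report you can open in a browser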

PHP version 7.0.16 and 7.1.2

Posted by Remi Collet on February 17, 2017 07:27 AM

RPM of PHP version 7.1.2 are available in remi-php71 repository for Fedora 23-25 and Enterprise Linux (RHEL, CentOS).

RPM of PHP version 7.0.16 are available in remi repository for Fedora 25 and in remi-php70 repository for Fedora 22-24 and Enterprise Linux (RHEL, CentOS).

PHP version 5.5 has reached its end of life and is no longer maintained by the project.

These versions are also available as Software Collections.

No security fix this month, so no update for 5.6.30.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

Replacement of default PHP by version 7.0 installation (simplest):

yum-config-manager --enable remi-php70
yum update

Parallel installation of version 7.0 as Software Collection (x86_64 only):

yum install php70
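With either approach you can verify which interpreter you get afterwards. A quick sketch, assuming the scl utility is installed and the collection name matches the php71/php70 packages above:

php --version                      # the default PHP after the repository switch and 'yum update'
scl enable php71 'php --version'   # the PHP 7.1 Software Collection, leaving the base PHP untouched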

And soon in the official updates:

To be noticed:

  • EL7 rpms are built using RHEL-7.2
  • EL6 rpms are built using RHEL-6.8
  • a lot of new extensions are also available, see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php56 / php70)

Getting started with Pagure CI

Posted by Adam Williamson on February 17, 2017 06:50 AM

I spent a few hours today setting up a couple of the projects I look after, fedfind and resultsdb_conventions, to use Pagure CI. It was surprisingly easy! Many thanks to Pingou and Lubomir for working on this, and of course Kevin for helping me out with the Jenkins side.

You really do just have to request a Jenkins project and then follow the instructions. I followed the step-by-step, submitted a pull request, and everything worked first time. So the interesting part for me was figuring out exactly what to run in the Jenkins job.

The instructions get you to the point where you’re in a checkout of the git repository with the pull request applied, and then you get to do…whatever you can given what you’re allowed to do in the Jenkins builder environment. That doesn’t include installing packages or running mock. So I figured what I’d do for my projects – which are both Python – is set up a good tox profile. With all the stuff discussed below, the actual test command in the Jenkins job – after the boilerplate from the guide that checks out and merges the pull request – is simply tox.

First things first, the infra Jenkins builders didn’t have tox installed, so Kevin kindly fixed that for me. I also convinced him to install all the variant Python version packages – python26, and the non-native Python 3 packages – on each of the Fedora builders, so I can be confident I get pretty much the same tox run no matter which of the builders the job winds up on.

Of course, one thing worth noting at this point is that tox installs all dependencies from PyPI: if something your code depends on isn’t in there (or installed on the Jenkins builders), you’ll be stuck. So another thing I got to do was start publishing fedfind on PyPI! That was pretty easy, though I did wind up cribbing a neat trick from this PyPI issue so I can keep my README in Markdown format but have setup.py convert it to rst when using it as the long_description for PyPI, so it shows up properly formatted, as long as pypandoc is installed (but works even if it isn’t, so you don’t need pandoc just to install the project).
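The general shape of that trick is only a few lines in setup.py. Here is a rough sketch, assuming pypandoc’s convert_file API, rather than the exact code in fedfind:

try:
    import pypandoc
    # Convert the Markdown README to reStructuredText for the PyPI long_description
    long_description = pypandoc.convert_file('README.md', 'rst')
except ImportError:
    # pypandoc (or pandoc) isn't installed: fall back to the raw Markdown text
    with open('README.md') as readme:
        long_description = readme.read()

The resulting long_description then gets passed to setup() as usual.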

After playing with it for a bit, I figured out that what I really wanted was to have two workflows. One is to run just the core test suite, without any unnecessary dependencies, with python setup.py test – this is important when building RPM packages, to make sure the tests pass in the exact environment the package is built in (and for). And then I wanted to be able to run the tests across multiple environments, with coverage and linting, in the CI workflow. There’s no point running code coverage or a linter while building RPMs, but you certainly want to do it for code changes.

So I put the install, test and CI requirements into three separate text files in each repo – install.requires, tests.requires and tox.requires – and adjusted the setup.py files to do this in their setup():

install_requires = open('install.requires').read().splitlines(),
tests_require = open('tests.requires').read().splitlines(),

In tox.ini I started with this:

deps=-r{toxinidir}/install.requires
     -r{toxinidir}/tests.requires
     -r{toxinidir}/tox.requires

so the tox runs get the extra dependencies. I usually write pytest tests, so to start with in tox.ini I just had this command:

commands=py.test

Pytest integration for setuptools can be done in various ways, but I use this one. Add a class to setup.py:

import sys
from setuptools import setup, find_packages
from setuptools.command.test import test as TestCommand

class PyTest(TestCommand):
    user_options = [('pytest-args=', 'a', "Arguments to pass to py.test")]

    def initialize_options(self):
        TestCommand.initialize_options(self)
        self.pytest_args = ''
        self.test_suite = 'tests'

    def run_tests(self):
        #import here, cause outside the eggs aren't loaded
        import pytest
        errno = pytest.main(self.pytest_args.split())
        sys.exit(errno)

and then this line in setup():

cmdclass = {'test': PyTest},

And that’s about the basic shape of it. With an envlist, we get the core tests running both through tox and setup.py. But we can do better! Let’s add some extra deps to tox.requires:

coverage
diff-cover
pylint
pytest-cov

and tweak the commands in tox.ini:

commands=py.test --cov-report term-missing --cov-report xml --cov fedfind
         diff-cover coverage.xml --fail-under=90
         diff-quality --violations=pylint --fail-under=90

By adding a few args to our py.test call we get a coverage report for our library with the pull request applied. The subsequent commands use the neat diff_cover tool to add some more information. diff-cover basically takes the full coverage report (coverage.xml is produced by --cov-report xml) and considers only the lines that are touched by the pull request; the --fail-under arg tells it to fail if there is less than 90% coverage of the modified lines. diff-quality runs a linter (in this case, pylint) on the code and, again, considers only the lines changed by the pull request. As you might expect, --fail-under=90 tells it to fail if the ‘quality’ of the changed code is below 90% (it normalizes all the linter scores to a percentage scale, so that really means a pylint score of less than 9.0).

So without messing around with shipping all our stuff off to hosted services, we get a pretty decent indicator of the test coverage and code quality of the pull request, and it shows up as failing tests if they’re not good enough.

It’s kind of overkill to run the coverage and linter on all the tested Python environments, but it is useful to do it at least on both Python 2 and 3, since the pylint results may differ, and the code might hit different paths. Running them on every minor version isn’t really necessary, but it doesn’t take that long so I’m not going to sweat it too much.

But that does bring me to the last refinement I made, because you can vary what tox does in different environments. One thing I wanted for fedfind was to run the tests not just on Python 2.6, but with the ancient versions of several dependencies that are found in RHEL / EPEL 6. And there’s also an interesting bug in pylint which makes it crash when running on fedfind under Python 3.6. So my tox.ini really looks this:

[tox]
envlist = py26,py27,py34,py35,py36,py37
skip_missing_interpreters=true
[testenv]
deps=py27,py34,py35,py36,py37: -r{toxinidir}/install.requires
     py26: -r{toxinidir}/install.requires.py26
     py27,py34,py35,py36,py37: -r{toxinidir}/tests.requires
     py26: -r{toxinidir}/tests.requires.py26
     py27,py34,py35,py36,py37: -r{toxinidir}/tox.requires
     py26: -r{toxinidir}/tox.requires.py26
commands=py27,py34,py35,py36,py37: py.test --cov-report term-missing --cov-report xml --cov fedfind
         py26: py.test
         py27,py34,py35,py36,py37: diff-cover coverage.xml --fail-under=90
         # pylint breaks on functools imports in python 3.6+
         # https://github.com/PyCQA/astroid/issues/362
         py27,py34,py35: diff-quality --violations=pylint --fail-under=90
setenv =
    PYTHONPATH = {toxinidir}

As you can probably guess, what’s going on there is we’re installing different dependencies and running different commands in different tox ‘environments’. pip doesn’t really have a proper dependency solver, which – among other things – unfortunately means tox barfs if you try and do something like listing the same dependency twice, the first time without any version restriction, the second time with a version restriction. So I had to do a bit more duplication than I really wanted, but never mind. What the files wind up doing is telling tox to install specific, old versions of some dependencies for the py26 environment:

[install.requires.py26]
cached-property
productmd
setuptools == 0.6.rc10
six == 1.7.3

[tests.requires.py26]
pytest==2.3.5
mock==1.0.1

tox.requires.py26 is just shorter, skipping the coverage and pylint bits, because it turns out to be a pain trying to provide old enough versions of various other things to run those checks with the older pytest, and there’s no real need to run the coverage and linter on py26 as long as they run on py27 (see above). As you can see in the commands section, we just run plain py.test and skip the other two commands on py26; on py36 and py37 we skip the diff-quality run because of the pylint bug.

So now on every pull request, we check the code (and tests – it’s usually the tests that break, because I use some pytest feature that didn’t exist in 2.3.5…) still work with the ancient RHEL 6 Python, pytest, mock, setuptools and six, check it on various other Python interpreter versions, and enforce some requirements for test coverage and code quality. And the package builds can still just do python setup.py test and not require coverage or pylint. Who needs github and coveralls? 😉

Of course, after doing all this I needed a pull request to check it on. For resultsdb_conventions I just made a dumb fake one, but for fedfind, because I’m an idiot, I decided to write that better compose ID parser I’ve been meaning to do for the last week. So that took another hour and a half. And then I had to clean up the test suite…sigh.

Bluetooth in Fedora

Posted by Nathaniel McCallum on February 16, 2017 08:53 PM

So… Bluetooth. It’s everywhere now. Well, everywhere except Fedora. Fedora does, of course support bluetooth. But even the most common workflows are somewhat spotty. We should improve this.

To this end, I’ve enlisted the help of Don Zickus, kernel developer extraordinaire, and Adam Williamson, the inimitable Fedora QA guru. The plan is to create a set of user tests for the most common bluetooth tasks. This plan has several goals.

First, we’d like to know when stuff is broken. For example, the recent breakage in linux-firmware. Catching this stuff early is a huge plus.

Second, we’d like to get high quality bug reports. When things do break, vague bug reports often cause things to sit in limbo for a while. Making sure we have all the debugging information up front can make reports actionable.

Third, we’d (eventually) like to block a new Fedora release if major functionality is broken. We’re obviously not ready for this step yet. But once the majority of workflows work on the hardware we care about, we need to ensure that we don’t ship a Fedora release with broken code.

To this end we are targeting three workflows which cover the most common cases:

  • Keyboards
  • Headsets
  • Mice

For more information, or to help develop the user testing, see the Fedora QA bug. Here’s to a better future!