Fedora People

Bodhi 3.5.1 released

Posted by Bodhi on March 21, 2018 08:54 PM

Bug fixes

  • Use correct N, V, R splitting for module builds and add stream support (#2226).
  • Fixed Release.version_int for modular releases (#2232).


Both 3.5.1 fixes were submitted by Patrick Uiterwijk.

New badge: The Nexts!

Posted by Fedora Badges on March 21, 2018 12:57 PM
The Nexts: You helped to rock fedora.next on the web, pump up the volume and dance!

ESPEasy for ESP32

Posted by Fabian Affolter on March 21, 2018 12:51 PM

ESPEasy is a very nice framework to get started quickly if you want to create a sensor node with an ESP8266. There are many different flavors available. The development boards, like NodeMCU, are cheap and easy to get. ESPEasy is especially useful for beginners or lazy people, as there is no need to re-invent the wheel, i.e., to write or collect all the parts (reading the values from attached sensors, MQTT integration, OTA updates, etc.) by yourself. Also, you can switch sensors at runtime without re-compiling: just update the configuration and you are good to go.

For the ESP32 (the successor of the ESP8266) there are also frameworks available, like esphomelib and uPyEasy. esphomelib supports features which make it easy to integrate with Home Assistant, but it lacks essential features like I2C support and requires additional work to get it going. Unfortunately, it is not for beginners. uPyEasy is MicroPython-based. Using MicroPython with the ESP32 is really fun, but uPyEasy also lacks features and feels like alpha software. I’m really hoping that this will change soon, as using Python on microcontrollers is very straightforward.

The team behind ESPEasy was, of course, not sitting around. They ported ESPEasy to the ESP32 and called it simply ESPEasy32.

To get ESPEasy32 on your ESP32, you need a tool. We are going to use esptool, which will handle the flashing process. On Fedora execute:

$ sudo dnf -y install esptool

Download ESPEasy32_R20100.zip and unzip it. Change to the directory where the content of the archive was extracted. Let’s erase the flash first. Adjust the port if needed.

$ esptool --port /dev/ttyUSB0 --chip esp32 erase_flash
esptool.py v2.3.1
Chip is ESP32D0WDQ6 (revision 0)
Features: WiFi, BT, Dual Core
Uploading stub...
Running stub...
Stub running...
Erasing flash (this may take a while)...
Chip erase completed successfully in 1.1s
Hard resetting via RTS pin...

Now we can flash the ESP32.

$ esptool --port /dev/ttyUSB0 --chip esp32 --baud 256000 --before default_reset \
  --after hard_reset write_flash -z --flash_mode dio --flash_freq 80m \
  --flash_size detect 0xe000 boot_app0.bin 0x1000 bootloader.bin 0x10000 \
  ESPEasy32_R20100.bin 0x8000 ESPEasy.ino.partitions.bin

After the reboot there will be an access point available with the SSID ESP_Easy_0 and the password configesp. Connect to that AP and open the device’s web interface in a browser to perform the initial setup. Then follow the setup steps; even if the frontend doesn’t match exactly, they will give you a hint on how to configure your ESP32.


Event Report for Ohio Linux Festival 30 September – 01 October 2017

Posted by Fedora Community Blog on March 21, 2018 08:15 AM

Ohio Linux Festival, Hyatt Regency Columbus, Ohio 29-30 September 2017

Event Report:
Andrew Ward (award3535), Julie Ward (jward78), Ben Williams (kk4ewt), Cathy Williams (cwilla)

The Fedora community has been a steadfast supporter of this event for the past 6 years. Ohio Linux Festival is the only major Linux community event located in the northern Midwest region, and with no Texas Linux Festival this year it was the only major event in the Midwest. Attendance in the previous few years had gone down due to venue and event staff changes, but in 2017 the event brought in just under a thousand registered enthusiasts, as OLF event president Beth Lynn Eicher (also a Fedora Ambassador) informed us on the morning of 30 September while we were getting set up. This did not count the walk-ins who showed up the morning the Expo opened. The attendance was most impressive compared to the previous years’ events, and it could be soundly stated that there were upwards of 1,100 people at the event.

The first day was arranged for professional registrants interested in training and was targeted only at that specific group. This gave us the perfect opportunity to explore the venue, find out where our booth was located, and learn when set-up could commence for the Expo. The time also allowed us to check in with the event and get our badges settled. I did peek into a couple of the training sessions to get an idea of what the attendance was like. The rooms were at approximately 40% capacity during the talks, which was not a bad showing for the professionals. You can look at what was scheduled at https://ohiolinux.org/schedule/. The Expo opening on the 30th was the main event and had a full schedule of talks; with the expo hall opening at 8 a.m., we got there early and set up the table for the day’s business to begin. The first talk was not scheduled to begin until 9 a.m. The expo got off to a slow start but rapidly began to come to life after the first talk was over. We decided to make a few DVDs using the duplicator for the most common desktop environments, for distribution to interested individuals, and the Fedorator was set up and ready to go as well.

Our focus for the event was centered on the many varieties of desktops Fedora has to offer and the versatility of the desktop environments. Many who visited our table were aware of the Workstation environment and were already using the software, but were looking for something similar to the previous Linux software they had been using. Once we inquired what they were previously using, we directed them to the desktop most similar to the system previously installed on their computers. For example, we had quite a lot of Mint users who had installed Workstation and were not familiar with GNOME. This made for an easy explanation, and we pointed out the features in Cinnamon and MATE. Needless to say, they were unaware that those desktop environments were available from Fedora. It seemed to be the most common point of our day. Cinnamon happened to be our most popular desktop to hand out, with 70 DVDs given to those who were interested. When we asked how they felt about Fedora, almost all were very pleased with what they were using within the Fedora community (those who were already running Fedora) and with what they saw at our booth.
The Fedorator was a talking point as well. We had a lot of inquiries about the function of the unit. After showing off the equipment used and its purpose, we demonstrated how to use the Fedorator. The unit had a definite positive impact on the table. Several individuals went off searching for USB drives so that they could use the Fedorator to create a bootable USB key. We had one individual who was really interested in getting one for use in his classroom. He is a high school teacher who has incorporated Linux distributions into the curriculum for his computer technology class, and he was very interested in how to get one or put one together. He described how he has the students try a Linux distribution in the labs; most of the students have kept the Linux distribution on their personal computers and taken the media home to use on other machines. I will provide his contact information via separate correspondence to those who can provide the required material and specifics, and maybe a Fedorator can be donated. We made no promises but will get him pointed in the right direction. I believe this is exactly why we attend these community events: this enthusiastic teacher was helping high school students with a wide variety of software and was very impressed with Fedora and what we do. Furthermore, I believe we could help him and his community with the promotion of open source software, and Fedora is a prime opportunity to deliver a stable and usable Linux operating system. It is hard to describe this teacher’s enthusiasm in words; his expressions said it all.
Throughout the day we answered many questions about which desktops are available and about some of the upcoming changes in Fedora. There were many questions about how Red Hat is involved, as well as "why should I use Fedora over what I currently use?" At every event we attend, the "why should I use Fedora" question comes up at some point, most often when the Ubuntu group attends, because you always get the hardcore Ubuntu users when they have a table set up. This year Ubuntu was not present, but the same question came up anyway (it always does). This time a Debian user asked why he should shift to Fedora. We explain the differences between the two operating systems and the support network set up around Fedora, but we leave the decision to the user. On almost every occasion that same person comes by the table again, picks up a DVD or Fedora stickers, and says "I will give it a try." That one person coming back to pick up media from our table is always the reason why we (Fedora) are there: to support those who want to switch. One comment stuck out during the event. One individual approached us and asked why our table was so busy: "everything is happening here so I had to come and see why; on the other side of the expo no one is really there, everyone is at your table."

The day continued on with various questions from individuals who were curious about what and who we are. We truly had a very busy day at the festival. Throughout the day we had a survey available for those who wanted to give us feedback on our product, the booth, and anything else they wished to feed back to Fedora. We tried to conduct this electronically, but found that most did not want to enter any information and have it publicly sent out on the internet, or feared social media posts. The response we received to the hand-written surveys was a lot better than expected; we did not anticipate the overwhelming response that we got. All of our printed copies were filled out and returned to the booth. There was a wide variety of questions, ranging from how you heard about Fedora to what operating system you currently use. We also asked whether the individuals would be interested in getting involved in the project, with some surprising results. Here are some of the results from the surveys:






[Survey result charts; recovered titles and labels: Fedora Booth Experience; Fedora Future Involvement in the Project; Availability of Various Desktops; How Did You Hear About Fedora (Already Using / Festival or Event / Other, Word of Mouth); O/S Currently Using (Some Users Identified Dual Boot); Member of Linux Users Groups (LUG)]

There were some significant statements on the surveys. One in particular: a user had read that Linus Torvalds preferred to use Fedora, so he immediately shifted to Fedora and has been using it since. The most interesting discovery from the surveys was how many heard about Fedora through word of mouth. This truly showed that Fedora is gaining popularity within the Linux and open source community through person-to-person communication, with this small sample showing the community growing. It is hard to say what drives Fedora’s popularity: it may be its stability and technical sophistication, the support channels available, or even the project itself as the flagship.

Another good point revealed by the survey was the probability of future involvement in the project. Most responses were favorable, while the noes were limited to insufficient time or experience level being a factor keeping them from getting involved. Whatever their reasoning, fewer than 33% expressed no interest in becoming involved with the project. Interestingly enough, the survey also pointed out that most of the individuals who filled it out were not members of any Linux Users Group. This number is quite surprising considering that the area has several groups that are active in the community, and those who traveled from Indiana and filled out the survey also have a few quite active groups nearby.

In summary, we felt that the Ohio Linux Festival was a very successful event. The fact that attendance was up significantly shows growing interest in the Linux community in the area. The attendance truly shows an interest in the open source community, and we can safely state that Fedora made a large impact on the event and the community, as evidenced by how many people visited our table during the Expo and by our survey, which shows that a very good experience was had by all who visited the Fedora table.

The post Event Report for Ohio Linux Festival 30 September – 01 October 2017 appeared first on Fedora Community Blog.

Python 3.7 now available in Fedora

Posted by Fedora Magazine on March 21, 2018 08:00 AM

On February 28th 2018, the second beta of Python 3.7 was released. This new version contains lots of fixes and, notably, several new features available for everyone to test. The pre-release of Python 3.7 is available not only in Fedora Rawhide but also in all other Fedora versions. Read more about it below.

Installation and basics

It’s easy to install Python 3.7 on supported Fedora releases. Run this command:

sudo dnf install python37

Then, run the command python3.7 to test things out. You can also create a virtual environment using the new version, or add py37 to your project’s tox.ini and start testing on the freshest Python available:

$ python3.7 -m venv venv
$ . venv/bin/activate
(venv) $ python --version
Python 3.7.0b2

There are no extra libraries or software for Python 3.7 packaged in Fedora yet. However, the whole ecosystem is available through virtualenv and pip:

(venv) $ python -m pip install requests  # or any other package
Collecting requests
  Using cached requests-2.18.4-py2.py3-none-any.whl
Collecting idna<2.7,>=2.5 (from requests)
  Using cached idna-2.6-py2.py3-none-any.whl
Collecting certifi>=2017.4.17 (from requests)
  Using cached certifi-2018.1.18-py2.py3-none-any.whl
Collecting urllib3<1.23,>=1.21.1 (from requests)
  Using cached urllib3-1.22-py2.py3-none-any.whl
Collecting chardet<3.1.0,>=3.0.2 (from requests)
  Using cached chardet-3.0.4-py2.py3-none-any.whl
Installing collected packages: idna, certifi, urllib3, chardet, requests
Successfully installed certifi-2018.1.18 chardet-3.0.4 idna-2.6 requests-2.18.4 urllib3-1.22

New Python 3.7 feature: Data classes

Here’s an example of a killer new feature in Python 3.7.

How many times have you written out self.attribute = attribute in your __init__ method? For most Python devs, the answer is “a lot.” Combined with __repr__ and support for comparisons, there’s a lot of boilerplate involved in creating a class that only holds a bunch of data. The excellent attrs project solves many of these issues. Now a carefully selected subset of the ideas in attrs is making its way to the standard library.
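To see how much boilerplate that is, here is roughly what the plain-class equivalent of the example below looks like when written by hand (an illustrative sketch, not code from the original article):

class Point:
    # Hand-written equivalent of the data class shown below.
    def __init__(self, x, y, z=0.0):
        self.x = x
        self.y = y
        self.z = z

    def __repr__(self):
        return f'Point(x={self.x!r}, y={self.y!r}, z={self.z!r})'

    def __eq__(self, other):
        # Field-by-field comparison, like the generated __eq__.
        if other.__class__ is not self.__class__:
            return NotImplemented
        return (self.x, self.y, self.z) == (other.x, other.y, other.z)

The @dataclass decorator generates all three of these methods from the annotated fields alone.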

Here’s an example of the feature in action:

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float = 0.0

>>> p = Point(1.5, 2.5)
>>> print(p)
Point(x=1.5, y=2.5, z=0.0)

Note the types are just documentation. They aren’t actually checked at runtime, though they will work with static checkers like mypy. Data classes are documented in PEP 557 for now. If the API feels too limited, remember the small scope is intentional. You can always switch to using the full attrs library if you need more features.
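For example, nothing stops you from passing the "wrong" types at runtime (a quick illustration, not from the original article):

>>> p = Point("a", "b")   # accepted at runtime; annotations are not enforced
>>> print(p)
Point(x='a', y='b', z=0.0)

A static checker such as mypy, on the other hand, would flag these arguments.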

This and other new features come with the release. Python 3.7 will become the default Python version in Fedora 29. If you spot any bugs, please report them in Fedora Bugzilla or directly upstream. If you have any questions, ask the Python SIG at python-devel@lists.fedoraproject.org or visit IRC Freenode channel #fedora-python.

Photo by Alfonso Castro on Unsplash.

How to delete your Facebook account?

Posted by Kushal Das on March 21, 2018 04:49 AM

I had been planning to delete my Facebook account for some time but never took the actual steps to do it. The recent news on how companies are using data from Facebook made me take that next step. And I know Snowden has been talking about these issues for a long time (feel free to read a recent interview); I should have done this before. I was just lazy.

First, download all your current information as an archive

Log in to Facebook and go to your settings page. There you will see a link saying Download a copy of your Facebook data. Click on that. It will ask for your password and then take some time to generate an archive, which you can download later.

Let us ask Facebook to delete the account

Warning: Once you delete your account, you cannot get your data back. So take the next steps only after thinking it through clearly (personally, I can say it is a good first step to slowly regain privacy).

Go to this link to see the following screen.

If you click on the blue Delete my account, it will open the next screen, where it will ask you to confirm your password, and also fill in the captcha text.

After this, you will see the final screen. It will take around 90 days to delete all of your information.

Remember to use long passphrases everywhere

Now you have deleted your account. But remember that it is just one single step toward privacy. There are various other things you can do. I think the next step should be your passwords. Read this blog post about how to generate long passphrases, and use those instead of short passwords. You should also use a proper password manager to save all of these passphrases.



Posted by Stephen Smoogen on March 21, 2018 01:06 AM
I have not done a general biography in a long time so figured I should put one out as a courtesy for people reading these blogs and emails I send out on various lists:

Who Am I?

My name is Stephen Smoogen, and I have been using computers for a very long time. According to a bug in a PHP website I was on years ago, I am over 400 years old, which would mean I was born on Roanoke Island with Virginia Dare. I think I slept a lot in the next 360 years as, according to my sister, my parents found me in a 'DO NOT RETURN TO SENDER' box outside their door. How she knew that when she is younger than me, I do not know.. but I have learned not to question her.

My first computer was a Heathkit Microcomputer Learning System ET-3400 that my Dad got at a swap meet when he was in the Navy in the 1970s. I had worked with my Dad on some other systems he fixed for coworkers, but that was mostly being bored while watching an oscilloscope and moving probes around boards every now and then. When I wanted to get a computer in the early 1980s, he said I had to show that I could actually program it, since an Apple ][ would have been a major investment for the family. I spent the summer learning binary and hexadecimal and doing the simple machine code that the book had in it. I also programmed a neighbour's Apple ][+ with every game I could from the public library's Creative Computing 101 Basic Games. My mom and dad saved up for an Apple and we got an Apple ][e in 1983, which I then used through high school. The first thing I learned about the Apple ][e was how different it was from the ][+. The older systems came with complete circuit diagrams and chip layouts. That had been the reason my dad wanted to get an Apple: he knew he could fix it if a chip went bad. The ][e did not come with that, and boy was Dad furious. "You don't buy a car with the engine welded shut. Don't buy a computer you can't work on." It seemed silly to me at the time, but it would become a founding principle for what I do.

During those years, I went with my dad and his coworkers to various computer clubs, where I learned how to play hack on a MicroVax running, I think, Ultrix or BSD. While I was interested in computers, I had decided I was going to university to get a degree in Astrophysics.. and the computers were just a hobby. Stubborn person that I am, I finally got the degree, though I kept finding computers to be more enjoyable. I played nethack and learned more about Unix on a Vax 11/750 running BSD 4.1, and became the system administrator of a Prime 300 running a remote telescope project. I moved over to an early version of LynxOS on i386 and helped port various utilities like sendmail over to it for a short time.

After college I still tried to work in Astrophysics by being a satellite operator for an X-ray observation system at Los Alamos. However, I soon ended up administering various systems to get them ready for an audit, and that turned into a full-time job working on a vast set of systems. I got married, and we moved to Illinois, where my wife worked on a graduate degree and I worked for a startup called Spyglass. I went to work for them because they had done scientific visualization, which Los Alamos used.. but by the time I got there, the company had pivoted to being a browser company with Enhanced Mosaic.

For the next 2 years I learned what it is like to be a small startup trying to grow against Silicon Valley and Seattle. I got to administer even more Unix versions than I had before, and I also saw how Microsoft was going to take over the desktop; Enhanced Mosaic was at the core of Internet Explorer. At the end of the two years, Spyglass had not been bought by Microsoft, and instead laid off the browser people to try to pivot once again as an embedded browser company at a different location. The company was about 15 years too soon for that, as the smart devices its plans treated as the near future didn't start arriving until 2015 or so.

Without a job, I took a chance to work for another startup in North Carolina called Red Hat. At a Linux conference, I had heard Bob Young give a talk about how you wouldn't buy a car with a welded bonnet, and it brought back my dad's grumpiness with Apple decades before. I realized that my work in closed source software had been one of continual grumpiness because I was welding shut the parts that other people needed open.

Because of that quote, I spent the next four years at Red Hat learning a lot about openness, startups and tech support. I found that the most important classes from my college years were psychology, not computer science. I also learned that being a "smart-mouthed know-it-all" doesn't work when there are people who are much smarter and know a lot more. I think by the time I burned out on 4 years of 80-hour weeks, I was a wiser person than when I came.

I went to work elsewhere for the next 8 years, but came back to Red Hat in 2009, and have worked in the Fedora Project as a system administrator since then. I have seen 15 Fedora Linux releases go out the door, and I have come to really love working on the slowest-moving part of Fedora, EPEL. I have also finally used some of the astrophysics degree, as the thermodynamics and statistics have been useful for the graphs that various Fedora Project leaders have used to show how each release and the community have continually changed.

Tracing Ruby apps with PCP

Posted by Lukas "lzap" Zapletal on March 21, 2018 12:00 AM


PCP offers two APIs for instrumented applications. The first one to mention is the MMV agent, which uses memory-mapped files for capturing high-resolution data with minimum performance impact. Currently available languages for MMV instrumentation include C/C++, Python and Perl, plus native Java, Golang and Rust ports. The second agent and approach is called PMDA trace, with its higher-level API. It uses TCP sockets and a simple API for capturing time spent, counters, trace points and raw value observations.

The tracing API is not ideal for measuring time spent processing web requests, but it can still be useful for tracing things like cron jobs. The API (C, Fortran and Java) is described in the pmdatrace(3) man page and it is trivial, therefore I decided to create a simple Ruby wrapper, which only took one evening. The wrapper offers a one-to-one mapping of all functions in Ruby and also a higher-level Ruby approach with blocks and more user-friendly method naming.

The first step is to install the trace PMDA (agent) and the pcptrace rubygem (an RPM is not yet available; include files and a compiler are required to build the rubygem). Make sure that the firewall is not blocking TCP port 4323 on localhost, which is used to send data from the application. Also, when using SELinux, the pmcd daemon will be blocked from binding, therefore it is necessary to enable a boolean flag:

setsebool -P pcp_bind_all_unreserved_ports 1

Then install and configure necessary software:

yum -y install pcp-pmda-trace pcp-devel @development-tools
cd /var/lib/pcp/pmdas/trace
gem install pcptrace

Use the pcp_pmcd_selinux(8) man page for more details about SELinux booleans for PCP. If you encounter any SELinux problem with PCP, please let the PCP maintainers know as they will fix it promptly (pcp-team@redhat.com, or open a bugzilla). Alternatively, the workaround is to put the PCP daemons into permissive mode, keeping the rest of the system confined:

semanage permissive -a pcp_pmcd_t

Using the trace API is fairly straightforward. Each function returns an integer status code; if it is non-zero, the function pmtraceerrstr (PCPTrace::errstr in the Ruby wrapper) can be used to look up the error message if needed:

cat ruby_trace_example.rb
#!/usr/bin/env ruby

require "pcptrace"

# reached a point in source code

# observation of an arbitrary value
PCPTrace::obs("an_observation", 130.513)

# a counter - increasing or decreasing
PCPTrace::counter("a_counter", 1)

# time spent in a transaction (or block)
# ...

# transactions must be aborted (e.g. exception)

# all methods return non-zero code
result = PCPTrace::counter("a_counter", -5)
puts("Error: " + PCPTrace::errstr(result)) if result != 0

There is also more Ruby-friendly API available, see README file for more info: https://github.com/lzap/ruby-pcptrace

Tracing metrics are available in the trace namespace:

pminfo trace

The trace agent uses a rolling-window technique to calculate rate, total, average, min and max values for some metrics, which is a very useful feature for tracing durations. By default the average is recomputed every five seconds for a period covering the prior 60 seconds. To see trace counts converted to counts per second:

pmval trace.point.count

PCP also provides a rate for each individual metric; let’s view that with three-digit precision:

pmval -f3 trace.point.rate

Execute similar commands or use other tools like pmchart to see values for counter or observation (trace.observe.value, trace.counter.value) or to see count and rate (trace.transact.count, trace.transact.rate).

Transaction metrics (time spent) provide a total_time value (trace.transact.total_time) but also ave_time, min_time and max_time aggregated values. These are quite handy for ad-hoc troubleshooting.

There is also a helper utility, pmtrace, for emitting tracing events from scripts like cron jobs. Although the trace API is limited and multiple syscalls are made for every measurement, it is a good starting point for going further.

Fedora 28 and GNOME 3.28: New Features for Eastern Europe

Posted by Rafał Lużyński on March 20, 2018 10:56 PM

This time this is not fake, edited, patched, nor a custom build from COPR, but real screenshots of the unmodified downstream Fedora 28, planned for release on May 1 this year. Here is how the default calendar widget in GNOME Shell looks in Greek, Polish, and Ukrainian:

For those who can’t speak those languages: the major change here is that the month names are displayed in the correct grammatical form, both in dates and standalone. This is a new feature, or rather a new bugfix, in GNOME 3.28, which was released on March 14 and pushed to Fedora 28 (prerelease) stable updates today. The series of bugfixes in GNOME was preceded by a similar bugfix in glibc 2.27, released earlier this year.

What Is Eastern Europe

This term must be explained because it is ambiguous. Usually when we say eastern Europe we mean the eastern end of our continent (as opposed to western, northern, southern, and, last but not least, central). But in this context I mean the eastern half of Europe (as opposed to western, and nothing else). I often strongly emphasize that this feature is not just for Slavic languages but also for other language groups of our region: Baltic, Greek, partially also Finnish, and even some western languages like Catalan or Scottish Gaelic.

More Applications

Of course, dates are now displayed correctly in all applications, not just GNOME Shell. In most of them this happened automagically. A few of them, however, needed some minor updates to make sure that the month names are displayed in the genitive case only where needed, not just everywhere. Here is an example of correct month name display in GNOME Calendar, this time in Croatian:

Please note the difference between the nominative name for March (ožujak) and its correct genitive case as used in a date (ožujka; literally: of March).
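Under the hood this comes from the glibc 2.27 change to strftime(3): in the affected locales, %B inside a full date now yields the genitive form, while the new %OB conversion yields the standalone (nominative) form. A minimal Python sketch, assuming glibc 2.27+ and the hr_HR.UTF-8 locale installed (Python's time.strftime passes the format string through to the C library):

import locale
import time

# Assumes the Croatian locale is generated/installed on the system.
locale.setlocale(locale.LC_TIME, "hr_HR.UTF-8")

march = time.strptime("2018-03-21", "%Y-%m-%d")
print(time.strftime("%d. %B %Y.", march))  # full date, genitive: e.g. "21. ožujka 2018."
print(time.strftime("%OB", march))         # standalone month, nominative: e.g. "ožujak"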

Western European Languages

English does not need any of the new features but, while I was at it, I examined the date displays in some other western European languages and found that a few features were not supported. For example, some Romance languages (Spanish, Portuguese, etc.) also use the genitive case of both the month name and the year number, constructed by just adding the de preposition before them. This feature, although simple, was not supported so far, but now it has been added to GNOME 3.28. Here is a screenshot of the same calendar widget in Spanish:

Please note the correct header saying diciembre de 2017, as opposed to the incorrect diciembre 2017 displayed by older versions.

More Languages

The genitive case of month names is currently supported in the Fedora 28 prerelease in only 7 languages: Belarusian, Croatian, Greek, Lithuanian, Polish, Russian, and Ukrainian. But support for more languages is on the way: Catalan and Czech have been added to GLib, and they are already used if the latest GNOME is run on older systems. The support for these languages has also been pushed to glibc upstream and will eventually reach Fedora 28, but it has not as of today. However, it has already reached Fedora Rawhide. Since we have this chance, let’s take a look at a screenshot of GNOME in Fedora Rawhide in Catalan:

Please note the correct Catalan preposition of genitive case: de març (of March) vs. d’abril (of April).


I’d like to thank all the people from Fedora and GNOME communities and from the outer world who supported me in this challenge: Piotr Drąg, Mike Fabian, Zack Weinberg, Carlos O’Donell, Masha Leonova, Ihar Hrachyshka, Dmitry Levin, Igor Gnatenko, Charalampos Stratakis, Robert Buj, Philip Withnall, and more.

PS. If some date formats in these screenshots are incorrect please approach the respective translation teams.

Launching Custom Image VMs on Azure With Ansible

Posted by Adam Young on March 20, 2018 08:37 PM

Part of my job is making sure our customers can run our software in public clouds. Recently, I was able to get CloudForms Management Engine (CFME) to deploy to Azure. Once I got it done manually, I wanted to automate the deployment, and that means Ansible. It turns out that launching custom images from Ansible is not supported in the current GA version of the Azure modules, but it has been implemented upstream.

Ansible releases package versions here. I wanted the 2.5 build aligned with Fedora 27, which is RC3 right now. Install it using dnf:

sudo dnf install  http://releases.ansible.com/ansible/rpm/preview/fedora-27-x86_64/ansible-2.5.0-0.1003.rc3.fc27.ans.noarch.rpm

And then I can launch using the new syntax for the image dictionary. Here is the task fragment from my tasks/main.yml

- name: Create virtual machine
  azure_rm_virtualmachine:
    resource_group: "{{ az_resources }}"
    name: CloudForms
    vm_size: Standard_D1
    admin_username: "{{ az_username }}"
    admin_password: "{{ az_password }}"
    image:
      name: cfme-azure-
      resource_group: CFME-NE

Note the two dictionary values under image. This works, so long as the user has access to the image, even if it comes from a different resource group.

Big thanks to <sivel> and <jborean93> in #ansible on Freenode IRC for helping me get this to work.

Using alternative utils with JRE & JDK

Posted by Robbi Nespu on March 20, 2018 04:00 PM

The /usr/sbin/alternatives tool manages different software packages that provide the same functionality, for example different versions of the JRE and JDK. The manual says it creates, removes, maintains and displays information about the symbolic links comprising the alternatives system. The alternatives system is a reimplementation of the Debian alternatives system.

As a Java developer I need different kinds of JRE and JDK runtime environments during development and testing, so this functionality is very suitable for me.

Which JRE or JDK package do I need, RPM or tarball?

First of all, beware of using the Oracle RPM, as it will be problematic alongside the default OpenJDK package. So grab the tarball from the official Java website (click on the Download button), then unpack it to the /usr/java/ directory:

$ cd /usr/java/
$ sudo tar zxvf jre-9.0.4_linux-x64_bin.tar.gz
$ sudo tar zxvf jdk-8u161-linux-x64.tar.gz
$ sudo rm *.gz

Adding an alternative: another version of java and javac

After you have extracted the tarball, use the --install action to add it to the java group:

$ sudo alternatives --install /usr/bin/java java /usr/java/jre1.8.0_161/bin/java 1

The same goes for the javac group. Don’t forget to change the path to the JDK:

$ sudo alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_161/bin/javac 1

It is better to symlink the most recent version as default and latest:

$ sudo ln -s /usr/java/jdk1.8.0_161/ /usr/java/latest
$ sudo ln -s /usr/java/latest /usr/java/default

So you are supposed to get something like this:

$ ls -l /usr/java | grep -E '(default|latest)'
lrwxrwxrwx. 1 root root   16 Aug 29  2017 default -> /usr/java/latest
lrwxrwxrwx  1 root root   23 Mar 20 14:00 latest -> /usr/java/jdk1.8.0_161/

Selecting alternative

Now you can choose which java and javac version to use :

$ sudo alternatives --config java
$ sudo alternatives --config javac

(Enter a number from the selection list, or press Enter to keep the current selection.)

Now you have the JRE and JDK installed, possibly in multiple versions, and you can easily configure your system to use one or another with the alternatives command.

Thanks for reading, see ya!

Can a GSoC project beat Cambridge Analytica at their own game?

Posted by Daniel Pocock on March 20, 2018 12:15 PM

A few weeks ago, I proposed a GSoC project on the topic of Firefox and Thunderbird plugins for Free Software Habits.

At first glance, this topic may seem innocent and mundane. After all, we all know what habits are, don't we? There are already plugins that help people avoid visiting Facebook too many times in one day; what difference will another one make?

Yet the success of companies like Facebook and those that prey on their users, like Cambridge Analytica (who are facing the prospect of a search warrant today), is down to habits: in other words, the things that users do over and over again without consciously thinking about it. That is exactly why this plugin is relevant.

Many students have expressed interest and I'm keen to find out if any other people may want to act as co-mentors (more information or email me).

One Facebook whistleblower recently spoke about his abhorrence of the dopamine-driven feedback loops that keep users under a spell.

The game changer

Can we use the transparency of free software to help users re-wire those feedback loops for the benefit of themselves and society at large? In other words, instead of letting their minds be hacked by Facebook and Cambridge Analytica, can we give users the power to hack themselves?

In his book The Power of Habit, Charles Duhigg lays bare the psychology and neuroscience behind habits. While reading the book, I frequently came across concepts that appeared immediately relevant to the habits of software engineers and also the field of computer security, even though neither of these topics is discussed in the book.

where is my cookie?

Most significantly, Duhigg finishes with an appendix on how to identify and re-wire your habits and he has made it available online. In other words, a quickstart guide to hack yourself: could Duhigg's formula help the proposed plugin succeed where others have failed?

If you could change one habit, you could change your life

The book starts with examples of people who changed a single habit and completely reinvented themselves. For example, an overweight alcoholic and smoker who became a super-fit marathon runner. In each case, they show how the person changed a single keystone habit and everything else fell into place. Wouldn't you like to have that power in your own life?

Wouldn't it be even better to share that opportunity with your friends and family?

One of the challenges we face in developing and promoting free software is that every day, with every new cloud service, the average person in the street, including our friends, families and co-workers, is ingesting habits carefully engineered for the benefit of somebody else. Do you feel that asking your friends and co-workers not to engage you in these services has become a game of whack-a-mole?

Providing a simple and concise solution, such as a plugin, can help people to find their keystone habits and then help them change them without stress or criticism. Many people want to do the right thing: if it can be made easier for them, with the right messages, at the right time, delivered in a positive manner, people feel good about taking back control. For example, if somebody has spent 15 minutes creating a Doodle poll and sending the link to 50 people, is there any easy way to communicate your concerns about Doodle? If a plugin could highlight an alternative before they invest their time in Doodle, won't they feel better?

If you would like to provide feedback or even help this project go ahead, you can subscribe here and post feedback to the thread or just email me.

cat plays whack-a-mole

syslog-ng at SCALE 2018

Posted by Peter Czanik on March 20, 2018 10:10 AM

It is the fourth year that syslog-ng has participated in the Southern California Linux Expo or, as it is better known to many, SCALE, the largest Linux event in the USA. In many ways it is similar to FOSDEM in Europe; however, SCALE also focuses on users and administrators, not just developers. It was a pretty busy four days for me.

The conference

The weather in always sunny California was far from perfect this time but I didn’t mind. I spent all my time at the conference and I loved every minute of it. The Expo was great – as always – with most of my favorite open source projects collected in a single location. I bought a nice “Release is coming” t-shirt at the openSUSE booth, many stickers at Fedora, and the Containers coloring book at Red Hat. And of course also some FreeBSD goodies. Next to ARM and x86 hardware, this year a POWER9 machine from Raptor Engineering was also on display.


I had to make some tough choices when it came to attending talks, as there were many interesting tracks: community, embedded, monitoring (including logging), security, and others. I do not want to list all the talks I attended (check my twitter feed if you are interested), so I will just pick one: Marketing your open source product by Deirdre Straughan. Being part of the documentation team at Balabit, I was very happy to hear her emphasis on the key importance of good documentation. 🙂

Logging Docker using syslog-ng

As the syslog-ng image on the Docker Hub reached almost two million pulls recently, the topic of my talk this year was Logging Docker using syslog-ng. As usual, I started my talk with an overview of logging and syslog-ng functionality, followed by a quick introduction to the syslog-ng configuration language.

When I arrived at the topic of containers, I went from easy to progressively more difficult topics. Migrating your central log server into a container is really easy, even if you only know the basics of containerization. Collecting logs from the host machine when syslog-ng is running in a container needs a bit more preparation though: you need to map a few extra directories from the host system and use extra formatting or a NoSQL database if you do not want to lose important information. And reading log messages from other containers needs even more prior design. Best of all, all the previously listed methods can be freely combined, so the possibilities are practically endless.

Before finishing my talk, I showed a few interesting uses of syslog-ng:

  • In the age of PCI-DSS and GDPR, the capability to remove sensitive information from log messages can come in handy. syslog-ng enables you to do that. What’s more, there is an option in syslog-ng to replace the sensitive part with a hash (instead of simply overwriting it with a constant). That facilitates analyzing sessions in log data without leaking sensitive information.
  • Parsing messages also helps you find interesting data more easily, like listing the number of SSH connections from clients on the network.
  • Using the key-value and GeoIP parsers on your firewall logs, you can easily display the location of your intruders on a map using Kibana.

I measure the success of my talks not by the number of listeners but by the number of questions. The SCALE audience is always fantastic from this point of view: I was answering questions for about forty minutes after my presentation.

I briefly discussed Atomic Host, a specialized container host, during my talk without knowing that the track was organized by the Red Hat container team. I learned about it only after my session was over, when I received one of my favorite speaker gifts: a box of sweets (or rather sours :-)).

Further reading

My talk was recorded and hopefully will be posted in the coming weeks. If you want to read more in-depth information about containers, there is a white paper on this topic, created from my related blog posts. You can access it on the Balabit website (note: it requires registration): https://pages.balabit.com/logging-in-docker-using-syslog-ng.html

The post syslog-ng at SCALE 2018 appeared first on syslog-ng Blog.

GNOME 3.28 released & coming to Fedora 28

Posted by Fedora Magazine on March 20, 2018 02:21 AM

Last week, the GNOME project announced the release of GNOME 3.28. This major release of the GNOME desktop is the default desktop environment in the upcoming release of Fedora 28 Workstation. 3.28 includes a wide range of enhancements, including updates to Files (nautilus), Contacts, Calendar, Clocks and the on-screen keyboard. Additionally, the new application Usage is added to “make it easy to diagnose and resolve performance and capacity issues.”

The new Usage application in GNOME 3.28

Application Updates in GNOME 3.28

GNOME 3.28 provides updated versions of many of the GNOME default applications. The ability to “star” items is added to both the Files and the Contacts applications. This allows the user to star an item — be it a file, folder, or a contact — for quick access later. Calendar now provides a neater month view, and weather updates are displayed alongside your appointments.

Updated Calendar application in GNOME 3.28 with weather forecasts built in


Updates to Cantarell

Cantarell is the default interface font in both GNOME and Fedora Workstation. In this updated version of GNOME, Cantarell is refreshed, with updated glyph shapes and spacing. Additionally, there are two new weights: light and extra bold.

Specimen of the new Cantarell in GNOME 3.28

Read more about this release

There are many more changes and enhancements in this major version of GNOME. Check out the release announcement and the release notes from the GNOME Project for more information. Also, check out this awesome rundown of the best new GNOME 3.28 features from OMGUbuntu.

Screenshots in this post are from the GNOME 3.28 Release Notes

Explaining disk speeds with straws

Posted by Stephen Smoogen on March 20, 2018 12:25 AM
One of the most common user complaints in enterprise systems is 'why can't I have more disk space?' Users look at the cost of disks on Amazon or Newegg and see that they could get an 8 TB hard disk for $260.00, but the storage administrator says it will cost $26,000.00 for the same amount.

Years ago, someone even bought me a disk and had it delivered to my desk to 'fix' the storage problem. They thought they were being funny, so I thanked them for the paperweight. I then handed it back and tried to explain why one drive was not going to help... I found that the developer's eyes glazed over as I talked about drive RPM speeds, cache sizes, the number of commands an ATA read/write uses versus SCSI, etc. All of these are important, but they are not terms useful for a person who just wants to never delete an email.

The best analogy I have is that you have a couple of 2-litre bottles of Coca-Cola (fill in Pepsi, Fanta or Mr Pibb as needed) and a cocktail straw. You can only fill one Coke bottle from the other with that straw. Sure, the bottle is big enough, but it takes a long time to move the soda from one to the other. That is what one SATA disk drive is like.

The next step is to add more disks and make a RAID array. Now you can get a bunch of empty Coke bottles and empty out that one bottle through multiple cocktail straws. Things move faster, but it still takes a long time, and you really can't use each of the large bottles as much as you would like, because emptying them out will be pretty slow via the cocktail straw.

The next sized solution is regular drinking straws with cans. The straws are bigger, but the cans are smaller.. you can fill the cans up or empty them without as much time waiting in a queue. However, you need a lot more of them to equal the original bottle you are emptying. This is the SAS solution, where the disks are smaller and faster, with much better throughput because of that. It is a tradeoff in that 15k drives use older technologies, so they store less data. They also have larger caches and smarter OSes on the drive to make the straw bigger.

Finally there is the newest solution, which would be a garden hose connected via a balloon to a coffee cup. This is the SAS SSD solution. The garden hose allows a large amount of data to go up and down the pipe, the balloon is how much you can cache in case you are too fast somewhere in writes or reads, and the coffee cup is because it is expensive and there isn't a lot of space. You need a LOT of coffee cups compared to soda cans or 2-litre bottles.

Most enterprise storage is some mixture of all of these to match the use case needs.

  • SATA RAID is useful for backups. You are going to sequentially read/write large amounts of data to some other place. The straws don't need to be big per drive, and you don't worry about how much is backed up. The cost per TB is of course the smallest.
  • SAS RAID is useful for mixed-user shared storage. The reads and writes to this need larger straws because programs have different IO patterns. The cost per TB is usually an order of magnitude or two greater, depending on other factors like how much redundancy you want.
  • SSD RAID is useful for fast shared storage. It is still more expensive than SAS RAID.
And now to break the analogy completely. 

Software-defined storage would be where you are using the cocktail straws with Coke bottles, but you have spread them around the building. Each time coke gets put into one, a hose spreads that coke around so each block of systems is equivalent. In this case the cost per system has gone down, but there needs to be a larger investment in the networking technology tying the servers together. [A 1 Gbit backbone network is like a cocktail straw between systems, a 10 Gbit backbone is like a regular straw, and 40G/100G are the hoses.]

Now my question is.. has anyone done this in real life? It seems crazy enough that someone has made a video.. but my google-fu is not working tonight.

Fedora 27 Release Party Novi Sad

Posted by nmilosev on March 19, 2018 11:33 AM


Once again, we had an awesome Fedora Release Party at the University of Novi Sad! :)

Thanks everyone for coming; I hope it was informative and useful for you. I would also like to thank the speakers, Doni, Igor and Marko, for sharing their experience with us.

The image gallery can be found here (still updating): https://nmilosev.github.io/f27rpns-gallery/

And the talks are here (still updating also): https://github.com/nmilosev/f27rpns-gallery

Fedora 28 is just around the corner, so see you all soon! :)

SELinux should and does BLOCK access to Docker socket

Posted by Dan Walsh on March 19, 2018 10:19 AM

I get lots of bugs from people complaining about SELinux blocking access to the Docker socket.  For example https://bugzilla.redhat.com/show_bug.cgi?id=1557893

The aggravating thing is, this is exactly what we want SELinux to prevent. If a container process got to the point of talking to /var/run/docker.sock, you know there is a serious security issue. Giving a container access to the Docker socket means you are giving it full root on your system.

A couple of years ago, I wrote about why giving docker.sock to non-privileged users is a bad idea.

Now I am getting bug reports about allowing containers access to this socket.

Access to docker.sock is the equivalent of sudo with NOPASSWD, without any logging. You are giving the process that talks to the socket the ability to launch a process on the system as full root.

Usually people are doing this because they want the container to perform benign operations, like listing which containers are on the system or looking at the container logs. But Docker does not have a nice RBAC system; you basically get full access or no access. I choose to default to NO ACCESS.
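To make the "full access or no access" point concrete, here is a minimal sketch, using only the Python standard library and assuming the calling process has been given access to /var/run/docker.sock, showing that anything holding the socket can drive the entire Engine API:

import http.client
import socket

class DockerSocketConnection(http.client.HTTPConnection):
    # Speak HTTP over the Docker Unix socket instead of TCP.
    def __init__(self):
        super().__init__("localhost")

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect("/var/run/docker.sock")

conn = DockerSocketConnection()
# Every Engine API endpoint is reachable: listing containers here, but
# creating a privileged, host-mounted container works the same way.
conn.request("GET", "/containers/json")
print(conn.getresponse().read().decode())

The same connection could just as easily POST a privileged container definition, which is exactly the root-equivalent breakout described above.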

If you need to give a container full access to the system, then run it as a --privileged container or disable SELinux separation for the container.

podman run --privileged ...
docker run --privileged ...

podman run --security-opt label:disable ...
docker run --security-opt label:disable ...

Run it privileged

There is a discussion going on in the Moby GitHub project about breaking out more security options, specifically adding a flag to container runtimes to allow users to specify whether they want kernel file systems in the container to be read-only (/proc, /sys, ...).

I am fine with doing this, but my concern is that people want to make little changes to the security of their containers, and at a certain point you allow full breakout, like above where you allow a container to talk to docker.sock.

Security tools are being developed to search for things like containers running as --privileged, but they might not understand that --security-opt selinux:disable -v /run:/run is the SAME THING from a security point of view. If it is simple to break out of container confinement, then we should just be honest and run the container with full privilege (--privileged).

Update on the Meltdown & Spectre vulnerabilities

Posted by Fedora Magazine on March 19, 2018 08:00 AM

January saw the announcement of a series of critical vulnerabilities called Spectre and Meltdown. The nature of these issues meant the solutions were complex and required fixing delicate code. The initial fix for Meltdown on x86 was KPTI, which was available almost immediately. Developing mitigations for Spectre was more complex. Other architectures had to assess their vulnerability status as well, and get mitigations in where needed. Now that a bit of time has passed, what is the exposure on Fedora?

Meltdown and Spectre mitigation coverage

The mitigation coverage for Spectre and Meltdown is in a pretty good state. For the x86 architecture, KPTI mitigates the Meltdown vulnerability (CVE-2017-5754), and the retpoline fixes mitigate Spectre variant 2 (CVE-2017-5715). Spectre variant 1 (CVE-2017-5753) required patching specific vulnerable code bits, and known problem areas have been mitigated upstream as well. Additionally, ARM coverage landed in the 4.15.4 kernel updates for Fedora. Power architectures have initial coverage in Fedora kernel version 4.14.15.

All of this coverage is still being fine-tuned. Initial rounds of mitigation development aimed to plug the holes as quickly as possible so that users were not exposed. Once that happened, developers could pay more attention to tuning the mitigations for performance.

With mitigation where it currently stands, the Fedora kernel team has closed the tracking bugs for these CVEs. It is still important to keep your kernel updated as the initial mitigations are fine-tuned. Optimizations are still rolling in, and probably will be for the foreseeable future. As many of these mitigations depend on CPU microcode updates, it is a very good idea to keep firmware updated where possible.
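To check where a particular machine stands, recent kernels expose the mitigation status in sysfs. A minimal sketch, assuming a kernel new enough (4.15+, or one with the interface backported) to provide /sys/devices/system/cpu/vulnerabilities:

import glob
import os

# Print the kernel's reported status for each known CPU vulnerability,
# e.g. "meltdown: Mitigation: PTI".
for path in sorted(glob.glob("/sys/devices/system/cpu/vulnerabilities/*")):
    with open(path) as f:
        print(os.path.basename(path) + ": " + f.read().strip())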

Fedora Diversity: 2017 Year in Review

Posted by Fedora Community Blog on March 19, 2018 08:00 AM

2017 was a milestone year for the Fedora Diversity and Inclusion Team. We experienced structural changes, established new directions and mapped our goals to a long-term plan for improving diversity outreach in the Fedora community. The past year included a lot of ‘figuring things out’, including our engagement within the Fedora community and beyond. We have come out wiser, more driven and more committed to our goal than ever. Read on to learn more about our past and current efforts to foster diversity and inclusion in the Fedora community.

Diversity drives Fedora

We talked to Fedora community members to learn more about what they love about the Fedora Project, the community, and their motivations to contribute. We received wonderful answers and got to know our community better. Here is the link to the first edition of our Fedora community video.


Together with Fedora Ambassadors, we showcased these videos at Fedora booths during events and conferences, with an aim to engage a diverse audience. We are especially proud that we could showcase the first video during FLISoL, the biggest event for Free Software in Latin America (and with Spanish subtitles, thanks x3mboy!).

Let us know your feedback through mailing list or IRC.

P.S. – How many new community members did you get to know through the videos?

Commitment to Diversity and Inclusion

Fedora Women Day

For the first time, we organized Fedora Women Day on a global scale. Over the month of September, community members supported by the Diversity Team organized Fedora Women Days in 10 different cities spread over three continents. Locations include Guwahati, Pune, Bangalore, Tirana, Prishtina, Managua, Cusco, Lima and Brno. We had approximately 200 attendees (~40 speakers) with sessions about…

  • Talks and discussion about contributing to open source and Fedora
  • Career opportunities in open source and how to pursue them
  • Networking opportunities including connecting with female contributors in open source communities

A huge thanks to everyone who helped in organizing Fedora Women Day. Hopefully in 2018, we can have some Fedora Women Day celebrations in North America too. Read more about Fedora Women Day celebrations below.

10 Fedora Women Days across the world


Do you want to organize a Fedora Women Day celebration in your local community? Send us an email via our mailing list or ping us on IRC.

LGBTQA Awareness Day

We also organized an open community call on LGBTQA Awareness Day to celebrate the LGBT+ community in Fedora, as well as recognize and discuss the unique challenges they face. Read more about the community call below:

Event Report – May 17, LGBTQA Awareness Day


Outreachy internship program

We had four Outreachy interns over the past year working on development for Bodhi, Fedora Hubs, Design, and more. Read more about what our current Outreachy interns are working on below.

Outreachy 2017: Meet the interns!


As always, we are committed to supporting programs like Outreachy that bring people from underrepresented groups into FOSS development. In that direction, the Fedora Diversity Team sponsored Rails Girls Summer of Code for the first time in its upcoming 2018 cycle.

Planning and strategy

The first-ever Diversity FAD took place in early 2017 and was a great opportunity for the Diversity Team to spend time looking at how we can build more inclusive environments for Fedora contributors. We also tackled other issues, like understanding who makes up the Fedora community. Our team used this valuable time to work on these issues more personally and intently than IRC or mailing lists allow. Read more about it in our FAD report below.

Mission to understand: Fedora Diversity FAD 2017


Hopefully, the community will be able to see the results of our FAD soon, such as the demographic survey.

Looking forward to 2018

Drawing from our experiences in 2017, we better understand how to reach our goal of a diverse and inclusive Fedora community. Here are some D&I initiatives you can expect to see in 2018:

  • Improving accessibility and inclusiveness for Fedora events
  • Fedora Classroom sessions on topics related to Diversity and Inclusion
  • Fedora Appreciation Week (Not sure what this is? Interested? Stay tuned for more info!)

Our Pagure repository contains an updated list of tasks we are currently working on.

Want to get involved?

Group photo from the Fedora Diversity work session at Flock 2017

We are always looking to involve more community members in our Diversity and Inclusion efforts. Say hello on our mailing list or IRC channel, #fedora-diversity.

Don’t have much time but still want to help?

Give us feedback about the community videos or any of our other initiatives. We would love to hear from you.

Special thanks

We want to thank Brian Exelbierd (bex), our F-CAIC, for all the support and helpful advice throughout the year. Thanks also to Marina Zhurakhinskaya and Laura Abbott, the Outreachy coordinators for Fedora, who manage the whole Outreachy internship process smoothly. And thanks to all the community members who helped organize Fedora Women Day – we know it's never easy to organize an event, so thank you for being a part of this celebration. Last but not least, thanks to the team: we have done so much, but we still have so many things to do!


The post Fedora Diversity: 2017 Year in Review appeared first on Fedora Community Blog.

Critical Firefox vulnerability fixed in 59.0.1

Posted by Fedora Magazine on March 19, 2018 02:34 AM

On Friday, Mozilla issued a security advisory for Firefox, the default web browser in Fedora. This advisory centered around two CVEs, both of which allowed an out-of-bounds memory write while processing Vorbis audio data, leading to arbitrary code execution. CVE-2018-5146 is against the bundled library libvorbis that Firefox ships to process Vorbis audio on most architectures. CVE-2018-5147 is against libtremor, which Firefox bundles for the same task on ARM architectures.

At the same time as the security advisory was issued, Mozilla released Firefox 59.0.1 that fixes these issues.

Updating Firefox in Fedora

At the time of writing, Firefox 59.0.1 (with the security fixes) is heading through the update process in Fedora, and will be in the stable repositories soon. When it reaches the stable repositories, the fixes will be applied during your next system update.

However, if you want to update Firefox now, install the firefox-59.0.1-1 package from the updates-testing repository with the following command:

sudo dnf --enablerepo updates-testing update firefox

Episode 88 - Chat with Chris Rosen from IBM about Container Security

Posted by Open Source Security Podcast on March 18, 2018 08:34 PM
Josh and Kurt talk about container security with IBM's Chris Rosen.

Show Notes

Playing with PicoRV32 on the iCE40-HX8K FPGA Breakout Board (part 2)

Posted by Richard W.M. Jones on March 18, 2018 05:29 PM

It works!

Press ENTER to continue..
Press ENTER to continue..

  ____  _          ____         ____
 |  _ \(_) ___ ___/ ___|  ___  / ___|
 | |_) | |/ __/ _ \___ \ / _ \| |
 |  __/| | (_| (_) |__) | (_) | |___
 |_|   |_|\___\___/____/ \___/ \____|

SPI State:
Select an action:

   [1] Read SPI Flash ID
   [2] Read SPI Config Regs
   [3] Switch to default mode
   [4] Switch to Dual I/O mode
   [5] Switch to Quad I/O mode
   [6] Switch to Quad DDR mode
   [7] Toggle continuous read mode
   [9] Run simplistic benchmark
   [0] Benchmark all configs

Command> 9
Cycles: 0x00f3d36d
Instns: 0x0003df2d
Chksum: 0x5b8eb866

In the first part I got the reverse-engineered Lattice iCE40-HX8K FPGA working with the completely free Project IceStorm toolchain. I wrote a simple Verilog demo which flashed the LEDs.

Today I played with Clifford Wolf's PicoRV32 core. What he's written is actually a lot more sophisticated than I initially realized. There's a simple memory-mapped serial port, a memory-mapped SPI bus, and a bit of interactive firmware so you can test it out (see above).

Rather than using Clifford's build scripts (which compile the riscv32 cross-compiler and run sudo at various points), I wrote a Makefile to build and program the FPGA on Fedora.
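For reference, the IceStorm flow that such a Makefile drives boils down to four commands; a minimal sketch, assuming a source file flash.v and a pin-constraint file flash.pcf (both names are illustrative; the Makefile in the repo is authoritative):

# Synthesize the Verilog to a BLIF netlist with yosys.
yosys -p "synth_ice40 -blif flash.blif" flash.v
# Place and route for the HX8K part in the ct256 package.
arachne-pnr -d 8k -P ct256 -p flash.pcf flash.blif -o flash.asc
# Pack the ASCII bitstream into the binary format.
icepack flash.asc flash.bin
# Program the board over USB.
iceprog flash.bin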

And it works (see above)! Now what we need is a simple 32 bit operating system that could run on it. swapforth seems to be one but as far as I can tell it hasn’t been ported to RV32. Maybe I could port Jonesforth …

Deprecate TCP wrappers Test Day 2018-03-22

Posted by Fedora Community Blog on March 18, 2018 10:01 AM

Thursday, 2018-03-22, is Deprecate TCP wrappers Test Day! This installment of Test Day will focus on testing an important change set.

Why test?

Removing this package from Fedora removes a package from default and minimal installations (daemons such as sshd will no longer depend on it). It also makes configuration more straightforward for new users: no shared files defining access rules that report errors poorly.

Removing the dependency from all packages and retiring the package in a single release will minimize user confusion and avoid inadvertently opening up sensitive services after the update.

We hope to see whether it’s working well enough and catch any remaining issues.

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Deprecate TCP wrappers Test Day 2018-03-22 appeared first on Fedora Community Blog.

Flatpaking application plugins

Posted by Patrick Griffis on March 18, 2018 04:00 AM

Sometimes you simply do not want to bundle everything in a single package such as optional plugins with large dependencies or third party plugins that are not supported. In this post I’ll show you how to handle this with Flatpak using HexChat as an example.

Flatpak has a feature called extensions that allows a package to be mounted within another package. This is used in a variety of ways but it can be used by any application as a way to insert any optional bits. So lets see how to define one (details omitted for brevity):

  "app-id": "io.github.Hexchat",
  "add-extensions": {
    "io.github.Hexchat.Plugin": {
      "version": "2",
      "directory": "extensions",
      "add-ld-path": "lib",
      "merge-dirs": "lib/hexchat/plugins",
      "subdirectories": true,
      "no-autodownload": true,
      "autodelete": true
  "modules": [
      "name": "hexchat",
      "post-install": [
       "install -d /app/extensions"

The exact details of these are best documented in the Extension section of man flatpak-metadata but I’ll go over the ones used here:

  • io.github.Hexchat.Plugin is the name of the extension point and all extensions will have the same prefix.
  • version allows you to have parallel installations of extensions if you break ABI or API. For example, 2 here refers to HexChat 2.x, in case it ever makes a 3.0 with an API break. (It is probably smart in the future to also add the runtime version here, since that breaks ABI as well.)
  • directory sets a subdirectory where everything is mounted relative to your prefix so /app/extensions is where they will go.
  • subdirectories allows you to have multiple extensions and each one will get their own subdirectory. So io.github.Hexchat.Plugin.Perl is mounted at /app/extensions/Perl.
  • merge-dirs will merge the contents of subdirectories that match these paths (relative to their prefix). So for this case the contents of /app/extensions/Perl/lib/hexchat/plugins and /app/extensions/Python/lib/hexchat/plugins will both be in /app/extensions/lib/hexchat/plugins. This allows limiting the complexity of your loader to only need to look in one directory (applications will need to be configured/patched to look there).
  • add-ld-path adds a path, relative to extensions prefix, to the library path so for example /app/extensions/Python/lib/libpython.so can be loaded.
  • no-autodownload will not automatically install all extensions which is the default.
  • autodelete will remove all extensions when the application is removed.

So now that we have defined an extension point, let's make an extension:

  "id": "io.github.Hexchat.Plugin.Perl",
  "branch": "2",
  "runtime": "io.github.Hexchat",
  "runtime-version": "stable",
  "sdk": "org.gnome.Sdk//3.26",
  "build-extension": true,
  "separate-locales": false,
  "appstream-compose": false,
  "build-options": {
    "prefix": "/app/extensions/Perl",
    "env": {
      "PATH": "/app/extensions/Perl/bin:/app/bin:/usr/bin"
  "modules": [
      "name": "perl"
      "name": "hexchat-perl",
      "post-install": [
        "install -Dm644 plugins/perl/perl.so ${FLATPAK_DEST}/lib/hexchat/plugins/perl.so",
        "install -Dm644 --target-directory=${FLATPAK_DEST}/share/metainfo data/misc/io.github.Hexchat.Plugin.Perl.metainfo.xml",
        "appstream-compose --basename=io.github.Hexchat.Plugin.Perl --prefix=${FLATPAK_DEST} --origin=flatpak io.github.Hexchat.Plugin.Perl"

So again going over some key points quickly: id has the correct prefix, branch refers to the extension version, build-extension should be obvious, and runtime is what defines the extension point. A less obvious thing to note is that your extension's prefix will not be in $PATH or $PKG_CONFIG_PATH by default, so you may need to set them (see build-options in man flatpak-manifest). $FLATPAK_DEST is also defined as your extension's prefix, though not everything expands variables.
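To try the whole thing out locally, you can build both manifests with flatpak-builder and install from the resulting repo; a rough sketch, assuming the manifest file names match the IDs above:

# Build the application and the extension into a local OSTree repo.
flatpak-builder --force-clean --repo=repo app-build io.github.Hexchat.json
flatpak-builder --force-clean --repo=repo ext-build io.github.Hexchat.Plugin.Perl.json
# Add the unsigned local repo and install both refs per-user.
flatpak remote-add --user --no-gpg-verify hexchat-test repo
flatpak install --user hexchat-test io.github.Hexchat
flatpak install --user hexchat-test io.github.Hexchat.Plugin.Perl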

While not required, you should also install appstream metainfo for easy discoverability. For example:

<?xml version="1.0" encoding="UTF-8"?>
<component type="addon">
  <name>Perl Plugin</name>
  <summary>Provides a scripting interface in Perl</summary>
  <url type="homepage">https://hexchat.github.io/</url>
</component>

The add-on will then be shown in GNOME Software.


Fedora 27 : Testing the new Django web framework .

Posted by mythcat on March 17, 2018 09:34 PM
Today I tested the Django web framework version 2.0.3 with Python 3 on my Fedora 27 distro.
The main reason was to see whether Django and Fedora work well together.
I used pip to install it inside an activated Python virtual environment. First you need to create a folder for your project. I made one named django_001.
$ mkdir django_001
$ cd django_001
The next step is to create and activate a Python virtual environment for your project.
$ python3 -m venv django_001_venv
$ source django_001_venv/bin/activate
Into this virtual environment named django_001_venv you will install the Django web framework.
pip install django
If you have problems with an outdated pip, update that tool first. Now start a Django project named django_test.
$ django-admin startproject django_test
$ cd django_test
$ python3 manage.py runserver
Open in your web browser.
The result is this:

If you try to use the admin page with a user and password, you will see errors.
One of the most common sources of problems in Django is the settings. Let's look at some of the files in the project:

  • manage.py - runs project-specific tasks (django-admin is used to execute system-wide Django tasks);
  • __init__.py - a file that allows Python packages to be imported from the directory where it is present; a generic file used in almost all Python applications;
  • settings.py - the configuration settings for the Django project;
  • urls.py - contains the URL patterns for the Django project;
  • wsgi.py - the WSGI configuration properties for the Django project (you don't need to set up WSGI to develop Django applications).

The next step is to create the first Django application.
$ python manage.py startapp django_blog
This makes a folder named django_blog inside the main django_test folder. Inside the main django_test folder you have another django_test folder containing the settings.py file. Add the django_blog application to INSTALLED_APPS in that settings.py file, as shown below.
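A minimal sketch of that settings.py change (only the relevant setting is shown; the surrounding defaults come from startproject):

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django_blog',  # our new application
]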
Let's fix some issues with the admin and the django_blog application.
From the main django_test folder containing manage.py, run:
$ python3 manage.py migrate
$ python3 manage.py createsuperuser

This fixes the Django setup and lets you add your superuser using the admin page:

The next task is to create the website itself (see the django_blog folder) using the Django web framework.
I won't cover building out django_blog here, but you can follow this tutorial link.

Playing with PicoRV32 on the iCE40-HX8K FPGA Breakout Board (part 1)

Posted by Richard W.M. Jones on March 17, 2018 11:23 AM

It’s now possible to get a very small 32 bit RISC-V processor onto the reverse-engineered Lattice iCE40-HX8K FPGA using the completely free Project IceStorm toolchain. That’s what I’ll be looking at in this series of two articles.

I bought my development board from DigiKey for a very reasonable £41.81 (including tax and next day delivery). It comes with everything you need. This FPGA is very low-end [datasheet (PDF)], containing just 7680 LUTs, but it does have 128 kbits of static RAM, and the board has an oscillator+PLL that can generate 2-12 MHz, a few LEDs and a bunch of GPIO pins. The board is programmed over USB with the supplied cable. The important thing is the bitstream format and the probable chip layout have been reverse-engineered by Clifford Wolf and others. All the software to drive this board is available in Fedora:

dnf install icestorm arachne-pnr yosys emacs-verilog-mode

My first job was to write the equivalent of a “hello, world” program — flash the 8 LEDs in sequence. This is a good test because it makes sure that I’ve got all the tools installed and working and the Verilog program is not too complicated.

// -*- verilog -*-
// Flash the LEDs in sequence on the Lattice iCE40-HX8K.

module flash (input clk, output reg [7:0] leds);
   // Counter which counts upwards continually (wrapping around).
   // We don't bother to initialize it because the initial value
   // doesn't matter.
   reg [18:0] counter;
   // This register counts from 0-7, incrementing when the
   // counter is 0.  The output is wired to the LEDs.
   reg [2:0] led_select;

   always @(posedge clk) begin
      counter <= counter + 1;

      if (counter[18:0] == 0) begin
         led_select <= led_select + 1;
      end
   end

   // Finally wire each LED so it signals the value of the led_select
   // register.
   genvar i;
   for (i = 0; i < 8; i=i+1) begin
      assign leds[i] = i == led_select;
   end
endmodule // flash

Video of the LEDs flashing: https://www.youtube.com/watch?v=Q5pDgXuywHg

It looks like the base clock frequency is 2 MHz.

The fully working example is in this repo: https://github.com/rwmjones/icestorm-flash-leds

In part 2 I’ll try to get PicoRV32 on this board.

Fedora Atomic Workstation: Ruling the commandline

Posted by Matthias Clasen on March 16, 2018 06:49 PM

In my recent posts, I’ve mostly focused on finding my way around with GNOME Builder and using it to do development in Flatpak sandboxes. But I am not really the easiest target audience for an IDE like GNOME Builder, having spent most of my life on the commandline with tools like vim and make.

So, what about the commandline in an Atomic Workstation environment? There are many container tools, like buildah, atomic, oc, podman, and so on. I am not going to talk about these, since I don't know them very well, and they are covered elsewhere, e.g. on www.projectatomic.io.

But there are a few commands that are essential to life on the Atomic Workstation: rpm-ostree and flatpak.


First of all, there’s rpm-ostree, which is the commandline frontend to the rpm-ostreed daemon that manages the OS image(s) on the Atomic Workstation.

You can run

rpm-ostree status

to get some information about your OS image (and the other images that may be present on your system). And you can run

rpm-ostree upgrade

to get the latest update for your OS image (the terminology clash here is a bit unfortunate; rpm-ostree calls an upgrade what most Linux distros and packaging tools call an update).

You can run this command as a normal user in a terminal, and rpm-ostreed will present you with a polkit dialog for its privileged operations. Recently, rpm-ostreed has also gained the ability to check for and deploy upgrades automatically.

An important thing to keep in mind is that rpm-ostree never changes your running system. You have to reboot into the new image to see the changes, so

systemctl reboot

should be in your repertoire of commands as well. Alternatively, you can use the --reboot option to tell rpm-ostree to reboot when the upgrade command completes.
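In other words, the whole update-and-reboot cycle can be a single command:

rpm-ostree upgrade --reboot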


The other essential command is flatpak. Where rpm-ostree controls your OS image, flatpak rules the applications. flatpak has many commands that are worth exploring; I'll only mention the most important ones here.

It is quite common to have more than one source for flatpaks enabled.

flatpak remotes

lists them all. If you want to find applications, then

flatpak search

will do that for you, and

flatpak install

will let you install what you found. An important detail to point out here is that applications can be installed system-wide (in /var) or per-user (in ~/.local/share). You can choose the location with the --user and --system options. If you choose to install system-wide, you will get a polkit prompt, since this is a privileged operation.
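For example, a per-user install of gitg might look like this (the remote name flathub is an assumption; use whatever flatpak remotes reports on your system):

flatpak install --user flathub org.gnome.gitg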

After installing applications, you should keep them up-to-date by installing updates. The most straightforward way to do so is to just run

flatpak update

which will install available updates for all applications. To just check if updates are available, you can use

flatpak remote-ls --updates

Launching applications

Probably the most important thing you will want to do with flatpak is to run applications. Unsurprisingly, the command to do so is called run, and it expects you to specify the unique application ID:

flatpak run org.gnome.gitg

This is certainly a departure from the traditional commandline, and could be considered cumbersome (even though it has bash completion for the application ID).

Thankfully, flatpak has recently gained a way to recover the familiar interface. It now installs shell wrappers for the flatpak run command in ~/.local/share/flatpak/bin. After adding that directory to your PATH, you can run gitg like this:

org.gnome.gitg
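If that directory is not yet in your PATH, a line like this in your shell profile (assuming bash) takes care of it:

export PATH="$HOME/.local/share/flatpak/bin:$PATH"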
If (like me), you are still not satisfied with this, you can add a shell alias to get the traditional command name back:

alias gitg=org.gnome.gitg

Now gitg works again, as it used to. Nice!


Generating a list of URL patterns for OpenStack services.

Posted by Adam Young on March 16, 2018 05:35 PM

Last year at the Boston OpenStack summit, I presented on an idea of using URL patterns to enforce RBAC. While this idea is on hold for the time being, a related approach is moving forward, building on top of application credentials. In this approach, the set of acceptable URLs is added to the role, so it is an additional check. This makes it a lower-barrier-to-entry approach.

One thing I requested on the specification was to use the same mechanism as I had put forth on the RBAC in Middleware spec: the URL pattern. The set of acceptable URL patterns will be specified by an operator.

The user selects the URL pattern they want to add as a “white-list” to their application credential. A user could further specify a dictionary to fill in the segments of that URL pattern, to get a delegation down to an individual resource.

I wanted to see how easy it would be to generate a list of URL patterns. It turns out that, for the projects that are using the oslo-policy-in-code approach, it is pretty easy:

cd /opt/stack/nova
 . .tox/py35/bin/activate
(py35) [ayoung@ayoung541 nova]$ oslopolicy-sample-generator  --namespace nova | egrep "POST|GET|DELETE|PUT" | sed 's!#!!'
 POST  /servers/{server_id}/action (os-resetState)
 POST  /servers/{server_id}/action (injectNetworkInfo)
 POST  /servers/{server_id}/action (resetNetwork)
 POST  /servers/{server_id}/action (changePassword)
 GET  /os-agents
 POST  /os-agents
 PUT  /os-agents/{agent_build_id}
 DELETE  /os-agents/{agent_build_id}

Similar for Keystone

$ oslopolicy-sample-generator  --namespace keystone  | egrep "POST|GET|DELETE|PUT" | sed 's!# !!' | head -10
GET  /v3/users/{user_id}/application_credentials/{application_credential_id}
GET  /v3/users/{user_id}/application_credentials
POST  /v3/users/{user_id}/application_credentials
DELETE  /v3/users/{user_id}/application_credentials/{application_credential_id}
PUT  /v3/OS-OAUTH1/authorize/{request_token_id}
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}/roles/{role_id}
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}/roles
DELETE  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}

The output of the tool is a little sub-optimal: oslo policy enforcement used to be done using only JSON, and JSON does not allow comments, so I had to scrape the comments out of the YAML output. Ideally, we could tweak the tool to output the URL patterns and the policy rules that enforce them in a clean format.

What roles are used? Turns out, we can figure that out, too:

$ oslopolicy-sample-generator  --namespace keystone  |  grep \"role:
#"admin_required": "role:admin or is_admin:1"
#"service_role": "role:service"

So only admin or service are actually used. On Nova:

$ oslopolicy-sample-generator  --namespace nova  |  grep \"role:
#"context_is_admin": "role:admin"

Only admin.

How about matching the URL pattern to the policy rule?
If I run

oslopolicy-sample-generator  --namespace nova  |  less

In the middle I can see an example like this (# marks removed for clarity):

# Create, list, update, and delete guest agent builds

# This is XenAPI driver specific.
# It is used to force the upgrade of the XenAPI guest agent on
# instance boot.
 GET  /os-agents
 POST  /os-agents
 PUT  /os-agents/{agent_build_id}
 DELETE  /os-agents/{agent_build_id}
"os_compute_api:os-agents": "rule:admin_api"

This is not 100% deterministic, though, as some services, Nova in particular, enforce policy based on the payload.

For example, these operations can be done by the resource owner:

# Restore a soft deleted server or force delete a server before
# deferred cleanup
 POST  /servers/{server_id}/action (restore)
 POST  /servers/{server_id}/action (forceDelete)
"os_compute_api:os-deferred-delete": "rule:admin_or_owner"

Whereas these operations must be done by an admin operator:

# Evacuate a server from a failed host to a new host
 POST  /servers/{server_id}/action (evacuate)
"os_compute_api:os-evacuate": "rule:admin_api"

Both map to the same URL pattern. We tripped over this when working on RBAC in Middleware, and it is going to be an issue with the Whitelist as well.

Looking at the API docs, we can see the difference in the bodies of the operations. The evacuate call has a body like this:

    "evacuate": {
        "host": "b419863b7d814906a68fb31703c0dbd6",
        "adminPass": "MySecretPass",
        "onSharedStorage": "False"

Whereas the forceDelete call has a body like this:

    "forceDelete": null

From these, it is pretty straightforward to figure out which policy to apply, but as of yet, there is no programmatic way to access that.

It would take a little more scripting to try to identify the set of rules that mean a user should be able to perform those actions with a project-scoped token, versus the set of APIs that are reserved for cloud operations. However, just looking for the admin_or_owner rule is sufficient for most to indicate that the operation should be performed using a scoped token. Thus, an end user should be able to determine the set of operations that she can include in a white-list.

Linux - Minimize Spotify to taskbar

Posted by Robbi Nespu on March 16, 2018 04:00 PM


Hey, the last time I used Spotify was around a year ago on my previous company's desktop workstation running Ubuntu 16.04 (as far as I remember). Today I installed Spotify on my Fedora 27. It seems the option to minimize to the tray when closing the window is completely missing from the Spotify settings (last time it was there, but needed to be configured).

I don't like how it works now.. messy. So as a workaround, let's use kdocker to dock Spotify to the tray. It's available for most distros too (Fedora, Arch, Ubuntu, Debian).

Here is some information from the Fedora repo:

$ dnf info kdocker
Installed Packages
Name         : kdocker
Version      : 5.0
Release      : 8.fc27
Arch         : x86_64
Size         : 208 k
Source       : kdocker-5.0-8.fc27.src.rpm
Repo         : @System
From repo    : fedora
Summary      : Dock any application in the system tray
URL          : https://github.com/user-none/KDocker
License      : GPLv2+
Description  : KDocker will help you dock any application in the system tray. This means you
             : can dock OpenOffice, XMMS, Firefox, Thunderbolt, Eclipse, anything! Just point
             : and click. Works for LXQT/KDE and GTK/GNOME (In fact it should work for most
             : modern window managers that support NET WM Specification for instance).
             : All you need to do is start KDocker and select an application using the mouse
             : and lo! the application gets docked into the system tray. The application can
             : also be made to disappear from the task bar.

To install kdocker on Fedora, just run this command:

$ sudo dnf install kdocker -y

or if you are using Ubuntu / Debian:

$ sudo apt-get install kdocker -y

Now edit the file /usr/share/applications/spotify.desktop and change the Exec line:

Exec=kdocker -q -o -l -i /usr/share/icons/Papirus/64x64/apps/spotify.svg spotify %U

Below is the relevant part of my spotify.desktop file. Please take note that you need to change /usr/share/icons/Papirus/64x64/apps/spotify.svg to your own path to the Spotify icon:

[Desktop Entry]
GenericName=Music Player
Comment=Spotify streaming music client
Exec=kdocker -q -o -l -i /usr/share/icons/Papirus/64x64/apps/spotify.svg spotify %U
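One caveat: a package update may overwrite this system-wide .desktop file. To make the change more durable, you can copy the file into your per-user applications directory (which overrides the system one) and edit the copy instead:

$ cp /usr/share/applications/spotify.desktop ~/.local/share/applications/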

Now close any running instance of Spotify and launch it again via the menu; you will notice Spotify automatically minimizes to the tray. Double-click the icon and kdocker will bring up the Spotify window for you, and it will minimize again when you are done or click minimize.

Nice! That's all, see you next round ~

Ramblings about long ago and far away

Posted by Stephen Smoogen on March 16, 2018 03:28 PM
My first job outside of college in 1994 was working at Los Alamos National Labs as a Graduate Research Assistant. It was supposed to be a post where I would use my bachelor's in Physics degree for a year until I became a graduate student somewhere. The truth was that I was burnt out of University and had little urge to go back. I instead used my time to learn much more about Unix system administration. It turned out the group I worked on had a mixture of SGI Irix's, Sun Sparcstations, HP, Convex, and I believe AIX. The systems had been run by graduate students for their professors and needed some central management. While I didn't work for the team that was doing that work, I spent more and more time working with them to get that in place. After a year, it was clear I was not going back to Physics, and my old job was ending. So the team I worked on gave me a reference to another place at the Lab where I began work. 

This network had even more Unix systems, as they had NeXT cubes, old Sun boxes, Apollo, and some others I am sure to have forgotten. All of them needed a lot of love and care, as they had been built for various PhDs and postdocs for various needs and then forgotten. My favorite box was one where the owner required that nearly every file was set 777. I had multiple emails which echoed every complaint people have come up with about SELinux in the last decade. If there was some problem on the system it was because it had a permission set.. and until it was shown it didn't work at 777 you could look at it being something else. [The owner was also unbelievably brilliant in other ways.. but hated arbitrary permission models.]

Any case, I got a lot of useful experience on all kinds of Unix systems, user needs, and user personalities. I also got to use Linux Softland Linux Systems (SLS) on a 486 with 4 MB of RAM running the linux kernel 0.99.4? and learn all kinds of things about PC hardware versus 'Real Computers'. The 486 was really an overclocked 386 with some added instructions that had been originally a Cyrix DX33 that had been relabeled with industrial whiteout as a 40MHz. It sort of worked at 40Mhz but was reliable only at 20Mhz. The issues with getting deals from Computer magazines.. sure the guy in the next apartment worked great.. mine was a dud.

I had originally run MCC (Manchester Computer Center Interim Linux) in college but when I moved it was easier to find a box of floppies with SLS so I had installed that on the 486. I would then download software source code from the internet and rebuild it for my own use using all the extra flags I could find in GCC to make my 20Mhz system seem faster. I instead learned that most of the options didn't do anything on i386 Linux at the time and most of my reports about it were probably met by eye-rolls with the people at Cygnus. My supposed goal was to try and set up a MUD so I could code up a text based virtual reality. Or to get a war game called Conquer working on Linux. Or maybe get xTrek working on my system. [I think I mostly was trying to become a game developer by just building stuff versus actually coding stuff. I cave-man debugged a lot of things using stuff I had learned in FORTRAN but it wasn't actually making new things.]

For years, I looked back on that time and thought it was a complete waste of time as I should have been 'coding' something. However I have come to realize I learned a lot about the nitty-gritty of hardware limitations. A 9600 baud Modem is not going to keep up with people on Ethernet playing xTrek. Moving it to a 56k modem later isn't going to keep up with a 56k partial T1. The numbers are the same but they are counting different things. A 5400 RPM IDE hard-drive is never going to be as good as 5400 RPM SCSI disks even if it is larger. 8 MB on a Sparc was enough for a MUD but on a PC it ran into problems because the CPU and MMU were not as fast or 'large'. 

All of this later became useful years later when I worked at Red Hat between 1997 and 2001. The customers at that time were people who had been using 'real Unix' hardware and were at times upset about how Linux didn't act the same way. In most cases it was the limitations of the hardware they had bought to put a system together, and by being able to debug that and recommend replacements, things improved. Being able to compare how a Convex used disks or an SGI graphics to the limitations of the old ISA and related buses helped show that you could redesign a problem to meet the hardware. [In many cases, it was cheaper to use N PC systems to replicate the behaviour of 1 Unix box but the problem needed to be broken in a way that it worked on N systems versus 1 box.] 

So what does this have to do with Linux today? Well mostly reminders to me to be less cranky with people who are 
  1. Having fun breaking things on their computers. People who want to tear apart their OS and rebuild it to something else are going to run into lots of hurdles. Don't tell them it was a stupid thing. The people at Cygnus may have rolled their eyes but they never told me to stop trying something else. Just read the documentation and see that it says 'undefined behavior' in a lot of places.
  2. Working with tiny computers to do stuff that you do on a bigger computer these days. It is very easy to think that because it is 'easier' and currently more maintainable to do a calculation on 1 large Linux box.. that you are wasting time on dozens of Raspberry Pis to do the same thing. But that is what the mainframers thought of the minicomputers, the minicomputer people thought of the Unix workstations, and the Unix people thought of Linux on PC.
  3. Seeming to spin around, not knowing what they are doing. I spent a decade doing that.. and while I could have been more focused.. I would have missed a lot of things that happened otherwise. Sometimes you need to do that to actually understand who you are. 

Long live Release Engineering

Posted by Dennis Gilmore on March 16, 2018 12:49 PM

My involvement in Fedora goes back to late 2003 early 2004 somewhere as a packager for fedora.us. I started by getting a few packages in to scratch some of my itches and I saw it as a way to give back to the greater open source community. Around FC3 somewhere I stepped up to help in infrastructure to rebuild the builders in plague, the build system we used before koji and that we used for EPEL(Something that I helped form) for awhile until we got external repo support in koji.

I was involved in the implementation of koji in Fedora. I joined OLPC as a build and release engineer, where I oversaw a move of the OS they shipped from FC6 to F8, and laid a foundation for the move to F9. I left OLPC when Red Hat open sourced RHN Satellite as the "spacewalk project"; I joined Red Hat as the release engineer for both. After a brief period there was some reorganisation in engineering that resulted in me handing off the release engineering tasks to someone closer to the engineers working on the code. As a result I worked on Fedora full time, helping Jesse Keating. When he decided to work on the internal migration from CVS to git, I took over as the lead.

During Fedora 14 the transition was made from Jesse to me. For the following 10 Fedora releases I was the primary person doing the work to get Fedora out the door. During that time there has been tremendous change in Fedora, in how we do things and what we deliver. Fedora 14 shipped with 12905 packages, 1 install tree, and a handful of live CDs across two architectures. In Fedora 24 we shipped 19760 packages, 4 install trees, 10 live CDs, Cloud images, Vagrant box images, Container base images, and Atomic Host (images and ostree) across 3 architectures. Along with all the primary deliverables, we had a much more robust and functional Alternative Architecture program running. In Fedora 26 we added aarch64 and ppc64le to primary koji, and in Fedora 27 we added s390x.

During this time we added things like Fedora Editions, and support for many new technologies. The tooling we use to compose Fedora grew in complexity to meet the growing demands and reduce the need for people to do the work manually. Management of the development of the tooling we use was taken over by different teams inside of Red Hat working upstream in the community. Fedora Release Engineering gained a project manager who helped us to grow, become less of a black box, and deal with the growing pains we faced. Fedora Infrastructure provided people to help develop and deploy the ability to build layered images for containers. Pungi, the compose tool, got a major version bump and grew up a lot. We also developed and worked with upstream koji to get new features and functionality into the build system. Release Engineering was one of the first adopters of pagure.

Mohan Boddu joined Red Hat to help with Fedora just before Fedora 25; we worked on Fedora 25 together, and Mohan has been the person primarily responsible for composing and making sure we ship Fedora since. During that time I have been the team lead for the platform team in release engineering inside of Red Hat, spending my time between Fedora and RHEL and making sure that we bring together the way we build and compose both operating systems.

Recently I accepted a job offer to become the manager of a different team inside of Red Hat. My new role will be working on multi-arch support for internal products. As a result of my change in roles, I will be stepping down on Friday the 23rd of March as the lead for release engineering in Fedora so I can focus on my new role. Mohan will be taking over for me; he will be helped by the current project manager, Suzanne Yeghiayan, along with a handful of people who work tirelessly to ensure we can ship everything. All requests for projects should go through pagure or taiga for grooming and prioritisation.

Please give Mohan a big congratulations, and be sure to work with Suzanne to get your requests prioritised.

If you hitch a ride with a scorpion…

Posted by Joe Brockmeier on March 16, 2018 12:19 PM
I haven’t seen a blog post or notice about this, but according to the Twitters, Coverity has stopped supporting online scanning for open source projects. Is anybody shocked by this? Anybody? This comes the same week that Slack announces that they’re ending support for IRC/XMPP gateways — that is, the same tools that persuaded a […]

OSCAL'18, call for speakers, radio hams, hackers & sponsors reminder

Posted by Daniel Pocock on March 16, 2018 08:46 AM

The OSCAL organizers have given a reminder about their call for papers, booths and sponsors (ask questions here). The deadline is imminent but you may not be too late.

OSCAL is the Open Source Conference of Albania. OSCAL attracts visitors from far beyond Albania (OpenStreetmap); as the biggest Free Software conference in the Balkans, it draws people from many neighboring countries including Kosovo, Montenegro, Macedonia, Greece and Italy. OSCAL has a unique character unlike any other event I've visited in Europe, and many international guests keep returning every year.

A bigger ham radio presence in 2018?

My ham radio / SDR demo worked there in 2017 and was very popular. This year I submitted a fresh proposal for a ham radio / SDR booth and sought out local radio hams in the region with an aim of producing an even more elaborate demo for OSCAL'18.

If you are a ham and would like to participate please get in touch using this forum topic or email me personally.

Why go?

There are many reasons to go to OSCAL:

  • We can all learn from their success with diversity. One of the finalists for Red Hat's Women in Open Source Award, Jona Azizaj, is a key part of their team: if she is announced the winner at Red Hat Summit the week before OSCAL, wouldn't you want to be in Tirana when she arrives back home for the party?
  • Warm weather to help people from northern Europe to thaw out.
  • For many young people in the region, their only opportunity to learn from people in the free software community is when we visit them. Many people from the region can't travel to major events like FOSDEM due to the ongoing outbreak of immigration bureaucracy and the travel costs. Many Balkan countries are not EU members and incomes are comparatively low.
  • Due to the low living costs in the region and the proximity to larger European countries, many companies are finding compelling opportunities to work with local developers there and OSCAL is a great place to make contacts informally.

Sponsors sought

Like many free software communities, Open Labs is a registered non-profit organization.

Anybody interested in helping can contact the team and ask them for whatever details you need. The Open Labs Manifesto expresses a strong commitment to transparency which hopefully makes it easy for other organizations to contribute and understand their impact.

Due to the low costs in Albania, even a small sponsorship or donation makes a big impact there.

If you can't make a direct payment to Open Labs, you could also potentially help them with benefits in kind or by contributing money to one of the larger organizations supporting OSCAL.

Getting there without direct service from Ryanair or Easyjet

These notes about budget airline routes might help you plan your journey. It is particularly easy to get there from major airports in Italy. If you will also have a vacation at another location in the region it may be easier and cheaper to fly to that location and then use a bus to Tirana.

Making it a vacation

For people who like to combine conferences with their vacations, the Balkans (WikiTravel) offer many opportunities, including beaches, mountains, cities and even a pyramid (in Tirana itself).

It is very easy to reach neighboring countries like Montenegro and Kosovo by coach in just 3-4 hours. For example, there is the historic city of Prizren in Kosovo and many beach resorts in Montenegro.

If you go to Kosovo, don't miss the Prishtina hackerspace.

Tirana Pyramid: a future hackerspace?

Fedora 27 release party: Managua, Nicaragua

Posted by Fedora Community Blog on March 16, 2018 08:15 AM

On February 27th, the Fedora community in Nicaragua ran a release party for the F27 release. The activity took place in a salon of Hotel Mansión Teodolinda in Managua. This was our first activity of the year. The event came late in the Fedora development schedule, with the Fedora 28 release already coming soon this year, but we need to keep the community active and keep promoting the Fedora Four Foundations in Nicaragua. The event schedule was…

  1. A few talks about news of the Fedora 27 release
  2. Coffee break
  3. Questions and Answers

Fedora 27 release party in Managua, Nicaragua

What is new in Fedora 27?

This was the first talk of the event: a fast overview of the features in Fedora 27 and an overview of upcoming features for Fedora 28.

News in Fedora 27

Constant innovation sometimes makes it hard to keep track of the features available in the latest Fedora release. With this talk, we gave a fast overview of the development tools that make Fedora an awesome operating system for developers.

Also, we reviewed upcoming features for the Fedora 28 release. Some people were not aware that .NET is available on Linux, or of the work done by Fedora developers to fully support the Rust programming language. It looks like we do not have desktop users of Fedora i686 in Nicaragua, but people were interested in the 32-bit support for Fedora Server.

Introduction to Docker containers

Introduction to Docker Containers

Containers are a hot topic in the current Linux ecosystem, and Fedora offers first-class support for Docker containers with Atomic Host and the Cockpit project. With an introduction to this technology by Omar Berróteran, we wanted to show how Docker can be integrated into the development and deployment of critical apps that take advantage of the latest optimizations made in the Fedora system.

Python development with Fedora

Fedora Loves Python

Python loves Fedora! With this talk, Porfirio Páiz gave an introduction to the Python Classroom and a great overview of the Python stack available for developers.

Managua event in numbers

These are some statistics about the event:

  • Number of Attendees: 30
  • New FAS sign-ups: 1
  • Talks: 3
  • Fedora Contributors in the event: 8
  • Budget executed (snacks): USD 54.22

Also, we gave away many ISOs of the Workstation, Plasma and Classroom installers. Many people asked how to join the local community. We had a lot of interaction around the event on social networks.


This Fedora release party was a great event with a great response from the local community. It shows that people understand that the Fedora Project offers a rock-solid operating system, truly free and reliable, that is an amazing choice for people who need to get things done. After the event, many people showed interest in giving Fedora a try, and we shared the Fedora installation media with the attendees.

A point that we want to improve is recording our talks on video, so that people who cannot attend the event can watch them, creating a source of information for newcomers to the community.

The post Fedora 27 release party: Managua, Nicaragua appeared first on Fedora Community Blog.

PHP version 7.1.16RC1 and 7.2.4RC1

Posted by Remi Collet on March 16, 2018 06:14 AM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests (for x86_64 only), and also as base packages.

RPM of PHP version 7.2.4RC1 are available as SCL in remi-test repository and as base packages in the remi-php72-test repository for Fedora 25-27 and Enterprise Linux.

RPM of PHP version 7.1.16RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 26-27 or remi-php71-test repository for Fedora 25 and Enterprise Linux.

PHP version 7.0 is now in security mode only, so no more RC will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.2.2RC1 is also available in Fedora rawhide and version 7.1.16RC1 in updates-testing for Fedora 27, for QA.

EL-7 packages are built using RHEL-7.4.

The RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).

Software Collections (php71, php72)

Base packages (php)

Fedora Podcast 003 — Fedora Modularity

Posted by Fedora Magazine on March 16, 2018 05:42 AM

Langdon White

Episode 003 of the Fedora Podcast is now available. It features developer and software architect Langdon White from the Fedora Modularity team. Langdon also leads the Fedora Modularity objective. He is a passionate technical leader with a proven record of architecting and implementing high-impact software systems. In the podcast, Langdon defines Modularity in the Fedora context, explains the issues that can be solved with it, and describes how to help with this important project. You can read more about the Fedora Modularity objective in their Pagure project.

One important thing to note: as of episode 003, the Fedora Podcast is now available in iTunes. The Fedora Podcast series features interviews and talks with people who make the Fedora community awesome. These folks work on new technologies found in Fedora, produce the distribution itself, or work on putting Fedora in the hands of users.


Episode 003 is available on Soundcloud, iTunes, Stitcher, Google Play, Simplecast, and x3mboy's site. Full transcripts of Episode 003 are available here. Transcripts are also available for previous episodes!

Subscribe to the podcast

You can subscribe to the podcast in Simplecast, follow the Fedora Podcast on Soundcloud, on iTunes, Stitcher, Google Play, or periodically check the author’s site on fedorapeople.org.


This podcast is made with the following free software: Audacity, and espeak.

The following audio files are also used: Soft echo sweep by bay_area_bob and The Spirit of Nøkken by johnnyguitar01.

Add memtest86+ into your grub menu

Posted by Robbi Nespu on March 16, 2018 04:34 AM

I just bought an 8GB DDR3L SO-DIMM from Shopee. The seller said it is new and not refurbished. I need to check whether the memory has any corruption or unusable bits.

By default, after installing Fedora on your machine, there is no memtest86+ entry in the grub menu at boot. You need to add it manually:

$ sudo memtest-setup
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
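To confirm the entry was added, you can grep the generated config; a memtest86+ menu entry should show up:

$ grep -i memtest /boot/grub2/grub.cfg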

Please end Daylight Savings Time

Posted by Stephen Smoogen on March 15, 2018 10:37 PM
This was going to be my 3rd article this week about something EPEL-related, but I am having a hard time stringing words together coherently. The following boiled out instead, and I believe I have removed the profanity that leaked in.

So I, like millions of other Americans (except those people blessed to be living in Arizona and Hawaii), am going through the week-long jetlag when Daylight Savings Time starts. For the last 11? years the US has had DST start 2 weeks earlier than the rest of the countries which observe this monstrosity, I think to show the rest of the world why it is a bad idea. No one seems to learn, and instead they try to make it longer and longer.

I understand why it was begun during World War I, in order to make electricity costs for lighting factories cheaper. It just isn't solving that problem anymore. Instead I spend a week not really awake during the day, and for some reason as I get older, not able to sleep at all during the night. And I get crankier and more sarcastic by the day. Finally, sometime next Monday, I will conk out for 12-14 hours and be alright again. I would like to say I am an anomaly, but this seems to happen to a lot of people around the world, with higher numbers of heart attacks, strokes and accidents during the month of time changes.

So please, next time this comes up with your government (be it the EU, the US, the Canadian Parliament, etc.), write to your representatives that this needs to end. [For a fun read, the Wikipedia articles on various forms of daylight saving time cover the political philandering behind all this.]

Thank you for your patience while I whine about mere lack of sleep when there are a hell of a lot worse things going on in the world.

Using the Red Hat Developer Toolset (DTS) in EPEL-7

Posted by Stephen Smoogen on March 15, 2018 05:58 PM
One of the problems developers find in supporting packages for any long-lived Enterprise Linux is that it becomes harder and harder to compile newer software. Packages may end up requiring newer compilers and other tools in order to be built. Back-porting fixes or updating software becomes harder and harder because the tools are no longer available to make the newer code work.

In the past, this has been a problem with EPEL packages, as various software upstreams focus on newer toolkits to meet their development needs. This has led many packages to either be removed or left to mummify at some level. The problem occurs outside of EPEL also, which is why Red Hat has created a product called Developer Toolset (DTS) which contains newer gcc and other tools. This product uses software collections, which have had a mixed history with Fedora and EPEL, but were considered useful in this limited use.

How to Use DTS in spec files

In order to use DTS in a spec file you will need to do the following:
  1. If you are not using mock and fedpkg to build packages, you will need to add either the Red Hat DTS channel to your system or if you are using CentOS/Scientific Linux, you can add the repository following these instructions.
  2. If you are using mock/fedpkg, the scl.org repository should be available in the epel mock configs.
  3. In the spec file add the following section to the top area:
    %if 0%{?rhel} == 7
    BuildRequires: devtoolset-7-toolchain, devtoolset-7-libatomic-devel
    %endif

    Then in the build section add the following:

    %if 0%{?rhel}
    . /opt/rh/devtoolset-7/enable
    %endif

  4. Attempt to do a build using your favorite build tool (rpmbuild, mock -r , fedpkg mockbuild, etc).  
This should start flushing out what other things you might need to add to the BuildRequires, and similar problems. We in the EPEL Steering Committee would like to get feedback on this and work out what additions are needed to get this working for other developers.


There are several caveats to using the Developer ToolSet in EPEL.
  1. Packages may only have a BuildRequires: on the packages in the DTS. If your package needs to Require: something in the DTS or software collections at runtime, it can NOT be in EPEL at this time, as many users do not have these enabled or used.
  2. This is only for EPEL-7. At the moment, I have not set up DTS for EL-6 because it was not asked for recently. The Steering Committee would like to hear from developers if they want it enabled in EL-6.
  3. The architectures where DTS exists are: x86_64, ppc64le, and aarch64. There is no DTS for ppc64 and we do not currently have an EPEL for s390x.


Our thanks to Tom Callaway and many other developers for their patience in getting this working.


  • Originally the article stated that the text %if 0%{?rhel} == 7 should be used. That fails; the correct code is %if 0%{?rhel}.
  • If you build with mock, you are restricted to only pulling in the DTS packages. Currently koji does not have this limitation, which is being fixed.
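Putting the erratum's correction together with the steps above, the relevant spec file fragments look roughly like this (a sketch, not a complete spec):

%if 0%{?rhel}
BuildRequires: devtoolset-7-toolchain, devtoolset-7-libatomic-devel
%endif

%build
%if 0%{?rhel}
. /opt/rh/devtoolset-7/enable
%endif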

Fedora - Setting power management when low battery

Posted by Robbi Nespu on March 15, 2018 04:00 PM

Peace be upon you. I have an issue with Fedora power management: when my DELL Inspiron 14R 7420 SE battery is low, my laptop just shuts down and I lose what I am currently working on. It's bad when there is no autosave while editing some document or waiting for f***ing Android Studio to finish indexing or compiling.

Somehow our spinning disk can also get corrupted if this happens often. To be honest, I normally sleep late at night, mostly with my lappy on my desk (bad habit, pun intended huh >_<)

We can actually control what action to take when your battery / UPS reaches a certain percentage or time left. There are three options:

  • PowerOff
  • Hibernate
  • HybridSleep

I prefer to use hibernate, so if you like, please read and follow my note on how to hibernate Fedora; that said, it is recommended to always do hybrid-sleep instead of suspend or hibernation.

You need to open and edit the /etc/UPower/UPower.conf file. Here is an example of mine:
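For reference, the relevant keys look something like this (the values are examples, not recommendations; the comments in the shipped file document each key):

# /etc/UPower/UPower.conf (excerpt)
# Act on battery percentage rather than estimated time remaining.
UsePercentageForPolicy=true

# Thresholds used when UsePercentageForPolicy=true.
PercentageLow=15
PercentageCritical=5
PercentageAction=3

# Thresholds in seconds, used when UsePercentageForPolicy=false.
TimeLow=1200
TimeCritical=300
TimeAction=120

# Action at the critical threshold: PowerOff, Hibernate or HybridSleep.
CriticalPowerAction=Hibernate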

The file comes with comment, so just read and you should understand what to modified. Have a look on UsePercentageForPolicy=true, you can set it as false if you prefer to depend on battery time remaining before deplete istead of battery percentage.

Then take a look at the Percentage options (if you use UsePercentageForPolicy=true) or the Time options (if you use UsePercentageForPolicy=false) and set the values according to your needs.

Lastly, check CriticalPowerAction; here you should choose PowerOff, Hibernate or HybridSleep.

After you are done with the configuration, save the file (as root), then restart the upower service and check its status:

$ sudo systemctl restart upower.service
$ sudo systemctl status upower.service

Now your power management should work as you want. In the GNOME power settings, you should also activate automatic suspend to save your battery when you leave the computer unattended for too long.
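
If you prefer the terminal over the GNOME Settings panel, something like the following should do it (a sketch; the 15 minute timeout is just an example):

# suspend after 15 minutes of inactivity while on battery power
gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-battery-type 'suspend'
gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-battery-timeout 900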

That's all, thanks!

How I accidentally wrote a Wikipedia page on a layover in Dublin

Posted by Justin W. Flory on March 15, 2018 08:15 AM

One of the most unusual but wonderful experiences happened to me on a return trip from Europe to the United States.

A series of heavy nor'easters hit the US east coast over the last couple of weeks. This coincided with my travel dates back to Rochester, NY. While we didn't have flooding, we had a lot of snow. A lot of snow means canceled flights.

As I made my way through border control in Dublin, Ireland on March 7, I discovered my connection to New York City would likely be canceled. A meander from baggage claim to the check-in desk confirmed this. Fortunately, Aer Lingus had no issue putting me up in a hotel overnight with dinner and breakfast to catch the next flight to New York the next day.

While waiting in airport queues, a friend happened to retweet a local event happening in Dublin the next day.

The event was a local Wikimedia meet-up to celebrate International Women’s Day. Participants would create and edit Wikipedia pages for influential women in the history of the Royal College of Surgeons in Ireland. After digging deeper, I found out the event was 30 minutes away from my hotel from 09:30 to 12:30. My flight was at 16:10.

I put in my RSVP.

Meet the Wikimedia Ireland community

In an opportunistic stroke of fate, I would spend my extended layover for my first time in Dublin learning and listening about role model women in the Irish medicine community. I didn’t know it yet, but I would also take part in writing some of the history too!

Group photo of the participants and editors for the 2018 International Women’s Day edit-a-thon. Source: Twitter, @RCSILibrary


The first part of the morning was an introduction to editing on Wikipedia and establishing the focus for edits.

Manuscript letters of support by men from the RCSI archive for women being admitted to medical schools and accepted into the British Medical Association. #HeForShe! Source: Twitter, @RCSILibrary

The Royal College of Surgeons in Ireland (RCSI) started a new campaign to promote influential women in the history of the university. There is a historical board room in a prominent place on its campus. Inside the board room, there are portraits of influential people in the history of RCSI. But all of them are men. This makes it difficult for women to have role models or inspiration of women like them who “made it” in science and medicine.

At the same time, there was no shortage of influential women in the history of RCSI. Part of the morning was an introduction to primary sources that explained the pivotal work of female Irish doctors and pediatricians throughout the 20th century. After hearing about these inspirational women, it was a wonder: why were none of them represented in the board room?

This was actually the focus for the edit-a-thon. Recently, RCSI commissioned new portraits for some of the influential women alumnae. Half of the portraits in the board room would be relocated and replaced by the new portraits. This was part of their #WomenOnWalls campaign.

Discovering Victoria Coffey

After an introduction to the sources available and how to edit on Wikipedia, we began the editing. Organizers encouraged participants to improve an existing page first, since most of the participants were first-time editors.

Since I had some experience with MediaWiki mark-up and do a lot of writing, I decided to write a new page. There was a list of suggested women alumnae to write about. After hearing about Victoria Coffey, I decided to focus my two hours of writing on her legacy.

Project coordinator for Wikimedia Ireland, Rebecca O’Neill, introduces Wikipedia to students, librarians, and faculty (and me!). Source: Twitter, @DrConorMalone

Who is Victoria Coffey?

Victoria Coffey was an Irish pediatrician. She was an alumna of RCSI, and one of the first to research sudden infant death syndrome (SIDS). Coffey spent most of her time in medicine researching and studying congenital abnormalities in infants and pediatrics. Later in her life, she founded the Faculty of Paediatrics at the Royal College of Physicians of Ireland in 1981 and was the first female president of the Irish Paediatric Society.

Writing her Wikipedia page

With the help and guidance of the Wikimedia Ireland and RCSI staff, I found resources to research and learn more about Victoria Coffey. While some public sources were available, I was also provided with a primary source from a paid online Irish encyclopedia.

From there, I had the basis to begin writing a stub for her biography. I created an infobox to summarize some of her contributions, wrote a paragraph on her life, and left external links for someone to expand and write more in the future.

You can find her Wikipedia page online now. Since its creation, it has been viewed nearly 100 times and edited five times by three different people.

Thank you RCSI and Wikimedia Ireland!

In a strange and opportunistic stroke of fate, I was lucky to meet this local community and work with a room of inspiring women in medicine (students, alumnae, and faculty) on lowering the wiki gap of women on Wikipedia. It was a privilege to take part and learn a unique kind of history for Ireland in my short stay in Dublin.

Thank you for this great experience, RCSI and Wikimedia Ireland!

I’m not sure if this will make me anticipate flight cancellations more or less from now on.

The post How I accidentally wrote a Wikipedia page on a layover in Dublin appeared first on Justin W. Flory's Blog.

Day-to-day with Docker on AWS

Posted by kausdev on March 15, 2018 04:31 AM

Let's get to it, with the prerequisites and the setup.

I recommend reading the Docker Enterprise Edition for AWS documentation.

I am going from what I learned, and I will certainly forget it, which is why I am writing it down.

How to do a deployment

There are two deployment options, two ways to deploy Docker for AWS:

you can use a pre-existing VPC,
or you can use a new VPC created by Docker.

To stay out of trouble, I recommend letting Docker for AWS create your VPC, since that allows Docker to optimize your environment. Installing into an existing VPC requires more work, so you need to stay alert.

Let's create a brand-new VPC
I will try to show how you can create a new VPC, subnets, gateways and everything else needed to run Docker for AWS. The easiest and most practical way is to run the CloudFormation template, answer a few questions, and you are done. You gain time, so go grab a coffee, and once it finishes, go grab another one.
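
If you prefer the AWS CLI over the console, launching the stack can look roughly like this. This is a sketch: the template URL is the one I remember from the Docker docs, and the parameter names and values are illustrative, so check them against the current template before using it:

# launch the Docker for AWS CloudFormation stack from the command line
aws cloudformation create-stack \
  --stack-name docker-on-aws \
  --template-url https://editions-us-east-1.s3.amazonaws.com/aws/stable/Docker.tmpl \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=KeyName,ParameterValue=my-ssh-key \
               ParameterKey=ManagerSize,ParameterValue=3 \
               ParameterKey=ClusterSize,ParameterValue=5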

And now, installing using an EXISTING VPC
If you install Docker for AWS into an existing VPC, you need to take some preliminary steps. I won't go into details here, since these notes mostly serve to remind me, but I suggest consulting the recommended VPC and subnet configuration for more details; I will add a link for that.

Choose a VPC in the region you want to use.

Make sure the selected VPC is configured with an Internet Gateway, subnets and route tables. (Important: stay especially alert with free-tier VPCs.)

You need three different subnets, ideally each in its own Availability Zone. If you are running in a region with only two Availability Zones, you need to add more than one subnet to one of the Availability Zones. For production deployments, we recommend only deploying to regions with three or more Availability Zones.

When you launch the Docker for AWS CloudFormation stack, make sure to use the template for existing VPCs. That template asks for the VPC and the subnets you want to use for Docker for AWS.

Prerequisites for the divine miracle
Access to an AWS account with permissions to use CloudFormation and to create the following objects (the full set of required permissions applies):
EC2 instances + Auto Scaling groups
IAM profiles
DynamoDB tables
SQS queue
VPC + subnets and security groups
CloudWatch log group
An SSH key in AWS, in the region where you want to deploy (needed to access the completed Docker installation)
An AWS account that supports EC2-VPC (see the FAQ for details about EC2-Classic)
For more information about adding an SSH key pair to your account, see the Amazon EC2 Key Pairs documentation. Don't lose it, to spare yourself headaches.

I was reading that the AWS China and US Gov Cloud partitions are not currently supported, so that still needs further study.

Now let's look at the configuration
Docker for AWS comes as a CloudFormation template that sets up Docker in swarm mode, running on instances backed by custom AMIs. There are two ways to deploy Docker for AWS: you can use the AWS Management Console (browser based), or the AWS CLI. Both offer the following configuration options. (Attention!)

Choose the SSH key to be used when you SSH into the manager nodes.

Instance type
The EC2 instance type for your worker nodes.

The EC2 instance type for your manager nodes. The bigger your swarm, the bigger the instance size you should use.

The number of workers you want in your swarm (0-1000).

The number of managers in your swarm. On Docker CE you can select 1, 3 or 5 managers. We recommend only 1 manager for testing and development setups. There are no failover guarantees with 1 manager: if the single manager fails, the swarm goes down with it. In addition, upgrading single-manager swarms is not guaranteed to succeed.

On Docker EE, you can choose to run with 3 or 5 managers.

We recommend at least 3 managers, and if you have many workers, you should use 5 managers.

Enable this if you want Docker for AWS to automatically clean up unused space on your swarm nodes.

When enabled, docker system prune runs every day, starting at 1:42 AM UTC, on workers and managers. The prune times are staggered slightly so that not all nodes are pruned at the same time. This limits resource spikes in the swarm.

Pruning removes the following:

All stopped containers
All volumes not used by at least one container
All dangling images
All unused networks
Enable this if you want Docker to send your container logs to CloudWatch. ("yes", "no") Defaults to yes.

Size of the workers' ephemeral storage volume in GiB (20 - 1024).

Worker ephemeral storage volume type ("standard", "gp2").

Size of the managers' ephemeral storage volume in GiB (20 - 1024).

Manager ephemeral storage volume type ("standard", "gp2").
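
Once the stack is up, you drive the swarm by opening an SSH session to one of the manager nodes. A sketch (the key path and address are placeholders; Docker for AWS logs you in as the docker user, as far as I recall):

ssh -i ~/.ssh/my-ssh-key.pem docker@manager-public-ip
# once on a manager, the usual swarm commands work
docker node ls
docker service create --name web --publish 80:80 nginx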




How to test an update for EPEL

Posted by Stephen Smoogen on March 15, 2018 12:58 AM
Earlier this week the maintainer for clamav came onto the Freenode #epel channel asking for testers of EL-6. There was a security fix needing to be pushed to stable, but no one had given the package any karma in bodhi.

EPEL tries to straddle the slow and steady world of Enterprise Linux and the fast and furious world of Fedora. This means that packages are usually held in epel-testing for at least 14 days, or until the package has been tested by at least 3 people who give it a positive score in bodhi. Because EPEL is a 'Stone Soup' set of packages, it does not have a dedicated QA team testing every update; instead it relies on what people bring to the table, testing the things they need. This has its benefits, but it does lead to problems where someone who wants to get a CVE fix out right away has to find willing testers or wait 14 days for the package to auto-promote.

Since I had used clamav years ago, and I needed an article to publish on Wednesday.. I decided I would give it a go. My first step was to find a system to test with. My main website still runs happily on CentOS-6 and I saw that while I had configured spamassassin with postfix I had not done so with clamav. This would make a good test candidate because I could roll back to an older setup if the package did not work.

First step was to install the clamav updates. Unlike my desktop, where I have epel-testing always on, I keep the setup rather conservative on the web server. So to get the testing version of clamav I needed to do the following:

# yum list --enablerepo=epel-testing clamav*
Available Packages
clamav.i686 0.99.4-1.el6 epel-testing
clamav-db.i686 0.99.4-1.el6 epel-testing
clamav-devel.i686 0.99.4-1.el6 epel-testing
clamav-milter.i686 0.99.4-1.el6 epel-testing
I then realized I had only configured clamav with sendmail in the past (yes, it was a long time ago... I watched the moon landings too... and I can mostly remember what I had for breakfast). I googled through various documents and decided that a document at vpsget was a useful one to follow (thank you, vpsget). Next up was to see if the packages to be installed had changed, which they had not. So it was time to do an install:

# yum install --enablerepo=epel-testing clamav clamsmtp clamd
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
* base: mirrordenver.fdcservers.net
* epel-testing: mirror.texas3006.com
* extras: repos-tx.psychz.net
* updates: centos.mirror.lstn.net
Resolving Dependencies
--> Running transaction check
Is this ok [y/N]:

I didn't use a -y here because I wanted to confirm that no large number of dependencies or other things were pulled in. It all looked good so I hit y and the install happened. I then went through the other steps and saw that there was a change in setup from when the document was written.

[root@linode01 smooge]# chkconfig --list | grep clam
clamd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
clamsmtp-clamd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
clamsmtpd 0:off 1:off 2:off 3:on 4:on 5:on 6:off

I turned on clamsmtp-clamd instead of clamd, and continued through the configs. After this I emailed an EICAR test file to myself and saw it get blocked in the logs. I then looked at the CVE to see if I could trigger a test against it. It didn't look like I could, so I skipped that part. I then repeated the setup in an EL-6 VM I have at home to confirm that it worked there also. At this point it was time to report my findings in bodhi. I opened the report the owner had pointed me to and logged into the bodhi system. I added a general comment that I had tested it on 2 systems, and then +1'd the parts I had tested. Other people joined in, and this package was able to be pushed much earlier than it would have been otherwise.
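
If you want to reproduce that kind of smoke test, the standard EICAR string gives the scanner something safe to detect. A quick sketch (the mail step assumes a local MTA filtered through clamsmtp, and the address is a placeholder):

# write the harmless EICAR test string to a file (quoted heredoc avoids shell expansion)
cat > /tmp/eicar.txt <<'EOF'
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
EOF
# scan directly to confirm the signature database catches it
clamscan /tmp/eicar.txt
# then mail it to yourself and watch the filter block it in the logs
mail -s "eicar test" me@example.com < /tmp/eicar.txt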

There are currently 326 packages in epel-testing for EL-6. Please take the time to test one or two packages if you can. [I did pax-utils while writing this because I wanted to replicate the steps I had done. It needs 2 more people to test it, and it is a security fix also.]

Using Docker on AWS

Posted by kausdev on March 14, 2018 08:24 PM


Here I am giving a brief introduction to using Docker on Amazon; after that we will get hands-on, using Docker, Docker Compose, Ansible and Kubernetes. Whatever comes up, I will test it.
I believe the main intention is to show how easy it is to use and interact with Docker on AWS in your project, or at least to point out the path to follow.

Docker for AWS is being actively developed to ensure Docker users can enjoy a first-class experience within AWS. You may be curious about what this project is and what it has to offer for managing your development and production workloads.

Native Docker on AWS
Yes, AWS aims to provide a native Docker solution that avoids operational complexity and does not add unnecessary extra APIs.

You can interact directly with Docker, including the orchestration of your services (containers), without needing to navigate extra layers of the application. You can concentrate on what matters most: running your workloads. This helps you and your team deliver more value to the business, with more agility in processes and deliveries.

The skills you and your team have already learned, and keep learning, from using Docker on the desktop or elsewhere transfer automatically to using Docker on AWS. The consistency added across cloud services also helps ensure that a migration or a strategy targeting other clouds remains feasible.

Avoid rework

You can use the recommended infrastructure bootstrap to get started automatically. You don't need to worry about rolling your own instances, security groups or load balancers when using AWS. (Tip: this generates costs and should be studied so you don't get a shock when the bill arrives!)

Likewise, configuring and using the Docker swarm mode functionality for container orchestration is managed across the entire cluster lifecycle when you use AWS. Docker has already coordinated the various bits of automation you would otherwise be stitching together yourself to bootstrap Docker swarm mode on these platforms. When the cluster finishes bootstrapping, you can jump right in and start running docker service commands.

A prescriptive upgrade path is also provided, which helps users upgrade between Docker versions smoothly and automatically. Instead of experiencing the "maintenance problem" as you ponder your future responsibilities for upgrading the software you are using, you can easily upgrade to new versions when they are released.

Minimal base
The customized Linux distribution used on AWS is carefully built to run Docker. Everything, from the kernel configuration to the network stack, is customized to make it a favorable place to run Docker. For example, AWS says it makes sure the kernel versions are compatible with the latest and greatest Docker features, such as the overlay2 storage driver.

But nothing stops you from using whichever distro you like: CoreOS, Debian, Ubuntu... etc.

Self-cleaning and self-healing
Even the most conscientious administrator can be caught off guard by problems such as unexpectedly aggressive logging or the Linux kernel killing memory-hungry processes. With Docker for AWS, your cluster is resilient to a variety of such problems by default. (This is a serious problem, though I have never run into it myself.)

Host-native log rotation is configured for you automatically, so chatty logs don't use up all of your disk space. Likewise, the "system prune" option lets you make sure unused Docker resources, such as old images, are cleaned up automatically. The node lifecycle is managed using auto-scaling groups or similar constructs, so if a node enters an unhealthy state for unforeseen reasons, it is taken out of the load balancer rotation and/or replaced automatically, and all of its container tasks are rescheduled.

These self-cleaning and self-healing properties are enabled by default and need no configuration, so you can breathe easier knowing the risk of downtime is reduced.

Platform-native logging
Centralized logging is a critical component of many modern infrastructure stacks. Having these logs indexed and searchable is invaluable for debugging application and system problems as they surface. Out of the box, Docker for AWS forwards container logs to the cloud provider's native abstraction (CloudWatch).

Next-generation Docker error-reporting tools
A common pain point in open source issue reporting is effectively communicating the current state of your infrastructure, and the problems you are seeing, upstream. With Docker for AWS, you get new tools to communicate any problems you experience quickly and securely to Docker employees. The Docker for AWS shell includes a docker-diagnostic script that, at your request, transmits detailed diagnostic information to the Docker support team, reducing the traditional "please-post-the-output-of-this-command" back and forth often found in bug reports.

Part 1 of my article on RISC-V on LWN

Posted by Richard W.M. Jones on March 14, 2018 05:10 PM


I think part 2 will be next week.

LWN is a great publication, everyone should support it by subscribing.

Harden your JBoss EAP 7.1 Deployments with the Java Security Manager

Posted by Red Hat Security on March 14, 2018 01:30 PM


The Java Enterprise Edition (EE) 7 specification introduced a new feature which allows application developers to specify a Java Security Manager (JSM) policy for their Java EE applications, when deployed to a compliant Java EE Application Server such as JBoss Enterprise Application Platform (EAP) 7.1. Until now, writing JSM policies has been pretty tedious, and running with JSM was not recommended because it adversely affected performance. Now a new tool has been developed which allows the generation of a JSM policy for deployments running on JBoss EAP 7.1. It is possible that running with JSM enabled will still affect performance, but JEP 232 indicates the performance impact would be 10-15% (it is still recommended to test the impact per application).

Why Run with the Java Security Manager Enabled?

Running a JSM will not fully protect the server from malicious features of untrusted code. It does, however, offer another layer of protection which can help reduce the impact of serious security vulnerabilities, such as deserialization attacks. For example, most of the recent attacks against Jackson Databind rely on making a Socket connection to an attacker-controlled JNDI Server to load malicious code. This article provides information on how this issue potentially affects an application written for JBoss EAP 7.1. The Security Manager could block the socket creation, and potentially thwart the attack.

How to generate a Java Security Manager Policy


To generate a policy, you will need:
  • Java EE EAR or WAR file to add policies to;
  • Targeting JBoss EAP 7.1 or later;
  • Comprehensive test plan which exercises every "normal" function of the application.

If a comprehensive test plan isn't available, a policy could be generated in a production environment, as long as some extra disk space for logging is available and there is confidence the security of the application is not going to be compromised while generating policies.

Set up 'Log Only' mode for the Security Manager

JBoss EAP 7.1 added a new feature to its custom Security Manager that is enabled by setting the org.wildfly.security.manager.log-only System Property to true.

For example, if running in stand-alone mode on Linux, enable the Security Manager and set the system property in the bin/standalone.conf file using:

JAVA_OPTS="$JAVA_OPTS -Dorg.wildfly.security.manager.log-only=true"
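
Note that the log-only property only has an effect once the Security Manager itself is enabled. In EAP 7.1 that is typically done in the same bin/standalone.conf file; this is my assumption of the usual switch, so verify it against your installation:

# bin/standalone.conf: start the server with the Security Manager enabled
SECMGR="true"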

We'll also need to add some additional logging for the log-only property to work, so go ahead and adjust the logging categories to set org.wildfly.security.access to DEBUG, as per the documentation. One way to do this with the JBoss CLI could be:
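
$ $JBOSS_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=logging/logger=org.wildfly.security.access:add(level=DEBUG)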


Test the application to generate policy violations

For this example we'll use the batch-processing quickstart. Follow the README to deploy the application and access it running on the application server at http://localhost:8080/batch-processing. Click the 'Generate a new file and start import job' button in the Web UI and notice some policy violations are logged to the $JBOSS_HOME/standalone/log/server.log file, for example:

DEBUG [org.wildfly.security.access] (Batch Thread - 1) Permission check failed (permission "("java.util.PropertyPermission" "java.io.tmpdir" "read")" in code source 
"(vfs:/content/batch-processing.war/WEB-INF/classes <no signer certificates>)" of "ModuleClassLoader for Module "deployment.batch-processing.war" from Service Module Loader")

Generate a policy file for the application

Checkout the source code for the wildfly-policygen project written by Red Hat Product Security.

git clone git@github.com:jasinner/wildfly-policygen.git

Set the location of the server.log file which contains the generated security violations in the build.gradle script, i.e.:

task runScript (dependsOn: 'classes', type: JavaExec) {
    main = 'com.redhat.prodsec.eap.EntryPoint'
    classpath = sourceSets.main.runtimeClasspath
    args '/home/jshepher/products/eap/7.1.0/standalone/log/server.log'
}

Run wildfly-policygen using gradle, i.e.:

gradle runScript

A permissions.xml file should be generated in the current directory. Using the example application, the file is called batch-processing.war.permissions.xml. Copy that file to src/main/webapp/META-INF/permissions.xml, build, and redeploy the application, for example:

cp batch-processing.war.permissions.xml $APP_HOME/src/main/webapp/META-INF/permissions.xml

Where APP_HOME is an environment variable pointing to the batch-processing application's home directory.
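
For reference, a permissions.xml covering just the violation shown earlier would look roughly like this. This is a sketch following the Java EE 7 schema; the exact file wildfly-policygen generates may differ:

<?xml version="1.0" encoding="UTF-8"?>
<permissions xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="7">
    <!-- allow the deployment to read the java.io.tmpdir system property -->
    <permission>
        <class-name>java.util.PropertyPermission</class-name>
        <name>java.io.tmpdir</name>
        <actions>read</actions>
    </permission>
</permissions>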

Run with the security manager in enforcing mode

Recall that we set the org.wildfly.security.manager.log-only system property in order to log permission violations. Remove that system property or set it to false in order to enforce the JSM policy that's been added to the deployment. Once that line has been changed or removed from bin/standalone.conf, restart the application server, build, and redeploy the application.

JAVA_OPTS="$JAVA_OPTS -Dorg.wildfly.security.manager.log-only=false"

Also go ahead and remove the extra logging category that was added previously using the CLI; one way could be:
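
$ $JBOSS_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=logging/logger=org.wildfly.security.access:remove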


This time there shouldn't be any permission violations logged in the server.log file. To verify the Security Manager is still enabled look for this message in the server.log file:

INFO  [org.jboss.as] (MSC service thread 1-8) WFLYSRV0235: Security Manager is enabled


While the Java Security Manager will not prevent all security vulnerabilities possible against an application deployed to JBoss EAP 7.1, it will add another layer of protection, which could mitigate the impact of serious security vulnerabilities such as deserialization attacks against Jackson Databind. If running with the Security Manager enabled, be sure to check the impact on the performance of the application to make sure it's within acceptable limits. Finally, use of the wildfly-policygen tool is not officially supported by Red Hat; however, issues can be raised for the project on GitHub, or you can reach out to Red Hat Product Security for usage help by emailing secalert@redhat.com.




Chemnitzer Linux Tage 2018

Posted by Fabian Affolter on March 14, 2018 10:38 AM

As usual, the Fedora Project was present at the Chemnitzer Linux Tage 2018. For CLT, this was kind of their 20th birthday.

Instead of showing how you can use Fedora to do 3D printing, we went with two Fedora Demo stations. CLT is still attracting new Linux users and it’s nice to show them a running Fedora installation.

There are some stability issues when you are running Fedora on a single board computer. It was a bit annoying that we needed to power cycle our two Raspberry Pis on a regular basis. I am not sure if the GUI was the cause or the hardware itself.

This year we had the new Workstation guides to give away. They are a nice replacement for the media, and probably more sustainable than an installation disc.

People are still looking for live media. I'm not sure if they are collectors or actually use them, but most visitors understood why we no longer have media. So they went with a sticker or a pen instead.

During the event I needed to switch my hats, as CLT is also a little bit about Home Assistant. We got our second Thomas-Krenn award. This was very unexpected and a really nice surprise. I'm hoping that Fedora IoT will bring Fedora and Home Assistant closer together. It's a very long way to go and will require a huge amount of work.

If you are surrounded by other distributions, you will always hear the latest and greatest about their projects. At the end of the day it's a bit sad to see that, after almost 10 years (the first big attempt to join efforts between RPM-based distributions was made during LinuxTag in 2009), we are still spending time solving the same problems independently. I would really like to see openSUSE, Mageia and Fedora one day using the same workflow for building packages, with no need to maintain their own SPEC files.

Lucky me. At the end all caps were gone 😉

3 security videos from DevConf.cz 2018

Posted by Fedora Magazine on March 14, 2018 08:00 AM

The recent DevConf.cz conference in Brno, Czechia is an annual event run by and for open source developers and enthusiasts. Hundreds of speakers showed off countless technologies and features advancing the state of open source in Linux and far beyond. A perennially popular subject at open source conferences is security. Below is a selection of videos from the many outstanding sessions where presenters covered security topics.

Everyday security

Developers’ and administrators’ daily work can bring them into situations where mistakes can be costly. Miscreants can use numerous vectors to stage attacks or take advantage of software flaws. In this session, Christian Heimes shows how he has run into these issues in his work, and shares some thoughts on how to avoid common blunders. View the session here:

https://www.youtube.com/watch?v=HA932zMkLQc


Autonomous security agents

Computer attacks are basically driven by scripts. In seconds, they can recon, exploit, and collect data of interest. DARPA’s Cyber Grand Challenge this year showed that computer security must match the speed of these attacks. In this session, Steve Grubb covers how autonomous security agents can deal with these threats. Watch the session here:

https://www.youtube.com/watch?v=omrByMoey2A

SELinux loves Modularity

Currently, Fedora delivers the entire distribution SELinux policy in a single RPM package. This approach worked well when SELinux was first introduced. But as the legacy Fedora model starts to shift towards a decomposed, modular approach, so should the Fedora SELinux policy. In this session, Paul Moore talks about SELinux Modularity concepts, its advantages, and its necessity. Check out the talk here:

https://www.youtube.com/watch?v=7foVfBX0gH0&t=960

Nullcon 2018

Posted by Fabian Affolter on March 14, 2018 07:55 AM

Nullcon is a security conference which takes place in India. For me it was the second time I attended and it was again a very nice experience.

Jörg’s Audit +++ took place on Wednesday and Thursday, including the option to do the OPSE certification. The training session is not so much about technical skills but more about soft skills. It should help managers understand the work that security testers are doing, and help security testers do their work in a proper way.

Last year the infrastructure suffered from a couple of power outages; at some point the previously used OpenStack setup was just dead. To avoid this, I decided to go with actual hardware backed by a power bank. The three Orange Pi Zeros were running Armbian. All attempts with Fedora unfortunately failed.

The attendees are free to use whatever operating system or tools they want to perform the technical exercises. Last year we provided the Fedora Security Lab on USB keys, but we decided not to do that for this training. One reason was that most exercises could be performed without any help from a computer.

During the conference I orchestrated the crypto currencies village. It covered the basics of crypto currencies and included my “Mini mining rig”: an old Orange Pi PC with an attached USB ASIC miner. Not much to see, but it gave attendees a chance to look at real mining hardware. Fedora would have been my primary choice for the operating system, but again I needed to go with Armbian. It is way simpler and faster to get running.

As Nullcon is a security conference, you see a lot of Windows-related topics. But from my point of view it would be a perfect place to talk about the measures the Linux community is taking to make the world a more secure place. The exhibition area was always crowded. Even in 2018, the first question you get is “For what company do you work?”. It seems it is still not common for Open Source contributors to attend these kinds of events. Sure, I work for a company, but there I'm not representing that company but the Fedora Project. I'm still hoping to see more Open Source projects at these kinds of events.

If you are attending a conference on a different continent, one big plus is that you can meet people in real life, especially people you rarely meet online because their timezone is so different from yours that it's almost impossible to chat on a regular basis.

Anyway, I would like to thank the folks behind Nullcon for making it possible for me to be there.