Fedora People

FPgM report: 2019-24

Posted by Fedora Community Blog on June 14, 2019 09:26 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week. Elections voting is open through 23:59 UTC on Thursday 20 June.

I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

Announcements

Help wanted

  • Help with Flock tasks is appreciated. Contact bex to be added to the board.

Upcoming meetings

Fedora 31 Status

Changes

Announced

Submitted to FESCo

Approved by FESCo

The post FPgM report: 2019-24 appeared first on Fedora Community Blog.

An OpenJPEG Surprise

Posted by Michael Catanzaro on June 14, 2019 02:43 PM

My previous blog post seems to have resolved most concerns about my requests for Ubuntu stable release updates, but I again received rather a lot of criticism for the choice to make WebKit depend on OpenJPEG, even though my previous post explained clearly why there are not any good alternatives.

I was surprised to receive a pointer to ffmpeg, which has its own JPEG 2000 decoder that I did not know about. However, we can immediately dismiss this option due to legal problems with depending on ffmpeg. I also received a pointer to a resurrected libjasper, which is interesting, but since libjasper was removed from Ubuntu, its status is currently no better than OpenJPEG's.

But there is some good news! I have looked through Ubuntu’s security review of the OpenJPEG code and found some surprising results. Half the reported issues affect the library’s companion tools, not the library itself. And the other half of the issues affect the libmj2 library, a component of OpenJPEG that is not built by Ubuntu and not used by WebKit. So while these are real security issues that raise concerns about the quality of the OpenJPEG codebase, none of them actually affect OpenJPEG as used by WebKit. Yay!

The remaining concern is that huge input sizes might cause problems within the library that we don't yet know about. We don't know because OpenJPEG's fuzzer discards huge images instead of testing them. Ubuntu's security team thinks there's a good chance that fixing the fuzzer could uncover currently-unknown issues such as multiplication overflows, a class of vulnerability that OpenJPEG has clearly had trouble with in the past. It would be good to see improvement on this front. I don't think this qualifies as a security vulnerability, but it is certainly a security problem that, if fixed, would facilitate discovering currently-unknown vulnerabilities.

Still, on the whole, the situation is not anywhere near as bad as I’d thought. Let’s hope OpenJPEG can be included in Ubuntu main sooner rather than later!

Personal assistant with Mycroft and Fedora

Posted by Fedora Magazine on June 14, 2019 09:46 AM

Looking for an open source personal assistant? Mycroft lets you run an open source service which gives you better control of your data.

Install Mycroft on Fedora

Mycroft is currently not available in the official package collection, but it can be easily installed from the project source. The first step is to download the source from Mycroft’s GitHub repository.

$ git clone https://github.com/MycroftAI/mycroft-core.git

Mycroft is a Python application and the project provides a script that takes care of creating a virtual environment before installing Mycroft and its dependencies.

$ cd mycroft-core
$ ./dev_setup.sh

The installation script prompts the user with questions to guide the installation process. It is recommended to run the stable version and get automatic updates.

When prompted to install the Mimic text-to-speech engine locally, answer No. As the installation process notes, building Mimic can take a long time, and Mimic is available as an RPM package in Fedora, so it can be installed using dnf.

$ sudo dnf install mimic

Starting Mycroft

After the installation is complete, the Mycroft services can be started using the following script.

$ ./start-mycroft.sh all

In order to start using Mycroft, the device running the service needs to be registered. Registration requires an account, which can be created at https://home.mycroft.ai/.

Once the account is created, it is possible to add a new device at the following address: https://account.mycroft.ai/devices. Adding a new device requires a pairing code that will be spoken to you by your device after starting all the services.


The device is now ready to be used.

Using Mycroft

Mycroft provides a set of skills that are enabled by default or can be downloaded from the Marketplace. To start, you can simply ask Mycroft how it is doing, or what the weather is.

Hey Mycroft, how are you?

Hey Mycroft, what's the weather like?

If you are interested in how things work, the start-mycroft.sh script provides a cli option that lets you interact with the services using the command line. It also displays logs, which is really useful for debugging.
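For example, the following should drop you into the text client (a quick sketch; check the script's help output for the full list of options):

$ ./start-mycroft.sh cli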

Mycroft is always trying to learn new skills, and there are many ways to help by contributing to the Mycroft community.


Photo by Przemyslaw Marczynski on Unsplash

Baseball In Baltimore

Posted by Zach Oglesby on June 14, 2019 01:35 AM

Baseball game with the boys.

Untitled Post

Posted by Zach Oglesby on June 13, 2019 11:17 PM

Checked in at Oriole Park at Camden Yards for Toronto Blue Jays vs Baltimore Orioles. Baseball with the boys.

Modularity vs. libgit

Posted by Fedora Community Blog on June 13, 2019 06:13 PM

Fire! Libgit can’t be installed, module changes are being temporarily reverted, and one of our great contributors is thinking about moving their packages out of Fedora.

This blog post has been written to summarize the problem, explain how we got here, offer potential solutions that would work right now, and to set a common ground for a discussion on the devel list about how to address this and similar problems properly.

Summary of what happened

This is about two modules, silver and bat, that depend on another module, libgit. The problem is that libgit changes its ABI (application binary interface) quite often. With every upstream release, in fact. So naturally there are new module streams for it appearing in Fedora. And the silver and bat modules need to consume the new versions in order to keep functioning properly. However, changing the stream of a module they depend on causes trouble.

Right now, DNF just can’t handle it and keeps erroring out. However, with the current design, changing streams should not happen anyway, as explained below in the “Our intentions and how we got here” section.

The real issue is how we solve this problem. Not just to stop the errors, but fix this situation in general. There is a dependency (libgit in this example) that needs an ABI-incompatible upgrade, and that dependency is required by different pieces of software (including silver and bat, but also others) of which only some need that upgrade. Not upgrading it breaks silver and bat. Upgrading it breaks the others. What to do?

In Fedora (and probably other distros as well) we have this concept of compatibility packages. These are packages with different names providing different versions of the same library, potentially installable in parallel. So this problem is solvable with RPMs. But what about modules?

You can’t install two streams of a module on the same system at once. At first glance, that seems like the limiting factor here. And it is right now with the way libgit modules are structured. But there are existing approaches for fixing that.

Our intentions and how we got here

Principle 1: Only one stream of a given module can be installed on a system. That is because Modularity doesn’t modify the RPM packages in any way — they install files in standard paths so your system works as expected. The side effect of this is that installing multiple versions of one package is not possible, because the files would collide.

In some cases the package is designed with parallel-installation in mind and those cases can be enabled by providing a different module name to each version. (This is one of the possible solutions offered later in this post.)

Principle 2: Updates respect your choices. Running dnf update on a system will not change any module stream. So when you choose a particular module stream, your system is upgraded to the newest versions of RPMs within that stream.

But when changes are desired, there are explicit choices to be expressed. The user can explicitly change a module stream using a set of DNF commands when they are ready for the change.

This is quite clear and people have always liked this principle of stability.

But what about dependencies? Can a module change its dependencies (meaning its dependent module streams) during an update? Right now it cannot, because that would break this very principle. But there are ways to solve this problem in the “Solutions right now” section, and a discussion about a potential tweak of this principle in the “Potential solutions for the future” section.

Solutions right now

There are existing ways to solve the libgit issue. Let’s have a look at each of them.

Multiple module names to parallel install

As already mentioned, we have the concept of compatibility packages in Fedora that enables us to provide multiple versions of the same library, potentially parallel-installable. We can create these parallel-installable compatibility packages and put them into modules with different names, which would make the modules parallel-installable as well.

So instead of having libgit:0.26, libgit:0.27, and libgit:0.28, we could have something such as libgit-0.26:0.26, libgit-0.27:0.27, and libgit-0.28:0.28. It might not look ideal, but it would work today.

Module Stream Expansion — building one module against multiple streams of another one — would continue to work, because this mechanism allows expansion not just across multiple streams of the same module, but across multiple modules as well, by specifying multiple entries under the dependencies field — see the Complex example in the Modularity docs / Defining modules in modulemd.
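For illustration only, a rough modulemd sketch of what such alternative dependencies could look like (the module and stream names here are hypothetical, not actual Fedora definitions):

data:
  dependencies:
    - buildrequires:
        platform: [f30]
        libgit-0.27: [0.27]
      requires:
        platform: [f30]
        libgit-0.27: [0.27]
    - buildrequires:
        platform: [f30]
        libgit-0.28: [0.28]
      requires:
        platform: [f30]
        libgit-0.28: [0.28]

Each entry in the dependencies list describes one build/runtime combination, so the module gets built once against each of the differently-named compatibility modules.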

One stream with compatibility packages

Because libgit is changing so fast — reportedly changing its ABI with each upstream release — there could be just one libgit module containing compatibility packages for all versions. This could work if the RPMs define version-specific dependencies; however, it would prevent the packager from easily building against all versions using stream expansion as they could in the previous example.

Not modularizing libgit

Libgit could also be built as a set of non-modular RPM compatibility packages, having basically the same effect as the previous example.

New streams of silver and bat

This would not solve the problem directly, but it is also a valid technical way of getting rid of the errors — so it’s listed here to make the list complete.

When requiring a new module stream of its dependencies, the silver and bat modules could add new streams that would have this new dependency. Users would need to switch to this stream to use the versions depending on the new libgit stream. However, if there was anything else installed on the system that requires the old stream of libgit, this approach wouldn’t work.

Bundle libgit into the silver and bat modules

This one is tricky, might have weird consequences, and is not recommended. But again, it is listed here to make the list complete.

The silver and bat modules could bundle the libgit package themselves and it would probably work.

The result, though, would be two instances of the libgit package in the repository. And when both the silver and bat modules got installed, only one of the two libgit packages would get installed, based on their name-version-release (NVR). This is a general issue of overlapping modules.

Potential solutions for the future

One option is to allow switching streams in the background under some very specific conditions — such as only enabling this for modules that have been enabled as a dependency (implicitly). Modules installed explicitly would never automatically switch streams. This approach would probably not solve the problem of different packages requiring different versions of a library at the same time, but might be worth looking into.

Let’s discuss potential better ways to deal with this kind of problem in the future on the devel list.

The post Modularity vs. libgit appeared first on Fedora Community Blog.

PHP version 7.2.20RC1 and 7.3.7RC1

Posted by Remi Collet on June 13, 2019 11:26 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests (for x86_64 only), and also as base packages.

RPM of PHP version 7.3.7RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30 or remi-php73-test repository for Fedora 27-29 and Enterprise Linux.

RPM of PHP version 7.2.20RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 28-29 or remi-php72-test repository for Fedora 27 and Enterprise Linux.

 

PHP version 7.1 is now in security mode only, so no more RC will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.3 as Software Collection:

yum --enablerepo=remi-test install php73

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Update of system version 7.3:

yum --enablerepo=remi-php73,remi-php73-test update php\*

or, the modular way (Fedora and RHEL 8):

dnf module enable php:remi-7.3
dnf --enablerepo=remi-modular-test update php\*

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

or, the modular way (Fedora and RHEL 8):

dnf module enable php:remi-7.2
dnf --enablerepo=remi-modular-test update php\*

Notice: version 7.3.5RC1 is in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.6.

Packages of 7.4.0alpha1 are also available as Software Collections.

The RC version is usually the same as the final version (no changes accepted after an RC, except for security fixes).

Software Collections (php72, php73)

Base packages (php)

Fedora 30 Release Party Mexico City

Posted by Fedora Community Blog on June 13, 2019 06:46 AM

On May 23, 2019, the Fedora community in Mexico City ran an awesome Fedora 30 Release Party. This activity took place in the local Red Hat office. We really appreciate the space for our activities, and particular thanks go to Alex Callejas (darkaxl017) for doing all the necessary paperwork.

We had three main activities: an amazing talk from Rolando Cedillo (@rolman) about KVM in Fedora, a Q&A session, and our networking time with pizza and Fedora cupcakes.

Mexico City event in numbers:

Conclusion

This Fedora Release Party was a great event with a great response from the local community. It shows that people understand that the Fedora Project offers a rock solid operating system that is truly free, reliable, and an amazing choice for people that need to get things done.

The post Fedora 30 Release Party Mexico City appeared first on Fedora Community Blog.

Find files bigger than a specific file size

Posted by Wilfredo Porta on June 12, 2019 05:47 PM

To achieve this, we can use the command find with its -size flag.

Example:

To find files larger than 10MB:

find . -type f -size +10M

If you want to find only in the current directory:

find . -maxdepth 1 -type f -size +10M
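The -size tests can also be combined to match a range, and find can be paired with ls to show the sizes of the matches (standard find usage, shown here for convenience):

find . -type f -size +10M -size -100M
find . -type f -size +10M -exec ls -lh {} \;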

Keep SSH session alive

Posted by Wilfredo Porta on June 12, 2019 05:40 PM

To avoid having your SSH session timeout due to inactivity, you can tweak your server and client settings.

Server side

Edit the file: /etc/ssh/sshd_config
Set the values (the server sends a keepalive probe every 120 seconds and disconnects only after 720 consecutive unanswered probes, i.e. 24 hours):

ClientAliveInterval 120
ClientAliveCountMax 720

Client Side

Edit the file: ~/.ssh/config
Set the value:

ServerAliveInterval 120
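If you only want the keepalive for specific servers, the same setting can be scoped to a Host block (a sketch with a placeholder hostname):

Host server.example.com
    ServerAliveInterval 120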

That should do the trick!

Untitled Post

Posted by Zach Oglesby on June 12, 2019 04:51 PM

Checked in at &pizza

Sending logs to Google Stackdriver using syslog-ng PE

Posted by Peter Czanik on June 12, 2019 10:28 AM

Google Stackdriver collects and analyses logs, events and metrics of your infrastructure. Using syslog-ng PE 7.0.14 or later, you can send your logs to Google Stackdriver. While originally designed to quickly respond to events in the Google Cloud Platform (GCP), you can use Google Stackdriver with other cloud providers (like Amazon Web Services) or with on-premises data as well. This way you can view events of a hybrid infrastructure at a single location.

Before you begin

In order to test the Google Stackdriver destination of syslog-ng PE you need two things:

  • a Google account

  • syslog-ng PE 7.0.14 or later

You can test both for free for a limited amount of time. You can learn more about syslog-ng PE and ask for a trial version at https://www.syslog-ng.com/products/log-management-software/

Configuring Google Stackdriver

It is very tempting just to click “Try it free” on the Google Stackdriver page. It most likely works starting from there if you are already a GCP customer. But if you are just trying to push your on-premises or AWS logs to Google Stackdriver, you had better follow the steps outlined in the syslog-ng PE documentation: http://support.oneidentity.com/technical-documents/syslog-ng-premium-edition/7.0.14/administration-guide/sending-and-storing-log-messages-destinations-and-destination-drivers/stackdriver-sending-logs-to-the-google-stackdriver-cloud/configuring-syslog-ng-pe-to-send-logs-to-google-stackdriver

Make sure that you complete all the steps outlined in the documentation. Save the JSON file containing the key for the service account to a location where you can easily find it and note down the project ID.

Configuring syslog-ng PE

If you have not done so yet, install syslog-ng PE in server mode. For that you need a valid (trial) license. Once syslog-ng is up and running locally you can add the Stackdriver destination.

First of all, copy the downloaded JSON file to the location of your syslog-ng configuration, the /opt/syslog-ng/etc/ directory.

Next, append a few lines to the syslog-ng PE configuration in /opt/syslog-ng/etc/syslog-ng.conf. After installation, it provides you with a working minimal configuration that collects local log messages and saves them to /var/log/messages. The name of the source it creates is s_local. We reuse it for the Stackdriver destination in the log statement:

destination d_stackdriver {
  stackdriver(
    gcp_auth_header(
      credentials("/opt/syslog-ng/etc/czpsngstackdriver-01fcc6750db7.json")
    )
    log_id("mylogid")
    resource(
      generic_node(
        project_id("czpsngstackdriver")
        location("EU/Budapest")
        namespace("my cluster")
        node_id("$HOST")
      )
    )
  );
};

log {source(s_local); destination(d_stackdriver);};

As you can see numerous times in the documentation, while most of syslog-ng uses the hyphen (-) and underscore (_) characters interchangeably, in the Stackdriver destination driver you have to use the underscore.

For credentials() you need to provide the location of the JSON file. For project_id use the name you entered on the web interface.

Read the Google Stackdriver documentation to learn more about the other fields in the configuration. It is available at https://cloud.google.com/monitoring/api/resources

Testing

Once you have everything configured, reload syslog-ng so the new configuration comes into effect.
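One way to do that, assuming the default syslog-ng PE installation path:

sudo /opt/syslog-ng/sbin/syslog-ng-ctl reload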

In a short while your log messages should be arriving in Google Stackdriver. On the Dashboard, click on Logging on the left-hand side. This will open up a new tab in your browser.

Note that when you first open this page, you will see the service account logs. In the drop-down menu at the top left of the screen, switch to Generic Node. You should see your Linux logs on the screen.

Uploading name-value pairs

The default template sends the basic syslog fields in JSON format to Stackdriver. You can extend your configuration to send name-value pairs created by syslog-ng as well. For example, recent versions of syslog-ng parse sudo log messages automatically and create name-value pairs out of them. These start with .sudo, where the leading dot is turned into an underscore when formatted as JSON.

Here is the line you need to add to your configuration (for example right under log_id()):

    json-payload("$(format-json --scope rfc5424 --scope dot-nv-pairs --exclude DATE --key ISODATE)")

Check the value-pairs documentation for a detailed description of possible configuration options for the JSON template.

Testing name-value pairs

The easiest way is to enter a few commands through sudo and then search for the results in the Stackdriver web interface. You can search the content of the JSON fields easily, as the search interface helps you find the names of the fields. Here I searched for sudo logs coming from user czanik:

Stackdriver -- sudo

Learn more

If you want to learn more about how to use syslog-ng PE with Google Stackdriver, join our next webinar: https://www.syslog-ng.com/event/live-webinar-how-to-use-the-syslogng-pes-new-google-stackdriver-destin8139195/

If you’d like to try sending logs with syslog-ng PE to Google Stackdriver, download a trial version of syslog-ng Premium Edition.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/balabit/syslog-ng. On Twitter, I am available as @PCzanik.

Installing alternative versions of RPMs in Fedora

Posted by Fedora Magazine on June 12, 2019 08:00 AM

Modularity enables Fedora to provide alternative versions of RPM packages in the repositories. Several different applications, language runtimes, and tools are available in multiple versions, built natively for each Fedora release.

Fedora Magazine already covered Modularity in Fedora 28 Server Edition about a year ago. Back then, it was just an optional repository with additional content, and as the title hints, only available to the Server Edition. A lot has changed since then, and now Modularity is a core part of the Fedora distribution. And some packages have moved to modules completely. At the time of writing — out of the 49,464 binary RPM packages in Fedora 30 — 1,119 (2.26%) come from a module (more about the numbers).

Modularity basics

Because having too many packages in multiple versions could feel overwhelming (and hard to manage), packages are grouped into modules that represent an application, a language runtime, or any other sensible group.

Modules often come in multiple streams — usually representing a major version of the software. Streams are available in parallel, but only one stream of each module can be installed on a given system.

And not to overwhelm users with too many choices, each Fedora release comes with a set of defaults — so decisions only need to be made when desired.

Finally, to simplify installation, modules can be optionally installed using pre-defined profiles based on a use case. A database module, for example, could be installed as a client, a server, or both.

Modularity in practice

When you install an RPM package on your Fedora system, chances are it comes from a module stream. The reason why you might not have noticed is one of the core principles of Modularity — remaining invisible until there is a reason to know about it.

Let’s compare the following two situations. First, installing the popular i3 tiling window manager, and second, installing the minimalist dwm window manager:

$ sudo dnf install i3
...
Done!

As expected, the above command installs the i3 package and its dependencies on the system. Nothing else happened here. But what about the other one?

$ sudo dnf install dwm
...
Enabling module streams:
dwm 6.1
...
Done!

It feels the same, but something happened in the background — the default dwm module stream (6.1) got enabled, and the dwm package from the module got installed.

To be transparent, there is a message about the module auto-enablement in the output. But other than that, the user doesn’t need to know anything about Modularity in order to use their system the way they always did.

But what if they do? Let’s see how a different version of dwm could have been installed instead.

Use the following command to see what module streams are available:

$ sudo dnf module list
...
dwm latest ...
dwm 6.0 ...
dwm 6.1 [d] ...
dwm 6.2 ...
...
Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled

The output shows there are four streams of the dwm module, 6.1 being the default.

To install the dwm package in a different version — from the 6.2 stream for example — enable the stream and then install the package by using the two following commands:

$ sudo dnf module enable dwm:6.2
...
Enabling module streams:
dwm 6.2
...
Done!
$ sudo dnf install dwm
...
Done!

Finally, let’s have a look at profiles, with PostgreSQL as an example.

$ sudo dnf module list
...
postgresql 9.6 client, server ...
postgresql 10 client, server ...
postgresql 11 client, server ...
...

To install PostgreSQL 11 as a server, use the following command:

$ sudo dnf module install postgresql:11/server

Note that — apart from enabling — modules can be installed with a single command when a profile is specified.

It is possible to install multiple profiles at once. To add the client tools, use the following command:

$ sudo dnf module install postgresql:11/client

There are many other modules with multiple streams available to choose from. At the time of writing, there were 83 module streams in Fedora 30. That includes two versions of MariaDB, three versions of Node.js, two versions of Ruby, and many more.

Please refer to the official user documentation for Modularity for a complete set of commands including switching from one stream to another.
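As a rough sketch of what the documented stream switch looks like, continuing the dwm example (consult the documentation before doing this on a real system):

$ sudo dnf module reset dwm
$ sudo dnf module enable dwm:latest
$ sudo dnf distro-sync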

Outreachy with Fedora Happiness Packets: Phase 1

Posted by Fedora Community Blog on June 12, 2019 06:52 AM
Fedora Happiness Packets - project update

This blog post summarizes what I’ve completed in Phase 1 of my Outreachy internship with Fedora Happiness Packets, things I learned, and the challenges I faced 🙂

What progress did I make?

  1. WYSIWYG Editor Integration: Prior to this change, Happiness Packets could not be formatted, nor could they embed pictures. CKEditor integration enables users to send messages with rich text embedded. It also supports embedding images via links.
  2. Message confirmation link is now only accessible to the sender: When a sender composes a Happiness Packet and sends it, they first receive a confirmation email to confirm sending the message. Previously, this link was accessible to anyone, even a user that is not logged in. As this shouldn’t be the case, resolving this bug means only the sender of the message can access the link to confirm sending their Happiness Packet.
  3. Auto Reload code changes: This was the one I was personally most excited about. Prior to this change, for any modifications made to the code base, we had to rebuild the container for them to take effect. In a continuous development workflow, one can imagine how tiresome waiting for containers to be built would be. Now any changes made to the back-end or front-end are reflected without rebuilding the containers, which speeds up the development workflow by a great measure! (See the sketch after this list.)
  4. Customize Admin Interface for Message Model: For admin users, the Django admin site is a handy tool to access the database. One of the most important functionalities for admins in Fedora Happiness Packets is to approve messages to be displayed in the happiness archive. Prior to this change, due to the lack of filters, admins had to manually find messages that had not been approved. In this change, I introduced better listing of the messages so it’s easier to see at a single glance which messages need approval. I added filters so that messages that have been approved by sender and receiver can easily be filtered for the admins to grant permission to. I also formatted the message detail page to better categorize each section of the various message model fields.
  5. RPM package builds in COPR: This was the most challenging task of all, but it was totally worth it for the awesome badge I got for it! The PyPI package for the message schema also needs to be packaged as an RPM package to deploy the application in production. I created the RPM package and finally got it building successfully in a COPR repository.
  6. Custom Logout function: Fedora Happiness Packets uses Mozilla Django OIDC. When logging out, the user is logged out of the host application, but the OpenID Provider keeps them logged in. Logging out of both at once is often referred to as Single Sign-Out. This is essential to prompt the user for credentials after they log out of the application. This is a WIP.
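As a generic illustration of item 3 (this is not the project’s actual compose file; the service and path names are made up), the usual pattern is bind-mounting the working tree into the container so code edits are seen live:

services:
  web:
    build: .
    volumes:
      # mount the source tree over the code baked into the image,
      # so edits on the host apply without rebuilding the container
      - .:/app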

What did I learn?

Here comes the best part. This initial phase had a lot of new learnings in store for me. I learned about the workings of CKEditor in detail in order to integrate it. I learned a great deal about Docker and how volumes work. I learned how RPM packages are built and how to debug failing builds in COPR. I also learned about OpenID, the workflow in which it provides authentication, and why OP logout is not considered an issue.

Apart from all that I learned while solving these issues, I also learned how to deploy a Django application and did the same in Vagrant.

Challenges? Heck yes!

I had my fair share of moments when nothing seemed to work. The second week of the internship was that challenging time for me. I was working on RPM builds which were failing, and I had no clue why or how to debug them. The next issue I was tackling at the same time was auto-reloading changes, trying to figure out how to integrate this for the back-end while not overwriting the container build process due to volumes. And lastly, the custom logout function for logging out of the OP was proving difficult to find documentation about. While this struggle is hard and sometimes makes you question yourself, it is only in these times that you learn in abundance. While trying to find solutions to my problems I gained a deeper understanding of all three of these topics.

My mentors jflory7, skamath, cverna, bt0dotninja, AnXh3L0 and jonatoni were a great help. They helped during every step of the way and provided me with the essential resources. They also directed me to the right place to ask my doubts.

That summarizes phase 1 of my Outreachy internship! Hopefully the next phase will turn out to be just as fun and challenging! 🙂

The post Outreachy with Fedora Happiness Packets: Phase 1 appeared first on Fedora Community Blog.

Cockpit 196

Posted by Cockpit Project on June 12, 2019 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 196.

Machines: Installation ISOs can be specified with a URL

Cockpit can now create a virtual machine from an ISO disc image URL.

Virtual machine from URL

Machines: IP addresses for network interfaces

The information about network interfaces now includes their IP addresses.

Interface addresses

Try it out

Cockpit 196 is available now:

Katacoda scenario creation

Posted by Pablo Iranzo Gómez on June 11, 2019 07:16 PM

After some time checking the scenarios at https://learn.openshift.com, I decided to give it a try.

With the help of Mario Vázquez, author of Getting Started with Kubefed, I created two scenarios.

You can check how they were created by looking at their code at Katacoda Scenarios, or try the ‘playable’ version at https://www.katacoda.com/iranzo/.

Enjoy!

Fedora Gooey Karma Week 2 report GSoC

Posted by Fedora Community Blog on June 11, 2019 04:05 PM
This is a generic banner that goes with Weekly Updates
  • Fedora Account: imzubin
  • IRC: iamzubin || iamzubin_ (found in #fedora-summer-coding, #fedora-devel, #fedora-qa )
  • Fedora User Wiki Page

Tasks Completed

Mockup for Fedora-Gooey-Karma

We got the first mockup ready, thanks to Duffy.


Community Feedback

We discussed the features that previous users liked, and also decided on the frameworks to use. These are the issues:

Framework

Design and features

What’s Happening

Bodhi API

Right now I am getting familiar with the Bodhi API and its Python bindings.
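A first experiment with the bindings might look something like this (a sketch, assuming the bodhi-client package is installed; the query parameters are just an example):

from bodhi.client.bindings import BodhiClient

client = BodhiClient()
# fetch updates currently in testing for Fedora 30
response = client.query(releases='F30', status='testing')
for update in response['updates']:
    print(update['title'])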

QT design for Mockup

I am also working on the Qt design, implementing the mockup attached above.


Please send in your feedback at zchoudhary [dot] 10 [at] gmail [dot] com

The post Fedora Gooey Karma Week 2 report GSoC appeared first on Fedora Community Blog.

What, Why and How: Outreachy 101

Posted by Fedora Community Blog on June 11, 2019 07:29 AM
Outreachy 2019 with Fedora Happiness Packets: application period

This is part of a recurring series between May – August 2019 on the Community Blog about Fedora Happiness Packets. These posts are published as part of a series of prompts from the Outreachy program.

I recently got selected for Outreachy with Fedora and thought I should document the entire process for other curious souls looking to participate! 🙂

Note: This article by no means provides a ‘hack’ or definite steps to get into Outreachy. These are just my thoughts on what worked for me.

The What

As mentioned on the official Outreachy website:

Outreachy provides internships to work in Free and Open Source Software (FOSS). It’s a three month remote internship.

Outreachy internships provide a wide variety of domains to work in. Projects may include programming, user experience, documentation, illustration, graphical design, or data science.

outreachy.org

If you’ve heard about Google Summer of Code, Outreachy in many ways is similar to it with some key differences:

  • While Outreachy is an internship, GSoC is not considered an ‘internship’.
  • Outreachy is open to only under-represented individuals in tech whereas GSoC is a student-centric program.
  • Outreachy is semiannual while GSoC runs once per year.
  • Stipends for Outreachy are fixed while GSoC stipends may vary depending on where you reside.

Who can apply?

Outreachy has very strict eligibility criteria. Two of the most important ones are:

  • Anyone who faces under-representation, systemic bias, or discrimination in the technology industry of their country.
  • You must have at least 49 consecutive days free from full-time commitments.

You can check the complete eligibility criteria on the official website. The initial application verifies your eligibility. More on the same is detailed below.

When can you start?

Outreachy runs twice a year: mid-year and end of year. For the mid-year round, the application period starts in February and the internship concludes in August, while for the end-of-year round, the application period starts in September and the internship concludes in March. If you’re a student, depending on where your school is, you can apply to either of these rounds.

The Why

Now that all that is out of the way, essentially the most important question to ask here is: why do you want to apply for Outreachy? Summer of Code has gained a considerable amount of traction in the last few years, and if you’re based in India, there are GSoC/Outreachy fellows left, right, and centre. Don’t do it because everyone you know is doing it, peer pressure be damned.

Here are some takeaways from my experience that hopefully will help you choose if Outreachy is the best bet for you!

  1. Quality and abundance of learning. Contributing to a FOSS project is a huge step up from a personal project. You get to learn plenty and get a taste of what it’s like to be a part of a big system with various working parts. Contributing to the project not only enhances your skill set as a software developer but also endows you with essential soft skills.
  2. Gateway into Open Source Software development. If you’re passionate about OSS and have been looking to contribute, Outreachy is an effective precursor. Getting involved with OSS can be tricky and often daunting for beginners, as was the case with me. Just knowing who to ask for help is a barrier breaker. Outreachy provides specific steps on how to connect with your mentor and contact them in case you feel stuck or are in need of help. This brings me to my third most important takeaway.
  3. Build your network. Irrespective of whether you get selected or not, while contributing to the project you get to know and work with experienced and highly talented software developers and your fellow talented applicants. Interacting and working with OSS veterans across the globe teaches you the current best practices, brings new and enlightening perspectives into focus and exposes you to opportunities you might not have stumbled upon elsewhere.
  4. Credibility. If you’re like me and regularly need a reminder to hush that small voice in the back of your head telling you that you’re not skilled enough, or often doubt your own capabilities as a software developer, Outreachy will provide that much needed boost to your self-esteem.
  5. Monetary Perks. Outreachy provides $5500 USD to each intern as an incentive to get involved in OSS. Additionally, Outreachy also provides a $500 USD travel stipend to attend workshops/conferences.

The How

Now if most of the above reasons seem fitting, let’s get into the stepping stones towards getting selected for Outreachy.

The Initial Application

The first step during the application period is the initial application. This is for the Outreachy organisers to verify your eligibility. It requires you to answer four essay-based questions and some others to verify your time availability.

This must be taken extremely seriously. Only after the initial application is accepted are the projects made visible to the applicants.

Selecting a Project

Going through the list of projects I had these important points in mind:

  1. Look for a project that will enhance your current skill set and simultaneously nudge you to expand your knowledge spectrum.
  2. Question to self: Do you see yourself contributing to the project long after the internship is done? If the answer is yes, you’re good to go.
  3. Don’t go after an organisation because you’ve heard too much about it. Again, a tag won’t help you if your heart’s not in it.

Don’t select too many projects and juggle between them, as that will only divide time that could instead be devoted to understanding one or two projects and giving them your best input.

Contribution Period

Outreachy requires you to solve at least one issue to submit a final application. After introducing yourself on the community’s preferred mode of communication, go on the hunt for your first issue to solve.

This is one of the most crucial periods, based on which mentors decide whether or not you’re fit for the project. Some key points to keep in mind are:

  1. Solve as many bugs as possible. Don’t just go for issues that fall under your spectrum of knowledge; try to solve issues that urge you to step outside your comfort zone and learn new things on the fly. This will help mentors see your ability to learn and adapt according to project requirements. Be involved not only by contributing but also by opening issues when you come across a bug.
  2. Be an active member of the community. Communicate effectively with your mentors. Follow the etiquette for communicating on a public platform. Keep them updated on your progress and any obstacles you’re facing.
  3. Do not ask your mentors questions unless you’ve done enough research about the topic first. Respect their time and efforts, and use Google and Stack Overflow in abundance.
  4. Help others out as much as possible. I can’t stress this enough: don’t make this into a dirty race where you belittle your co-applicants or the like. Genuinely help other contributors and build a supportive community.
  5. Most of all, have fun! Strive for those Eureka moments when you solve bugs or add new features. Give yourself a pat on the back, you’ve earned it!

Note: If you’re like me and big code-bases seem daunting, remember, you don’t need to know the code in its entirety. Start small and build from there.

The Final Application

The final application requires you to record all your contributions, mention your experience working with the organisation, give details of any past projects you’ve made or FOSS organisations you’ve worked with, and supply a timeline mapping out the course of action for the next three months of the internship. Each and every step is important for Outreachy organisers to understand whether you’re suitable for the internship, so give plenty of time to each and mention any and all details.

After submitting the final application, you can keep on contributing to the project and maintain a steady communication with the mentors.

And, you’re IN!

That’s it. As I start my internship with Fedora I can safely say, all it needs is consistent efforts by a passionate being.

You can read about my experience during the application period here.

Hope you found something valuable here and most of all the drive to apply for that Summer of Code you’ve been wanting to since eternity. Quit questioning, take a leap of faith and dive right in!

Feel free to reach out about any doubts concerning the application process, I’d love to help you out! 🙂

The post What, Why and How: Outreachy 101 appeared first on Fedora Community Blog.

Slow boot: LVM2 PV scan on device

Posted by Lukas "lzap" Zapletal on June 11, 2019 12:00 AM

Slow boot: LVM2 PV scan on device

My workstation was sometimes slow to boot and got stuck at:

Start job is running for LVM PV Scan on device 0:1 ...
Start job is running for activation of DM RAID sets ...

These jobs were taking minutes, and I am not a patient person, so it always ended up in a hard reboot. Today, I decided to take a look. After a quick search, it looks like the LVM scan can get stuck when searching block devices which are not part of the LVM configuration. What’s weird in my case is that lvscan is quick after the system is booted:

# time vgscan
  Reading volume groups from cache.
  Found volume group "vg_home" using metadata type lvm2
  Found volume group "vg_virt" using metadata type lvm2
real    0m0,051s
user    0m0,016s
sys     0m0,010s

It must be something else; I suspect that it’s the CDROM device, which is not yet fully initialized. This was happening randomly: most of the time it booted quickly, but when it did not, it was a pain in the ass. Let’s have a look. I have the following block devices as part of my LVM:

# pvdisplay | grep PV.Name
  PV Name               /dev/sda1
  PV Name               /dev/nvme0n1p5
  PV Name               /dev/sda3

However, it looks like the LVM scan needs to sniff more of them:

# cat /etc/lvm/cache/.cache

# This file is automatically maintained by lvm.
persistent_filter_cache {
        valid_devices=[
                "/dev/disk/by-path/pci-0000:02:00.1-ata-1-part3",
                "/dev/disk/by-id/wwn-0x5000c500a2e9c4ca-part1",
                "/dev/block/259:5",
                "/dev/disk/by-id/nvme-eui.0025385471b19e14-part5",
                "/dev/disk/by-id/wwn-0x5000c500a2e9c4ca-part3",
                "/dev/disk/by-partuuid/fa532780-e770-448d-b004-120f1128b5b8",
                "/dev/disk/by-path/pci-0000:02:00.1-ata-1-part1",
                "/dev/disk/by-partuuid/35f3b55b-5259-45e2-aa93-b8eb8b9b0cd0",
                "/dev/block/8:1",
                "/dev/nvme0n1p5",
                "/dev/sda1",
                "/dev/disk/by-id/nvme-Samsung_SSD_960_EVO_250GB_S3ESNX0J444591D-part5",
                "/dev/disk/by-id/ata-ST4000DM005-2DP166_ZDH1KYP4-part3",
                "/dev/disk/by-path/pci-0000:01:00.0-nvme-1-part5",
                "/dev/disk/by-partuuid/73267321-1c7e-4510-86ab-6cd45bd38518",
                "/dev/block/8:3",
                "/dev/sda3",
                "/dev/disk/by-id/ata-ST4000DM005-2DP166_ZDH1KYP4-part1"
        ]
}

There is a configuration option called global_filter which whitelists (a) or blacklists (r) devices by regular expressions. It’s easy to understand; for me this is the correct value:

# grep global_filter /etc/lvm/lvm.conf
global_filter = [ "a|/dev/sda[13]|", "a|/dev/nvme0n1p5|", "r|.*|" ]

Let’s run VG and PV scan again to confirm it’s finding the devices correctly:

# vgscan
  Reading volume groups from cache.
  Found volume group "vg_home" using metadata type lvm2
  Found volume group "vg_virt" using metadata type lvm2

# pvscan 
  PV /dev/sda1        VG vg_home         lvm2 [<1024,00 GiB / 0    free]
  PV /dev/nvme0n1p5   VG vg_home         lvm2 [<149,69 GiB / 0    free]
  PV /dev/sda3        VG vg_virt         lvm2 [<1024,00 GiB / <226,00 GiB free]
  Total: 3 [<2,15 TiB] / in use: 3 [<2,15 TiB] / in no VG: 0 [0   ]

Let’s now delete the cache; this is a safe operation, do not worry:

rm /etc/lvm/cache/.cache
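If in your case the PV scan job runs from the initramfs during early boot, note that the initramfs carries its own copy of lvm.conf; regenerating it is an extra step not covered above, but makes sure the new filter is used there too:

# dracut -f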

On the next reboot, it should be fast enough every time. Special care needs to be taken:
  • During Fedora upgrades (to keep the lvm.conf change when doing rpmconf)
  • When adding new disks/devices to the LVM (it might not see them until the filter is adjusted)

Hopefully this blog post helps me get back here when I search “LVM does not see a block device”. Let’s see, cheers!

Converting fedmsg consumers to fedora-messaging

Posted by Adam Williamson on June 10, 2019 07:24 PM

So in case you hadn’t heard, the Fedora infrastructure team is currently trying to nudge people in the direction of moving from fedmsg to fedora-messaging.

Fedmsg is the Fedora project-wide messaging bus we’ve had since 2012. It backs FMN / Fedora Notifications and Badges, and is used extensively within Fedora infrastructure for the general purpose of “have this one system do something whenever this other system does something else”. For instance, openQA job scheduling and result reporting are both powered by fedmsg.

Over time, though, there have turned out to be a few issues with fedmsg. It has a few awkward design quirks, but most significantly, it’s designed such that message delivery can never be guaranteed. In practice it’s very reliable and messages almost always are delivered, but for building critical systems like Rawhide package gating, the infrastructure team decided we really needed a system where message delivery can be formally guaranteed.

There was initially an idea to build a sort of extension to fedmsg allowing for message delivery to be guaranteed, but in the end it was decided instead to replace fedmsg with a new AMQP-based system called fedora-messaging. At present both fedmsg and fedora-messaging are live and there are bridges in both directions: all messages published as fedmsgs are republished as fedora-messaging messages by a 0MQ->AMQP bridge, and all messages published as fedora-messaging messages are republished as fedmsgs by an AMQP->0MQ bridge. This is intended to ease the migration process by letting you migrate a publisher or consumer of fedmsgs to fedora-messaging at any time without worrying about whether the corresponding consumers and/or publishers have also been migrated.

This is just the sort of project I usually work on in the ‘quiet time’ after one release comes out and before the next one really kicks into high gear, so since Fedora 30 just came out, last week I started converting the openQA fedmsg consumers to fedora-messaging. Here’s a quick write-up of the process and some of the issues I found along the way!

I found these three pages in the fedora-messaging docs to be the most useful:

  1. Consumers
  2. Messages
  3. Configuration (especially the ‘consumer-config’ part)

Another important bit you might need are the sample config files for the production broker and stable broker.

All the fedmsg consumers I wrote followed this approach, where you essentially write consumer classes and register them as entry points in the project’s setup.py. Once the project is installed, the fedmsg-hub service provided by fedmsg runs all these registered consumers (as long as a configuration setting is set to turn them on).

This exact pattern does not exist in fedora-messaging – there is no hub service. But fedora-messaging does provide a somewhat-similar pattern which is the natural migration path for this type of consumer. In this approach you still have consumer classes, but instead of registering them as entry points, you write configuration files for them and place them in /etc/fedora-messaging. You can then run an instantiated systemd service that runs fedora-messaging consume with the configuration file you created.

So to put it all together with a specific example: to schedule openQA jobs, we had a fedmsg consumer class called OpenQAScheduler which was registered as a moksha.consumer called fedora_openqa.scheduler.prod in setup.py, and had a config_key named “fedora_openqa.scheduler.prod.enabled”. As long as a config file in /etc/fedmsg.d contained 'fedora_openqa.scheduler.prod.enabled': True, the fedmsg-hub service then ran this consumer. The consumer class itself defined what messages it would subscribe to, using its topic attribute.

In a fedora-messaging world, the OpenQAScheduler class is tweaked a bit to handle an AMQP-style message, and the entrypoint in setup.py and the config_key in the class are removed. Instead, we create a configuration file /etc/fedora-messaging/fedora_openqa_scheduler.toml and enable and start the fm-consumer@fedora_openqa_scheduler.service systemd service. Note that all the necessary bits for this are shipped in the fedora-messaging package, so you need that package installed on the system where the consumer will run.

That configuration file looks pretty much like the sample I put in the repository. This is based on the sample files I mentioned above.

The amqp_url specifies which AMQP broker to connect to and what username to use: in this sample we’re connecting to the production Fedora broker and using the public ‘fedora’ identity. The callback specifies the Python path to the consumer callback class (our OpenQAScheduler class). The [tls] section points to the CA certificate, certificate and private key to be used for authenticating with the broker: since we’re using the public ‘fedora’ identity, these are the files shipped in the fedora-messaging package itself which let you authenticate as that identity. For production use, I think the intent is that you request a separate identity from Fedora infra (who will generate certs and keys for it) and use that instead – so you’d change the amqp_url and the paths in the [tls] section appropriately.

The other key things you have to set are the queue name – which appears twice in the sample file as 00000000-0000-0000-0000-000000000000, for each consumer you are supposed to generate a random UUID with uuidgen and use that as the queue name, each consumer should have its own queue – and the routing_keys in the [[bindings]] section. Those are the topics the consumer will subscribe to – unlike in the fedmsg system, this is set in configuration rather than in the consumer class itself. Another thing you may wish to take advantage of is the consumer_config section: this is basically a freeform configuration store that the consumer class can read settings from. So you can have multiple configuration files that run the same consumer class but with different settings – you might well have different ‘production’ and ‘staging’ configurations. We do indeed use this for the openQA job scheduler consumer: we use a setting in this consumer_config section to specify the hostname of the openQA instance to connect to.

So, what needs changing in the actual consumer class itself? For me, there wasn’t a lot. For a start, the class should now just inherit from object – there is no base class for consumers in the fedora-messaging world, there’s no equivalent to fedmsg.consumers.FedmsgConsumer. You can remove things like the topic attribute (that’s now set in configuration) and validate_signatures. You may want to set up a __init__, which is a good place to read in settings from consumer_config and set up a logger (more on logging in a bit). The method for actually reading a message should be named __call__() (so yes, fedora-messaging just calls the consumer instance itself on the message, rather than explicitly calling one of its methods). And the message object itself the method receives is slightly different: it will be an instance of fedora_messaging.api.Message or a subclass of it, not just a dict. The topic, body and other bits of the message are available as attributes, not dict items. So instead of message['topic'], you’d use message.topic. The message body is message.body.
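Putting those changes together, the skeleton of a migrated consumer looks roughly like this (a sketch with illustrative names, not the actual openQA code):

import logging

from fedora_messaging import config


class OpenQAScheduler(object):
    def __init__(self):
        # per-class logger, so log lines identify the consumer
        self.logger = logging.getLogger(self.__class__.__name__)
        # read a setting from the consumer_config section of the TOML file;
        # 'openqa_hostname' is a made-up key for illustration
        self.openqa_hostname = config.conf["consumer_config"].get("openqa_hostname")

    def __call__(self, message):
        # fedora-messaging calls the consumer instance itself on each message
        self.logger.info("Received a message on topic %s", message.topic)
        # ...schedule openQA jobs based on message.body...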

Here I ran into a significant wrinkle. If you’re consuming a native fedora-messaging message, the message.body will be the actual body of the message. However, if you’re consuming a message that was published as a fedmsg and has been republished by the fedmsg->fedora-messaging bridge, message.body won’t be what you’d probably expect. Looking at an example fedmsg, we’d probably expect the message.body of the converted fedora-messaging message to be just the msg dict, right? Just a dict with keys repo and agent. However, at present, the bridge actually publishes the entire fedmsg as the message.body – what you get as message.body is that whole dict. To get to the ‘true’ body, you have to take message.body['msg']. This is a problem because whenever the publisher is converted to fedora-messaging, there won’t be a message.body['msg'] any more, and your consumer will likely break. It seems that the bridge’s behavior here will likely be changed soon, but for now, this is a bit of a problem.

Once I figured this out, I wrote a little helper function called _find_true_body to fudge around this issue. You are welcome to steal it for your own use if you like. It should always find the ‘true’ body of any message your consumer receives, whether it’s native or converted, and it will work when the bridge is fixed in future too so you won’t need to update your consumer when that happens (though later on down the road it’ll be safe to just get rid of the function and use message.body directly).
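The gist of the helper is a simple check (a simplified sketch; the real function lives in the repo linked above):

def _find_true_body(message):
    # a message republished by the fedmsg->AMQP bridge carries the whole
    # fedmsg dict, so the real payload is nested under 'msg'
    body = message.body
    if "msg_id" in body and "msg" in body:
        body = body["msg"]
    return body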

Those things, plus rejigging the logging a bit, were all I needed to do to convert my consumers – it wasn’t really that much work in the end.

To dig into logging a bit more: fedmsg consumer class instances had a log() method you could use to send log messages, you didn’t have to set up your own logging infrastructure. (Although a problem of this system was that it gave no indication which consumer a log message came from). fedora-messaging does not have this. If you want a consumer to log, you have to set up the logging infrastructure within the consumer, and tweak the configuration file a bit.

The pattern I chose was to import logging and then init a logger instance for each consumer class in its __init__(), like this:

self.logger = logging.getLogger(self.__class__.__name__)

Then you can log messages with self.logger.info("message") or whatever. I thought that would be all I’d need, but actually, if you just do that, there’s nothing set up to actually receive the messages and log them anywhere. So you have to add a bit to the TOML config file that looks like this:

[log_config.loggers.OpenQAScheduler]
level = "INFO"
propagate = false
handlers = ["console"]

the OpenQAScheduler there is the class name; change it to the actual name of the consumer class. That will have the messages logged to the console, which – when you run the consumer as a systemd service – means they wind up in the system journal, which was enough for me. You can also configure a handler to send email alerts, for instance, if you like – you can see an example of this in Bodhi’s config file.

One other wrinkle I ran into was with authenticating to the staging broker. The sample configuration file has the right URL and [tls] section for this, but the files referenced in the [tls] section aren’t actually in the fedora-messaging package. To successfully connect to the staging broker, as fedora.stg, you need to grab the necessary files from the fedora-messaging git repo and place them into /etc/fedora-messaging.

To see the whole of the changes I had to make to the openQA consumers, you can look at the commits on the fedora-messaging branch of the repo and also this set of commits to the Fedora infra ansible repo.

Securing Linux with Ansible

Posted by Christopher Smart on June 10, 2019 11:25 AM

The Ansible Hardening role from the OpenStack project is a great way to secure Linux boxes in a reliable, repeatable and customisable manner.

It was created by a former colleague of mine, Major Hayden, and while it was spun out of OpenStack, it can be applied generally to a number of the major Linux distros (including Fedora, RHEL, CentOS, Debian, SUSE).

The role is based on the Security Technical Implementation Guide (STIG) for RHEL out of the United States, which provides recommendations on how best to secure a host and the services it runs (category one for highly sensitive systems, two for medium and three for low). This is similar to the Information Security Manual (ISM) we have in Australia, although the STIG is more explicit.

Rules and customisation

There is deviation from the STIG recommendations and it is probably a good idea to read the documentation about what is offered and how it’s implemented. To avoid unwanted breakages, many of the controls are opt-in with variables to enable and disable particular features (see defaults/main.yml).

You probably do not want to blindly enable everything without understanding the consequences. For example, Kerberos support in SSH will be disabled by default (via the “security_sshd_disable_kerberos_auth: yes” variable) as per V-72261, so this might break access if you rely on it.

Other features also require values to be set. For example, V-71925 of the STIG recommends passwords for new users be restricted to a minimum lifetime of 24 hours. This is not enabled by default in the Hardening role (central systems like LDAP are recommended instead), but it can be enabled by setting the following variable for any hosts you want it set on.

security_password_min_lifetime_days: 1

In addition, not all controls are available for all distributions.

For example, V-71995 of the STIG requires umask to be set to 077, however the role does not currently implement this for RHEL based distros.

Run a playbook

To use this role you need to get the code itself, using either Ansible Galaxy or Git directly. Ansible will look in the ~/.ansible/roles/ location by default and find the role, so that makes a convenient spot to clone the repo to.

mkdir -p ~/.ansible/roles
git clone https://github.com/openstack/ansible-hardening \
~/.ansible/roles/ansible-hardening

Next, create an Ansible play which will make use of the role. This is where we will set variables to enable or disable specific controls for hosts which are run using the play. For example, if you’re using a graphical desktop, then you will want to make sure X.Org is not removed (see below). Include any other variables you want to set from the defaults/main.yml file.

cat > play.yml << EOF
---
- name: Harden all systems
  hosts: all
  become: yes
  vars:
    security_rhel7_remove_xorg: no
    security_ntp_servers:
      - ntp.internode.on.net
  roles:
    - ansible-hardening
EOF

Now we can run our play! Ansible uses an inventory of hosts, but we’ll just run this against localhost directly (with the options -i localhost, -c local). It’s probably a good idea to run it with the --check option first, which will not actually make any changes.

If you’re running in Fedora, make sure you also set Python3 as the interpreter.

ansible-playbook -i localhost, -c local \
-e ansible_python_interpreter=/usr/bin/python3 \
--ask-become-pass \
--check \
./play.yml

This will run through the role, executing all of the default tasks while including or excluding others based on the variables in your play.

Running specific sets of controls

If you only want to run a limited set of controls, you can do so by running the play with the relevant --tags option. You can also exclude specific tasks with the --skip-tags option. Note that there are a number of required tasks with the always tag which will be run regardless.

To see all the available tags, run your playbook with the --list-tags option.

ansible-playbook --list-tags ./play.yml

For example, if you want to only run the dozen or so Category III controls you can do so with the low tag (don’t forget that some tasks may still need enabling if you want to run them, and that the always tagged tasks will still be run). Combine tags by comma separating them; so to also run a specific control like V-72057, or controls related to SSH, just add them alongside low.

ansible-playbook -i localhost, -c local \
-e ansible_python_interpreter=/usr/bin/python3 \
--ask-become-pass \
--check \
--tags low,sshd,V-72057 \
./play.yml

Or if you prefer, you can just run everything except a specific set. For example, to exclude Category I controls, skip the high tag. You can also combine the --tags and --skip-tags options.

ansible-playbook -i localhost, -c local \
-e ansible_python_interpreter=/usr/bin/python3 \
--ask-become-pass \
--check \
--tags sshd,V-72057 \
--skip-tags high \
./play.yml

Once you’re happy, don’t forget to remove the --check option to apply the changes.

Applications for writing Markdown

Posted by Fedora Magazine on June 10, 2019 08:56 AM

Markdown is a lightweight markup language that is useful for adding formatting while still maintaining readability when viewed as plain text. Markdown (and Markdown derivatives) is used extensively as the primary form of markup for documents on services like GitHub and pagure. By design, Markdown is easily created and edited in a text editor; however, there is a multitude of editors available that provide a formatted preview of Markdown markup and/or highlight the Markdown syntax.

This article covers 3 desktop applications for Fedora Workstation that help out when editing Markdown.

UberWriter

UberWriter is a minimal Markdown editor and previewer that allows you to edit in text, and preview the rendered document.


The editor itself has inline previews built in, so text marked up as bold is displayed bold. The editor also provides inline previews for images, formulas, footnotes, and more. Ctrl-clicking one of these items in the markup causes an instant preview of that element to appear.

In addition to the editor features, UberWriter also features a full screen mode and a focus mode to help minimise distractions. Focus mode greys out all but the current paragraph to help you focus on that element in your document.

Install UberWriter on Fedora from the 3rd-party Flathub repositories. It can be installed directly from the Software application after setting up your system to install from Flathub.
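If you prefer the command line, installation looks something like this once Flathub is set up (the application ID shown here is the one UberWriter uses on Flathub at the time of writing, and may change):

flatpak install flathub de.wolfvollprecht.UberWriter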

Marker

Marker is a Markdown editor that provides a simple text editor to write Markdown in, and provides a live preview of the rendered document. The interface is designed with a split screen layout with the editor on the left, and the live preview on the right.


Additionally, Marker allows you to export your document in a range of different formats, including HTML, PDF, and the Open Document Format (ODF).

Install Marker on Fedora from the 3rd-party Flathub repositories. It can be installed directly from the Software application after setting up your system to install from Flathub.

Ghostwriter

Where the previous editors are more focussed on a minimal user experience, Ghostwriter provides many more features and options to play with. Ghostwriter provides a text editor that is partially styled as you write in Markdown format. Bold text is bold, and headings are in a larger font to assist in writing the markup.


It also provides a split screen with a live updating preview of the rendered document.


Ghostwriter also includes a range of other features, including the ability to choose the Markdown flavour that the preview is rendered in, as well as the stylesheet used to render the preview.

Additionally, it provides a format menu (and keyboard shortcuts) to insert some of the frequent markdown ‘tags’ like bold, bullets, and italics.

Install Ghostwriter on Fedora from the 3rd-party Flathub repositories. It can be installed directly from the Software application after setting up your system to install from Flathub.

State of the Community Platform Engineering team

Posted by Fedora Community Blog on June 10, 2019 07:41 AM

About two years ago the Fedora Engineering team merged with the CentOS Engineering team to form what is now called the Community Platform Engineering (CPE) team. For the team members, the day to day work did not change much.

The members working on Fedora are still fully dedicated to work on the Fedora Project, and those working on CentOS are still fully dedicated to CentOS. On both projects its members are involved in infrastructure, release engineering, and design. However, it brought the two infrastructures and teams closer to each other, allowing for more collaboration between them.

There are 20 people on this consolidated team. The breakdown looks like this:

  • In Fedora:
    • 3 dedicated system administrators
    • 5 dedicated developers
    • 1 doing both development and system administration
    • 1 doing both release engineering and system administration
    • 1 person dedicated to Fedora CoreOS
    • 2 release engineers
    • 1 person dedicated to documentation
    • 1 designer
  • In CentOS:
    • 1 system administrator
    • 2 doing both development and system administration
    • 1 dedicated to the build systems
  • There is also one additional person working on projects internal to Red Hat

So as you can see, the CPE team itself is composed of 19 people working on Fedora or CentOS, of which only 7 are system administrators and 7 are dedicated developers. There are no dedicated database administrators and no dedicated network engineers, even though most of the tools use a database as a backend, need sophisticated network tools for clustering, or both.

This team was under the supervision of a single manager, Jim Perrin, last year. But the team is too big for a single manager, so, earlier this year the team got an additional manager, Leigh Griffin.

Leigh is new to Fedora and CentOS, so he started by looking to see what services/applications we are running. The outcome of this research was quite impressive:

This team of 19 people is maintaining 112 services!

It also maintains 590 physical machines (140 for Fedora, 450 for CentOS) and 516 virtual machines (486 for Fedora, 30 for CentOS).

As you can imagine, this means we are quite swamped and do not have many cycles to take up new things (technology stack, applications, onboarding…). In addition, developers are split across multiple applications. This creates a situation where they often work alone, with little cross-knowledge and many single points of failure. Finally, we have to acknowledge that the number of people required to properly maintain all of these services has grown much faster than our ability to make the team grow.

So, in order for this team to improve, have fewer points of failure, increase reliability, be a better upstream for the applications we maintain, and maximize the value we bring to our communities, we need to change how we work.

This week (June 10th to June 14th), the CPE team is meeting, face to face, to discuss what we can change about the way we work, and how. In the next article we will share the outcome of these discussions.

The post State of the Community Platform Engineering team appeared first on Fedora Community Blog.

Fedora Update Week 20--22

Posted by Elliott Sales de Andrade on June 10, 2019 05:03 AM
Oops, again a bit late, but the past two weekends were fairly busy. I decided to post this today so that it wouldn’t slip another full week. So this probably looks a bit larger than usual, but I hope I didn’t miss anything. Two weeks ago was rather busy with many updates. Not just new releases, but I also spent a little time going over old updates that I’ve missed and ignored due to missing dependencies.

Episode 149 - Chat with Michael Coates about data security

Posted by Open Source Security Podcast on June 10, 2019 12:01 AM
Josh and Kurt have a chat with Michael Coates from Altitude Networks. We cover what Altitude is up to as well as general trends we're seeing around data security in the cloud. Michael lays out his vision for "data first security".



Show Notes


    On Ubuntu Updates

    Posted by Michael Catanzaro on June 09, 2019 09:20 PM

    I’d been planning to announce that Ubuntu has updated Epiphany appropriately in Ubuntu 18.04, 19.04, and its snap packaging, but it seems I took too long and Sebastien has beaten me to that. So thank you very much, Sebastien! And also to Ken and Marcus, for helping with the snap update. I believe I owe you, and also Iain, an apology. My last blog post was not very friendly, and writing unfriendly blog posts is not a good way to promote a healthy community. That wasn’t good for GNOME or for Ubuntu.

    Still, I was rather surprised by some of the negative reaction to my last post. I took it for granted that readers would understand why I was frustrated, but apparently more explanation is required.

    We’re Only Talking about Micro-point Updates!

    Some readers complained that stable operating systems should not take updates from upstream, because they could  introduce new bugs. Well, I certainly don’t expect stable operating systems to upgrade to new major release versions. For instance, I wouldn’t expect Ubuntu 18.04, which released with GNOME 3.28, to upgrade packages to GNOME 3.32. That would indeed defeat the goal of providing a stable system to users. We are only talking about micro version updates here, from 3.28.0 to 3.28.1, or 3.28.2, or 3.28.3, etc. These updates generally contain only bugfixes, so the risk of regressions is relatively low. (In exceptional circumstances, new features may be added in point releases, but such occurrences are very rare and carefully-considered; the only one I can think of recently was Media Source Extensions.) That doesn’t mean there are never any regressions, but the number of regressions introduced relative to the number of other bugs fixed should be very small. Sometimes the bugs fixed are quite serious, so stable release updates are essential to providing a quality user experience. Epiphany stable releases usually contain (a) fixes for regressions introduced by the previous major release, and (b) fixes for crashes.

    Other readers complained that it’s my fault for releasing software with  bugs in the first place, so I shouldn’t expect operating system updates to fix the bugs. Well, the first point is clearly true, but the second doesn’t follow at all. Expecting free software to be perfect and never make any bad releases is simply unreasonable. The only way to fix problems when they occur is with a software update. GNOME developers try to ensure stable branches remain stable and reliable, so operating systems packaging GNOME can have high confidence in our micro-point releases, even though we are not perfect and cannot expect to never make a mistake. This process works very well in other Linux-based operating systems, like Fedora Workstation.

    How Did We Get Here?

    The lack of stable release updates for GNOME in Ubuntu has been a serious ongoing problem for most of the past decade, across all packages, not just Epiphany. (Well, probably for much longer than a decade, but my first Ubuntu was 11.10, and I don’t claim to remember how it was before that time.) Look at this comment I wrote on an xscreensaver blog post in 2016, back when I had already been fed up for a long time:

    Last week I got a bug report from a Mint user, complaining about a major, game-breaking bug in a little GNOME desktop game that was fixed two and a half years ago. The user only needed a bugfix-only point release upgrade (from the latest Mint version x.y.z to ancient version x.y.z+1) to get the fix. This upgrade would have fixed multiple major issues.

    I would say the Mint developers are not even trying, but they actually just inherit this mess from Ubuntu.

    So this isn’t just a problem for Ubuntu, but also for every OS based on Ubuntu, including Linux Mint and elementary OS. Now, the game in question way back when was Iagno. Going back to find that old bug, we see the user must have been using Iagno 3.8.2, the version packaged in Ubuntu 14.04 (and therefore the version available in Linux Mint at the time), even though 3.8.3, which fixed the bug, had been available for over two years at that point. We see that I left dissatisfied yet entirely-appropriate comments on Bugzilla, like “I hate to be a distro crusader, but if you use Linux Mint then you are gonna have to live with ancient bugs.”

    So this has been a problem for a very long time.

    Hello 2019!

    But today is 2019. Ubuntu 14.04 is ancient history, and a little game like Iagno is hardly a particularly-important piece of desktop software anyway. Water under the bridge, right? It’d be more interesting to look at what’s going on today, rather than one specific example of a problem from years ago. So, checking the state of a few different packages in Ubuntu 19.04 as of Friday, June 7, I found:

    • gnome-shell 3.32.1 update released to Ubuntu 19.04 users on June 3, while 3.32.2 was released upstream on May 14
    • mutter 3.32.1 update released to Ubuntu 19.04 users on June 3, while 3.32.2 was released upstream on May 14 (same as gnome-shell)
    • glib 2.60.0 never updated in Ubuntu 19.04, while 2.60.1 was released upstream on April 15, and 2.60.3 is the current stable version
    • glib-networking 2.60.1 never updated in Ubuntu 19.04, while I released 2.60.2 on May 2
    • libsoup 2.66.1 never updated in Ubuntu 19.04, while 2.66.2 was released upstream on May 15

    (Update: Sebastien points out that Ubuntu 19.04 shipped with git snapshots of gnome-shell and mutter very close to 3.32.1 due to release schedule constraints, which was surely a reasonable approach given the tight schedule involved. Of course, 3.32.2 is still available now.)

    I also checked gnome-settings-daemon, gnome-session, and gdm. All of these are up-to-date in 19.04, but it turns out that there have not been any releases for these components since 3.32.0. So 5/8 of the packages I checked are currently outdated, and the three that aren’t had no new versions released since the original 19.04 release date. Now, eight packages is a small and very unscientific review — I haven’t looked at any packages other than the few listed here — but I think you’ll agree this is not a good showing. I leave it as an exercise for the reader to check more packages and see if you find similar results. (You will.)

    Of course, I don’t expect all packages to be updated immediately. It’s reasonable to delay updates by a couple weeks, to allow time for testing. But that’s clearly not what’s happening here. (Update #2: Marco points out that Ubuntu is not shipping gnome-shell and mutter 3.32.2 yet due to specific regressions. So actually, I’m wrong and allowing time for testing is exactly what’s happening here, in these particular cases. Surprise! So let’s not count outdated gnome-shell and mutter against Ubuntu, and say 3/8 of the packages are old instead of 5/8. Still not great results, though.)

    Having outdated dependencies like GLib 2.60.0 instead of 2.60.3 can cause just as serious problems as outdated applications: in Epiphany’s case, there are multiple fixes for name resolution problems introduced since GLib 2.58 that are missing from the GLib 2.60.0 release. When you use an operating system that provides regular, comprehensive stable release updates, like Fedora Workstation, you can be highly confident that you will receive such fixes in a timely manner, but no such confidence is available for Ubuntu users, nor for users of operating systems derived from Ubuntu.

    So Epiphany and Iagno are hardly isolated examples, and these are hardly recent problems. They’re widespread and longstanding issues with Ubuntu packaging.

    Upstream Release Monitoring is Essential

    Performing some one-time package updates is (usually) easy. Now that the Epiphany packages are updated, the question becomes: will they remain updated in Ubuntu going forward? Previously, I had every reason to believe they would not. But for the first time, I am now cautiously optimistic. Look at what Sebastien wrote in his recent post:

    Also while we have tools to track available updates, our reports are currently only for the active distro and not stable series which is a gap and leads us sometime to miss some updates.
    I’ve now hacked up a stable report and reviewed the current output and we will work on updating a few components that are currently outdated as a result.

    It’s no wonder that you can’t reliably provide stable release updates without upstream release monitoring. How can you provide an update if you don’t know that the update is available? It’s too hard for humans to manually keep track of hundreds of packages, especially with limited developer resources, so quality operating systems have an automated process for upstream release monitoring to notify them when updates are available. In Fedora, we use https://release-monitoring.org/ for most packages, which is an easy solution available for other operating systems to use. Without appropriate tooling, offering updates in a timely manner is impractical.

    So now that Sebastien has a tool to check for outdated GNOME packages, we can hope the situation might improve. Let’s hope it does. It would be nice to see a future where Ubuntu users receive quality, stable software updates.

    Dare to Not Package?

    Now, I have no complaints with well-maintained, updated OS packages. The current state of Epiphany updates in Ubuntu is (almost) satisfactory to me (with one major caveat, discussed below). But outdated OS packages are extremely harmful. My post two weeks ago was a sincere request to remove the Epiphany packages from Ubuntu, because they were doing much more harm than good, and, due to extreme lack of trust built up over the course of the past decade, I didn’t trust Ubuntu to fix the problem and keep it fixed. (I am still only “cautiously optimistic” that things might improve, after all: not at all confident.) Bugs that we fixed upstream long ago lingered in the Ubuntu packages, causing our project serious reputational harm. If I could choose between outdated packages and no packages at all, there’s no question that I would greatly prefer the latter.

    As long as operating system packages are kept up-to-date — with the latest micro-point release corresponding to the system’s minor GNOME version — then I don’t mind packages. Conscientiously-maintained operating system packages are fine by me. But only if they are conscientiously-maintained and kept up-to-date!

    Not packaging would not be a horrible fate. It would be just fine. The future of Linux application distribution is Flatpak (or, less-likely, snap), and I really don’t mind if we get there sooner rather than later.

    Regarding OpenJPEG

    We have one more issue with Ubuntu’s packaging left unresolved: OpenJPEG. No amount of software updates will fix Epiphany in Ubuntu if it isn’t web-compatible, and to be web-compatible it needs to display JPEG 2000 images. As long as we have Safari without Chromium in our user agent, we have to display JPEG 2000 images, because, sadly, JPEG 2000 is no longer optional for web compatibility. And we cannot change our user agent because that, too, would break web compatibility. We attempted to use user agent quirks only for websites that served JPEG 2000 images, but quickly discovered it was entirely impractical. The only practical way to avoid the requirement to support JPEG 2000 is to give up on WebKit altogether and become yet another Chromium-based browser. Not today!

    Some readers complained that we are at fault for releasing a web browser that depends on OpenJPEG, as if this makes us bad or irresponsible developers. Some of the comments were even surprisingly offensive. Reality is: we have no other options. Zero. The two JPEG 2000 rendering libraries are libjasper and OpenJPEG. libjasper has been removed from both Debian and Ubuntu because it is no longer maintained. That leaves OpenJPEG. Either we use OpenJPEG, or we write our own JPEG 2000 image decoder. We don’t have the resources to do that, so OpenJPEG it is. We also don’t have the resources to fix all the code quality bugs that exist in OpenJPEG. Firefox and Chrome are certainly not going to help us, because they are big enough that they don’t need to support JPEG 2000 at all. So instead, we’ve devoted resources to sandboxing WebKit with bubblewrap. This will mitigate the damage potential from OpenJPEG exploits. Once the sandbox is enabled — which we hope to be ready for WebKitGTK 2.26 — then an OpenJPEG exploit will be minimally-useful unless combined with a bubblewrap sandbox escape. bubblewrap is amazing technology, and I’m confident this was the best choice of where to devote our resources. (Update: To clarify, the bubblewrap sandbox is for the entire web process, not just the OpenJPEG decoder.)

    Of course, it would be good to improve OpenJPEG. I repeat my previous call for assistance with the OpenJPEG code quality issues reported by Ubuntu, but as before, I only expect to hear crickets.

    So unfortunately, we’re not yet at a point where I’m comfortable with Epiphany’s Ubuntu packaging. (Well, the problem is actually in the WebKit packaging. Details.) I insist: distributing Epiphany without support for JPEG 2000 images is harmful and makes Epiphany look bad. Please, Ubuntu, we need you to either build WebKit with OpenJPEG enabled, or else just drop your Epiphany packages entirely, one or the other. Whichever you choose will make me happy. Please don’t accept the status quo!

    WOGUE is no friend of GNOME

    Posted by Richard Hughes on June 09, 2019 08:18 PM

    Alex Diavatis is the person behind the WOGUE account on YouTube. For a while he’s been posting videos about GNOME. I think the latest idea is that he’s trying to “shame” developers into working harder. Speaking as the person who’s once again on the other end of his rants, it’s having the opposite effect.

    We’re all doing our best, and I’m personally balancing about a dozen different plates trying to keep them all spinning. If any of the plates fall on the floor, perhaps helping with triaging bugs, fixing little niggles or just saying something positive might be a good idea. In fact, saying nothing would be better than the sarcasm and making silly videos.

    [Howto] Using Ansible and Ansible Tower with shared Roles

    Posted by Roland Wolters on June 07, 2019 07:32 PM

    Roles are a neat way in Ansible to make playbooks and everything related to them re-usable. If used with Tower, they can be even more powerful.

    (I published this post originally at ansible.com/blog.)

    Roles are an essential part of Ansible, and help in structuring your automation content. The idea is to have clearly defined roles for dedicated tasks. In your automation code, the roles are called from your Ansible Playbooks.

    Since roles usually have a well defined purpose, they make it easy to reuse your code for yourself, but also in your team. And you can even share roles with the global community. In fact, the Ansible community created Ansible Galaxy as a central place to display, search and view Ansible roles from thousands of people.

    So what does a role look like? Basically it is a predefined structure of folders and files to hold your automation code. There is a folder for your templates, a folder to keep files with tasks, one for handlers, another one for your default variables, and so on:

    tasks/ 
    handlers/ 
    files/ 
    templates/ 
    vars/ 
    defaults/ 
    meta/

    In folders which contain Ansible code – like tasks, handlers, vars, defaults – there are main.yml files. Those contain the relevant Ansible bits. In case of the tasks directory, they often include other yaml files within the same directory. Roles even provide ways to test your automation code – in an automated fashion, of course.

    This post will show how roles can be shared with others, be used in your projects and how this works with Red Hat Ansible Tower.

    Share Roles via Repositories

    Roles can be part of your project repository. They usually sit underneath a dedicated roles/ directory. But keeping roles in your own repository makes it hard to share them with others, to be reused and improved by them. If someone works on a different team, or on a different project, they might not have access to your repository – or they may use their own anyway. So even if you send them a copy of your role, they could add it to their own repository, making it hard to exchange improvements, bug fixes and changes across totally different repositories.

    For that reason, a better way is to keep a role in its own repository. That way it can be easily shared and improved. However, to be available to a playbook, the role still needs to be included. Technically there are multiple ways to do that.

    For example there can be a global roles directory outside your project where all roles are kept. This can be referenced in ansible.cfg. However, this requires that all developer setups and also the environment in which the automation is finally executed have the same global directory structure. This is not very practical.
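    For illustration, such a global directory would be referenced roughly like this in ansible.cfg (the paths shown are just examples):

    [defaults]
    roles_path = /opt/ansible/roles:~/.ansible/roles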

    When Git is used as the version control system, there is also the possibility of importing roles from other repositories via Git submodules, or even using Git subtrees. However, this requires quite some knowledge about advanced Git features by each and everyone using it – so it is far from simple.

    The best way to make shared roles available to your playbooks is to use a function built into Ansible itself: the ansible-galaxy command can read a file specifying which external roles need to be imported for a successful Ansible run: requirements.yml. It lists external roles and their sources. If needed, it can also point to a specific version:

    # from GitHub
    - src: https://github.com/bennojoy/nginx
    # from GitHub, overriding the name and specifying a tag
    - src: https://github.com/bennojoy/nginx
      version: master
      name: nginx_role
    # from Bitbucket
    - src: git+http://bitbucket.org/willthames/git-ansible-galaxy
      version: v1.4
    # from Galaxy
    - src: yatesr.timezone

    The file can be used via the command ansible-galaxy. It reads the file and downloads all specified roles to the appropriate path:

    ansible-galaxy install -r roles/requirements.yml 
    - extracting nginx to /home/rwolters/ansible/roles/nginx 
    - nginx was installed successfully 
    - extracting nginx_role to 
    /home/rwolters/ansible/roles/nginx_role 
    - nginx_role (master) was installed successfully 
    ...

    The output also highlights when a specific version was downloaded. You will find a copy of each role in your roles/ directory – so make sure that you do not accidentally add the downloaded roles to your repository! The best option is to add them to the .gitignore file.

    This way, roles can be imported into the project and are available to all playbooks while they are still shared via a central repository. Changes to the role need to be made in the dedicated repository – which ensures that no careless, project-specific changes are made to the role.

    At the same time, the version attribute in requirements.yml ensures that the used role can be pinned to a certain release tag value, commit hash, or branch name. This is useful in case the development of a role is quickly moving forward, but your project has longer development cycles.

    Using Roles in Ansible Tower

    If you use automation on larger, enterprise scales you most likely will start using Ansible Tower sooner or later. So how do roles work with Ansible Tower? In fact – just like mentioned above. Each time Ansible Tower checks out a project it looks for a roles/requirements.yml. If such a file is present, a new version of each listed role is copied to the local checkout of the project and thus available to the relevant playbooks.

    That way shared roles can easily be reused in Ansible Tower – it is built in right from the start!

    Best Practices and Things to Keep in Mind

    There are a few best practices around sharing of Ansible roles that make your life easier. The first is the naming and location of the roles directory. While it is possible to name the directory any way via the roles_path in ansible.cfg, we strongly recommend sticking to the directory name roles, sitting in the root of your project directory. Do not choose another name for it or move it to some subdirectory.

    The same is true for requirements.yml: have one requirements.yml only, and keep it at roles/requirements.yml. While it is technically possible to have multiple files and spread them across your project, this will not work when the project is imported into Ansible Tower.

    Also, if the roles are not only shared among multiple users, but are also developed with others or not by you at all, it might make sense to pin the role to the actual commit you’ve tested your setup against. That way you will avoid unwanted changes in the role behaviour.

    More Information

    Find, reuse, and share the best Ansible content on Ansible Galaxy.

    Learn more about roles on Ansible Docs.


    FPgM report: 2019-23

    Posted by Fedora Community Blog on June 07, 2019 06:53 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week. Elections voting is underway!

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

    Announcements

    Help wanted

    • Help with Flock tasks is appreciated. Contact bex to be added to the board.

    Upcoming meetings

    Fedora 31 Status

    Changes

    Submitted to FESCo

    Approved by FESCo

    The post FPgM report: 2019-23 appeared first on Fedora Community Blog.

    PHPUnit 8.2

    Posted by Remi Collet on June 07, 2019 08:30 AM

    RPMs of PHPUnit version 8.2 are available in the remi repository for Fedora ≥ 27 and for Enterprise Linux (CentOS, RHEL...).

    Documentation:

    This new major version requires PHP ≥ 7.2 and is not backward compatible with previous versions, so the package is designed to be installed beside versions 5, 6 and 7.

    Installation, Fedora and Enterprise Linux 8:

    dnf --enablerepo=remi install phpunit8

    Installation, Enterprise Linux 6 and 7:

    yum --enablerepo=remi install phpunit8

    Notice: this tool is an essential component of PHP QA in Fedora. This version will soon be available in Fedora ≥ 29 after the review of php-sebastian-type.

    Contribute to Fedora Magazine

    Posted by Fedora Magazine on June 07, 2019 08:00 AM

    Do you want to share a piece of Fedora news for the general public? Have a good idea for how to do something using Fedora? Do you or someone you know use Fedora in an interesting way?

    We’re always looking for new contributors to write awesome, relevant content. The Magazine is run by the Fedora community — and that’s all of us. You can help too! It’s really easy. Read on to find out how.


    What content do we need?

    Glad you asked. We often feature material for desktop users, since there are many of them out there! But that’s not all we publish. We want the Magazine to feature lots of different content for the general public.

    Sysadmins and power users

    We love to publish articles for system administrators and power users who dive under the hood. Here are some recent examples:

    Developers

    We don’t forget about developers, either. We want to help people use Fedora to build and make incredible things. Here are some recent articles focusing on developers:

    Interviews, projects, and links

    We also feature interviews with people using Fedora in interesting ways. We even link to other useful content about Fedora. We’ve run interviews recently with people using Fedora to increase security, administer infrastructure, or give back to the community. You can help here, too — it’s as simple as exchanging some email and working with our helpful staff.

    How do I get started?

    It’s easy to start writing for Fedora Magazine! You just need to have decent skill in written English, since that’s the language in which we publish. Our editors can help polish your work for maximum impact.

    Follow this easy process to get involved.

    The Magazine team will guide you through getting started. The team also hangs out on #fedora-mktg on Freenode. Drop by, and we can help you get started.


    Image courtesy Dustin Lee – originally posted to Unsplash as Untitled

    Configuring Eaton 3S UPS with Fedora 30

    Posted by Lukas "lzap" Zapletal on June 07, 2019 12:00 AM


    First off, make sure your UPS is connected. On my system, I had to use a different USB port as it was disconnecting regularly for some reason - probably USB power issues, which is funny since this is a power device:

    # dmesg
    # lsusb
    

    Install and configure nut software:

    # dnf -y install nut
    
    # grep ^MODE /etc/ups/nut.conf
    MODE=standalone
    
    # grep -v '^#' /etc/ups/ups.conf
    [eaton3s]
    driver=usbhid-ups
    port=auto
    
    # grep -v '^#' /etc/ups/upsmon.conf | egrep -v '^$'
    MONITOR eaton3s@localhost 1 monuser pass master
    MINSUPPLIES 1
    SHUTDOWNCMD "/sbin/shutdown -h +0"
    POLLFREQ 5
    POLLFREQALERT 5
    HOSTSYNC 15
    DEADTIME 15
    POWERDOWNFLAG /etc/killpower
    NOTIFYFLAG ONBATT SYSLOG+WALL+EXEC
    NOTIFYFLAG ONLINE SYSLOG+WALL+EXEC
    NOTIFYCMD "/etc/ups/shutdown-script"
    RBWARNTIME 43200
    NOCOMMWARNTIME 300
    FINALDELAY 5
    
    # grep -v '^#' /etc/ups/upsd.users | egrep -v '^$'
    [monuser]
    password=pass
    upsmon master
    

    At this point, UPS should respond via USB which you can check with:

    # usbhid-ups -DDD -a eaton3s
    

    Make sure to replace “pass” with some password in both files. Now, the power outage script is where you can implement what you want. In my case, I want to see a desktop notification every 10 seconds, and after 3 minutes I want my system to power off:

    # cat /etc/ups/shutdown-script
    #!/bin/bash
    PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/sbin:/usr/local/bin
    
    trap "exit 0" SIGTERM
    
    notify() {
            notify-send -u $2 "$0: $1"
    }
    
    if [ "$NOTIFYTYPE" = "ONLINE" ]; then
            notify "power restored" critical
            killall -s SIGTERM `basename $0`
    fi
    
    if [ "$NOTIFYTYPE" = "ONBATT" ]; then
            notify "3 minutes until shutdown" critical
            let "n = 18"
            while [ $n -ne 0 ]
            do
                    sleep 10
                    let "n--"
                    notify "$(( $n * 10 )) seconds until shutdown" low
            done
            notify "commencing shutdown" critical
            upsmon -c fsd
    fi
    

    On server systems, you want to replace notify-send with probably something like echo $0: $1 | wall to notify console users.
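    For example, a minimal console-only variant of the notify() function might look like this (a sketch; it simply ignores the urgency argument that notify-send used):

    notify() {
            echo "$0: $1" | wall
    }

    Make sure the script is executable: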

    # chmod +x /etc/ups/shutdown-script
    

    Start the services; note that one of the three services (nut-driver) is started automatically as a dependency, so you do not need to start it yourself (systemd would complain anyway):

    # systemctl start nut-monitor nut-server
    # systemctl enable nut-driver nut-monitor nut-server
    # systemctl status nut-driver nut-monitor nut-server
    

    To display battery remaining capacity and other details you can use this command line tool:

    # upsc eaton3s
    Init SSL without certificate database
    battery.charge: 96
    battery.charge.low: 20
    battery.runtime: 672
    battery.type: PbAc
    device.mfr: EATON
    device.model: Eaton 3S 550
    device.serial: 000000000
    device.type: ups
    driver.name: usbhid-ups
    driver.parameter.pollfreq: 30
    driver.parameter.pollinterval: 2
    driver.parameter.port: auto
    driver.parameter.synchronous: no
    driver.version: 2.7.4
    driver.version.data: MGE HID 1.39
    driver.version.internal: 0.41
    input.transfer.high: 264
    input.transfer.low: 184
    outlet.1.desc: PowerShare Outlet 1
    outlet.1.id: 2
    outlet.1.status: on
    outlet.1.switchable: yes
    outlet.2.desc: PowerShare Outlet 2
    outlet.2.id: 3
    outlet.2.status: off
    outlet.2.switchable: yes
    outlet.desc: Main Outlet
    outlet.id: 1
    outlet.switchable: no
    output.frequency.nominal: 50
    output.voltage: 230.0
    output.voltage.nominal: 230
    ups.beeper.status: enabled
    ups.delay.shutdown: 20
    ups.delay.start: 30
    ups.firmware: 02
    ups.load: 30
    ups.mfr: EATON
    ups.model: Eaton 3S 550
    ups.power.nominal: 550
    ups.productid: ffff
    ups.serial: 000000000
    ups.status: OL
    ups.timer.shutdown: 0
    ups.timer.start: 0
    ups.vendorid: 0463
    

    Interesting parameters which are worth putting onto my i3status bar are probably battery.* and ups.load. Remember to perform a test! A good one is checking if this command works fine:

    # upsmon -c fsd
    

    And then a full “integration” test - exit all programs, then pull the plug from the wall and wait :-)

    Good luck!

    06/19 Elections for the Council, FESCo and Mindshare open for two weeks

    Posted by Charles-Antoine Couret on June 06, 2019 11:53 PM

    Since the Fedora Project is community-driven, part of the membership of the following bodies must be renewed: Council, FESCo and Mindshare. And it is the contributors who decide. Each candidate naturally has a platform and a track record they wish to put forward during their term to steer the Fedora Project in certain directions. I invite you to study the proposals of the various candidates.

    I voted

    To vote, you need an active FAS account, and you make your choice on the voting site. You have until 2 a.m. French time on Friday 21 June to do so, so don't wait too long.

    Also, as with the selection of the supplemental wallpapers, you can claim a badge by clicking a link in the interface after taking part in a vote.

    I will take this opportunity to summarise the role of each of these committees, both to clarify how decisions are made in the Fedora Project and to illustrate its community-driven nature.

    Council

    The Council is what you might call the project's high council. It is the highest decision-making body in Fedora. The Council defines the long-term goals of the Fedora Project and helps organise the project to achieve them. This happens notably through discussions that are open and transparent to the community.

    It also manages the financial side. This notably covers the budgets allocated to organise events, produce goodies, and fund initiatives that help meet those goals. Finally, it is responsible for resolving significant personal conflicts within the project, as well as legal matters around the Fedora trademark.

    The roles within the Council are complex.

    Members with full voting rights

    First of all there is the FPL (Fedora Project Leader), who leads the Council and is de facto the representative of the project. Their role involves keeping the Council's agenda and discussions on track, as well as representing the Fedora Project as a whole. They must also help to build consensus during debates. This role is held by a Red Hat employee, chosen with the consent of the Council.

    There is also the FCAIC (Fedora Community Action and Impact Coordinator), who acts as the link between the community and Red Hat to facilitate and encourage cooperation. As with the FPL, this position is held by a Red Hat employee with the approval of the Council.

    There are two seats devoted to technical representation and to the more marketing/ambassador side of the project. These two seats are filled by appointment from the bodies dedicated to those activities: FESCo and Mindshare. They are community seats, but only those committees decide who holds them.

    That leaves two fully open community seats, for which anyone can stand or vote. They represent the project's other areas of activity, such as translation and documentation, as well as the community voice in the broadest possible sense. It is for one of these seats that voting is open this week!

    Members with partial voting rights

    A diversity advisor is appointed by the FPL, with the support of the Council, to promote the inclusion in the project of groups that are most often discriminated against. Their goal is to define programmes to address this issue and to resolve any related conflicts that may arise.

    A Fedora program manager handles the schedule of the various Fedora releases. They make sure deadlines are met, and track features and test cycles. They also act as the Council's secretary. This role is held by a Red Hat employee, again with the approval of the Council.

    FESCo

    FESCo (Fedora Engineering Steering Committee) is a body made up entirely of elected members, devoted entirely to the technical side of the Fedora Project.

    In particular, it handles the following:

    • New features for the distribution;
    • Sponsors for the packager role (those who can then supervise a newcomer);
    • The creation and management of SIGs (Special Interest Groups) to organise teams around particular topics;
    • The packaging procedures for packages.

    Leadership of this group rotates. Its 9 members are elected for one year, with each election renewing half of the body. This time, 4 seats are up for election.

    Mindshare

    Mindshare is an evolution of, and the replacement for, FAmSCo (Fedora Ambassadors Steering Committee). It is the equivalent of FESCo for the more human side of the project. While FESCo is concerned mostly with packagers, this body focuses instead on ambassadors and new contributors.

    Here are some of the areas it is responsible for, inherited from FAmSCo:

    • Growing the number of ambassadors through mentoring;
    • Encouraging the creation and development of more local communities, such as the French community for example;
    • Tracking the events that ambassadors take part in;
    • Allocating resources to the various communities and activities, according to need and interest;
    • Handling conflicts between ambassadors.

    And its new responsibilities:

    • Communication between teams, particularly between engineering and marketing;
    • Motivating contributors to get involved in different working groups;
    • Welcoming and guiding new contributors, and trying to promote the inclusion of people who are often under-represented in Fedora (women, people who are not American or European, students, etc.);
    • Managing the marketing team.

    This new committee has 9 members: one lead, 2 from the ambassadors, one from design and web, one from documentation, one from marketing, one from CommOps, and the last two are elected. It is for one of these last seats that the vote is open.

    Council policy proposal: modify election eligibility

    Posted by Fedora Community Blog on June 06, 2019 08:33 PM

    Inspired by the request that we provide written guidance on time commitment expectations and some conversations from our meeting in December, I have submitted a pull request to implement a policy that anyone running for an elected Fedora Council seat may not run for other elected boards at the same time.

    The reasoning is that we have an unspoken (for now) expectation that being on the Council, particularly as an elected representative, will not be a trivial commitment. This is an easier check than trying to determine post-election which body a candidate would rather serve on (and thus having to deal with alternates, etc).

    Please discuss this on the council-discuss mailing list. Per the Council policy change policy, this will be submitted for a Council vote in two weeks.

    The post Council policy proposal: modify election eligibility appeared first on Fedora Community Blog.

    F30-20190605 Updated isos Released

    Posted by Ben Williams on June 06, 2019 01:51 PM

    The Fedora Respins SIG is pleased to announce the latest release of Updated F30-20190605 Live ISOs, carrying the 5.1.6-300 kernel.

    This set of updated ISOs will save a considerable amount of updates after install for new installs. (New installs of Workstation have 1.2GB of updates.)

    The updated ISOs now contain subscription-manager to resolve build issues (https://bugzilla.redhat.com/show_bug.cgi?id=1713109).

    A huge thank you goes out to irc nicks vdamewood, dowdle, ledeni, Southern-Gentlem for testing these ISOs.

    We would also like to thank Fedora QA for running the following tests on our ISOs:

    https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=30&build=FedoraRespin-30-updates/20190605.0&groupid=1


    As always, our ISOs can be found at http://tinyurl.com/Live-respins.

    Keeping software in NeuroFedora up to date

    Posted by The NeuroFedora Blog on June 06, 2019 10:55 AM

    Given the large number of software updates we published recently, we thought this is a good chance to explain how the NeuroFedora team (and the Fedora package maintainers team in general) stays on top of all of this software that is constantly being updated and improved.

    Simplified schematic of the package maintenance workflow in Fedora

    As the (simplified) figure shows, there is a well defined process to ensure that we keep our software in good shape---updating it all in a timely manner. The Fedora Infrastructure team maintains lots of tools to enable the community in this aspect too, and all of these tools continuously evolve as the community moves to newer directions.

    In the sections below, we go over the process step by step.

    1. Upstream releases a new version

    Screenshot: upstream releases a new version of indexed-gzip on Github

    It all starts with upstream releasing a new version.

    Upstream are the developers of the software. We at NeuroFedora (and most Linux distributions) are downstream: we take developed software, build and integrate it all, and provide it in easily installable packages to users.

    On Fedora, these packages are provided as rpm archives via the repositories. Other distributions may use other formats.

    2. Anitya notifies us maintainers via Bugzilla

    Screenshot: Anitya detects the new version.

    Anitya runs at https://release-monitoring.org and monitors upstream for new versions. Anitya is able to monitor different upstream release methods such as Github, PyPi, Sourceforge, Gitlab, and so on. When Anitya detects a new version it first checks to see what version of this software we are currently providing in Fedora. If it sees that the Fedora version is older than the new upstream version that it has detected, it files a bug on our community bug tracker at https://bugzilla.redhat.com notifying the maintainers.

    TIP: you can use the Fedora packages application at https://apps.fedoraproject.org/packages to search the full list of software that is currently included in Fedora. If you already know the name of the package, you can use https://bugz.fedoraproject.org/<package name> (replace <package name> with the name of the package, such as nest) to go straight to a package's summary page.

    3. Maintainers test and update the Fedora package

    Screenshot: Anitya files a bug notifying the maintainers

    Once the bug has been filed, the next steps require manual work. There are tools to make it all easier, but this is where we humans come in.

    One of the NeuroFedora package maintainers notices the bug and begins to work on it. All notifications from bugzilla are sent to the neuro-sig mailing list, so the team is usually always aware of these.

    First, we fetch the new source code and test it to see how it has changed in the new release. Is it a minor release with bug-fixes and enhancements, or is it a major release where lots of functionality has changed?

    Sometimes, especially with development libraries, there are API/ABI changes that make the new version incompatible with the older ones. In such cases, we have to see how all other software that depends on these is affected. This is documented in the community policy on updating software. The idea is that when new versions of software are released, as package maintainers, one of our duties is to ensure that the new versions do not break existing systems for our users.

    If there is software that does not work with the latest versions, we maintainers try to help developers update their code. Here are some examples where we've notified maintainers:

    When possible, we do try to provide patches and open pull requests. However, this depends on how much time we maintainers have and of course, it also depends on the complexity of the codebase.

    In general, we try to stay as close to upstream as possible. This page lists the advantages of doing so: https://fedoraproject.org/wiki/Staying_close_to_upstream_projects

    3a. Maintainers update the spec file and build new rpms

    Screenshot: A package maintainer updates the *spec* file.

    If everything seems to work fine after we've managed to fix any issues, we begin to update the Fedora package. The first step here is to update the spec file. The spec file resides in the package's dist-git repository on https://src.fedoraproject.org/rpms/ where other Fedora tools can access it. When this has been updated to build the newest release, we queue up a new build on the Koji build system.
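    In practice, this step is typically done with the fedpkg tool. A rough sketch of the sequence, using python-indexed_gzip as the example package:

    # clone the package's dist-git repository
    fedpkg clone python-indexed_gzip
    cd python-indexed_gzip
    # bump Version/Release in the spec file, then commit and push
    fedpkg commit -m "Update to latest upstream release"
    fedpkg push
    # queue a new build on the Koji build system
    fedpkg build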

    spec files provide instructions that tell the rpmbuild tool how to build the software, and where all its files should go in the rpm package. More information on the process can be found here. You can see a relatively simple spec file for python-indexed_gzip here. This one for the nest simulator, however, is a lot more complex since we must build it with MPI support also. If you want to see a real scary spec file, though, look at this one for the texlive package. It is auto generated from the texlive sources, but you can imagine how hard it must be to debug.
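    To give a flavour of the format, here is a heavily simplified, hypothetical spec file skeleton (the package and all of its details are invented for illustration):

    Name:           hello-neuro
    Version:        1.0.0
    Release:        1%{?dist}
    Summary:        Hypothetical example package
    License:        MIT
    URL:            https://example.org/hello-neuro
    Source0:        %{url}/%{name}-%{version}.tar.gz

    %description
    A hypothetical package used only to illustrate the spec file layout.

    %prep
    %autosetup

    %build
    %configure
    %make_build

    %install
    %make_install

    %files
    %license LICENSE
    %{_bindir}/hello-neuro

    %changelog
    * Thu Jun 06 2019 Example Maintainer <maintainer@example.org> - 1.0.0-1
    - Update to 1.0.0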

    Screenshot: A package maintainer queues up a new build.

    The build system handles all our supported arches unless told not to do so. Currently, the Fedora community supports: x86_64, i686, armv7hl, aarch64, ppc64le, s390x. More information on these architectures can be found here.

    4. Quality assurance: the community tests the updated packages

    Once the builds have completed successfully, we push the builds to our Quality Assurance pipeline for testing.

    Screenshot: The Bodhi QA system manages our updates and their testing.

    It isn't enough to get the software to build correctly. We must also ensure that it works correctly. The pipeline lets community members (including users) test these updated packages in a staging repository.

    You can help test these updates. All you need to do is install them, and provide feedback. This is all explained here.
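
    As a minimal sketch (the advisory ID below is made up), testing an update and leaving feedback from the command line can look like this:

    # install a specific update from the updates-testing repository
    $ sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2019-0123abcd45
    # after trying it out, leave karma with the Bodhi command line client
    $ bodhi updates comment FEDORA-2019-0123abcd45 "Works for me" --karma 1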

    Screenshot: An update on Bodhi that has been pushed to stable.

    If the packages pass the community's battery of automated checks, and testers provide positive feedback on their functioning, they are then moved from the staging repositories to the stable repositories where they are available to all users.

    These general steps are not limited to the NeuroFedora team. They are followed by all Fedora community package maintainers.

    5. The community uses/develops/extends/shares their work

    When all this is done and the packages are available to install via the different package management tools (dnf/Gnome-software/DNFDragora/Discover), other community members are able to use, develop, extend, share and do more with these tools.
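
    For example, installing the NeuroFedora packages mentioned above takes a single command (assuming these binary package names):

    $ sudo dnf install nest python3-indexed_gzip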

    I don't like to use the word "users" because, unlike, say, Nvidia, which is a "vendor" providing software to its "users", Fedora is a community where a large portion of the people using our software also help maintain and develop it. Everyone who uses Fedora is already a community member, for they help us achieve our goal---to spread Free/Open source software and awareness about it. This is perhaps worth a whole post in itself, though.

    Join the community!

    Since community projects aren't profit-focused, the resource we most need is not money but people's time. Each new contributor increases the total effort the community can muster, which enables us to work with more software, help more developers, and improve all of this software in general---get more work done.

    If you've gotten this far, you have probably realised that there is quite a bit of work to be done, and that there are ample opportunities to help in this area of NeuroFedora/Fedora. You can:

    • suggest software for inclusion
    • package and maintain software
    • test packaged software
    • file bugs and report issues
    • help maintain the tools that are required to keep the community ticking

    More information on all of this is included in our documentation here. The easiest way, though, is probably just to have a chat with the team. Catch us anytime on our communication channels.


    NeuroFedora is a volunteer-driven initiative and contributions in any form are always welcome. You can get in touch with us here. We are happy to help you learn the skills needed to contribute to the project. In fact, that is one of the major goals of the initiative---to spread the technical knowledge that is necessary to develop software for neuroscience.

    Bodhi is getting ready for rawhide gating

    Posted by Fedora Community Blog on June 06, 2019 06:58 AM

    For a couple of months, part of the Community Platform Engineering (CPE) team has been focusing on making the changes needed in Bodhi to enable the rawhide gating change proposal.

    Automation

    In order to keep the user experience that packagers currently have with rawhide, we need to automate some parts of the update life cycle.

    Creation of an update

    Bodhi will be able to automatically create an update from a rawhide build. This is done by a fedora-messaging consumer listening to Koji build tag events. Every time a build is tagged with the rawhide candidate tag (for example, f31-updates-candidate), a Bodhi update will be created.

    If you are interested in this feature implementation, you can check the GitHub Pull-Request.

    Pushing to stable

    Bodhi is gaining the ability to automatically push an update to stable based on the time the update has spent in testing. This feature is similar to the current automatic push to stable based on a karma threshold.

    This will allow rawhide to automatically push updates to stable based on test results. Rawhide will be configured with a required-in-testing threshold of 0 days, meaning that if an update's test results pass, Bodhi will mark the update to be pushed to the stable repository.

    This is where the gating happens. For more details on the implementation, you can check the GitHub Pull-Request.

    Test results improvements

    We are also working on improving the experience around test results and waiving tests when needed. Bodhi queries the Greenwave service to get the test results related to an update.

    To have more up-to-date information in Bodhi, we are adding a fedora-messaging consumer that will listen to Greenwave messages and refresh the test results of an update (see Pull-Request).
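
    You can also watch these messages yourself. A minimal sketch with the fedora-messaging command line tool, assuming the broker configuration shipped with the package and the usual Greenwave topic name:

    # listen to Greenwave decision messages on the public broker
    $ fedora-messaging consume --routing-key "org.fedoraproject.prod.greenwave.decision.update"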

    What’s next

    While the above changes should make it into the next version of Bodhi (version 4.1), we have started to work on enabling rawhide gating for updates that contain more than a single build.

    We are currently working on giving Bodhi an asynchronous task system (Celery) to improve Bodhi's performance.

    We have also started to work on adding the possibility to create Bodhi updates using a Koji side tag (see Pull-Request).

    Following the work

    If you want to follow the work happening in Bodhi related to rawhide gating, you can look at our GitHub board. If you want to get involved and help us make or test these improvements, feel free to contact us in the #fedora-apps or #bodhi IRC channels (freenode network).


    Photo by Lewis Ngugi on Unsplash

    The post Bodhi is getting ready for rawhide gating appeared first on Fedora Community Blog.

    Fedora 30 Elections Voting Now Open

    Posted by Fedora Community Blog on June 06, 2019 12:01 AM

    Voting in the Fedora 30 elections is now open. Go to the Elections app to cast your vote. Voting closes at 23:59 UTC on Thursday 20 June. Don’t forget to claim your “I Voted” badge when you cast your ballot. Links to candidate interviews are below.

    If you cannot vote because you only have the “CLA done” group in FAS, please file an issue ASAP and I will take care of that for you.

    Fedora Council

    There is one seat open on the Fedora Council.

    Fedora Engineering Steering Committee (FESCo)

    There are four seats open on FESCo.

    Mindshare Committee

    There is one seat open on the Mindshare Committee.

    The post Fedora 30 Elections Voting Now Open appeared first on Fedora Community Blog.

    Copr's Dist-Git

    Posted by Copr on June 06, 2019 12:00 AM

    Fedora uses dist-git as the ultimate source for SPEC files.

    In Copr, we use dist-git to store sources as well. However, our use case is different. In the past, Copr only allowed building from a URL. You provided a URL to your SRC.RPM, and Copr downloaded and built it. This was a problem when the user wanted to resubmit the build: the original URL very often did not exist anymore. Therefore we came up with the idea to store the SRC.RPM somewhere, and dist-git was the obvious first choice.

    There is a big difference between Fedora’s dist-git and Copr’s dist-git, though. In Fedora, only the owner interacts with dist-git: they maintain the branches, address pull requests, resolve conflicts, etc. In Copr, it is different. The biggest user of Copr’s dist-git is Copr itself. Copr uploads new SRC.RPMs - either those available from a URL, uploaded via the API, or created from an upstream git repository. While you can clone the dist-git or browse it via cgit, you are not allowed to write to it. It is internal storage for Copr.
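
    To make the difference concrete, here is a sketch (owner, project, and package names are placeholders, and the clone URL pattern is an assumption):

    # submit a build from a URL; Copr imports the SRC.RPM into its dist-git
    $ copr-cli build myproject https://example.org/foo-1.0-1.src.rpm
    # the resulting dist-git repository can be cloned read-only, but not pushed to
    $ git clone https://copr-dist-git.fedorainfracloud.org/git/owner/myproject/foo.git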

    This has one implication: if you want to send a maintainer a patch for their SPEC file, please do not use the SPEC file from Copr’s dist-git. That SPEC file is not the original source for the maintainer of the project; it can be created on the fly, e.g. using pyp2rpm or gem2spec. You should always contact the owner of the project and ask what their preferred way to contribute is.

    Council Election: Interview with Till Maas (till)

    Posted by Fedora Community Blog on June 05, 2019 11:57 PM

    This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 6 June and closes promptly at 23:59:59 UTC on Thursday, 20 June 2019.

    Interview with Till Maas

    • Fedora Account: till
    • IRC: tyll (found in #fedora-devel, #fedora-de, #fedora-meeting-1)
    • Fedora User Wiki Page

    Questions

    Why are you running for the Fedora Council?

    The Fedora Council needs experienced community members to move the project in the right direction. I have been part of the Fedora community since 2005, have participated in various areas, and have served on the Council for the last year.

    Why should people vote for you?

    I take my job as a community representative seriously and will raise concerns or push things forward as necessary.

    What do you want to accomplish as a member of the Fedora Council?

    My goal is to make Fedora a community that is fun to contribute to. For me this means that it is easy to drive innovation, and that we welcome everyone and support newcomers and existing contributors as much as possible.

    The post Council Election: Interview with Till Maas (till) appeared first on Fedora Community Blog.

    Mindshare Election: Interview with Sumantro Mukherjee (sumantrom)

    Posted by Fedora Community Blog on June 05, 2019 11:55 PM

    This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 6 June and closes promptly at 23:59:59 UTC on Thursday, 20 June 2019.

    Interview with Sumantro Mukherjee

    • Fedora Account: sumantrom
    • IRC: sumantro (found in #fedora-qa #fedora-test-day #fedora-classroom #fedora-india #fedora-meeting #fedora-modularity #fedora-kernel #fedora-mindshare)
    • Fedora User Wiki Page

    Questions

    Is there a specific task or issue you think that Mindshare should address this term?

    As Mindshare is a core group of volunteers working to expand and evolve Fedora outreach in accordance with the four pillars of Fedora, I believe this can be achieved with an increase in Fedora’s calendar of events. Events, in general, require that contributors be able to request resources quickly. As a part of the current Mindshare, I have helped create and implement processes which allow event organizers and speakers to easily request resources like budget and swag.

    However, while the process is in place and the number of event requests has increased, we still need to finalize a standard template for reporting back to Mindshare with an event report, and to handle some corner cases like that.

    Another important aspect I have worked on was helping Fedora participate in a program which targeted high school students. Last year, Fedora participated in Google Code-In and we got great participation from high school students; one of our winners was interviewed, and you can read about his experience on the Community Blog.

    Please elaborate on the personal “Why” which motivates you to be a candidate for Mindshare.

    I hail from APAC. It’s vast and diverse. I was a part of Mindshare and FAmSCo, and I was thrilled to work with Fedora Ambassadors around APAC and around the world. The biggest motivator for me is to see more newcomers feel welcome and participate in the project. Working with people across the world teaches me a lot and helps me understand community dynamics better. As a part of the Fedora QA team, we often have onboarding calls, and I feel it’s crucial for the success of the project to evolve the outreach program; that’s why I would like to join Mindshare, join hands with the subproject teams, and expand the outreach activities. One very specific goal will be to participate in Google Season of Docs; we failed to make the cut this year, but with all we have learned, I am hopeful we will next time.

    What are your thoughts on the impact (as an individual and then as a Mindshare group) that the group will have on the Fedora Mission?

    As an individual, I will help shape events and outreach around IoT, Modularity, Silverblue and Fedora CoreOS. Being a Fedora QA team member, I usually lay my hands on everything that comes in as a new changeset and host test days around it. I would love to bring that experience to outreach activities, help write boilerplates, and run pilot activities to grow our contributor and user base and increase Fedora’s footprint in the IoT landscape. Since Fedora Media Writer offers out-of-the-box support for ARM and Silverblue, it will be nice to put these mostly ready pieces to use.

    As a part of Mindshare, I would like to work with Fedora advocates, ambassadors, contributors and users to shape an outreach experience that helps newcomers make the most of their skillsets and contribute to the project.

    The post Mindshare Election: Interview with Sumantro Mukherjee (sumantrom) appeared first on Fedora Community Blog.

    Mindshare Election: Interview with Luis Bazan (lbazan)

    Posted by Fedora Community Blog on June 05, 2019 11:55 PM

    This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 6 June and closes promptly at 23:59:59 UTC on Thursday, 20 June 2019.

    Interview with Luis Bazan

    • Fedora Account: lbazan
    • IRC: LoKoMurdoK (found in #fedora-latam #fedora #fedora-neuro #fedora-es #fedora-pa (others))
    • Fedora User Wiki Page

    Questions

    Is there a specific task or issue you think that Mindshare should address this term?

    • First of all, work on future activities for the Fedora LATAM region.
    • In LATAM, activities have decreased and contributors have lost motivation; we need new contributors.

    Please elaborate on the personal “Why” which motivates you to be a candidate for Mindshare.

    • My motivation is the team and the community: working as a team!
    • I have worked closely with the ambassadors and I like listening to their ideas.
    • I’m very excited about this opportunity because I feel that my skills and knowledge are well suited to being a candidate for Mindshare.
    • We could motivate and help more people in and out of the Fedora community to contribute as a team.

    What are your thoughts on the impact (as an individual and then as a Mindshare group) that the group will have on the Fedora Mission?

    • As an individual, I think I can share my ideas, experience and knowledge to help the council make good decisions on the different aspects of the community.
    • As a Mindshare group, we must encourage community members to share their ideas with the Mindshare group and/or their ambassadors.

    The post Mindshare Election: Interview with Luis Bazan (lbazan) appeared first on Fedora Community Blog.

    FESCo Election: Interview with Igor Gnatenko (ignatenkobrain)

    Posted by Fedora Community Blog on June 05, 2019 11:50 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 6 June and closes promptly at 23:59:59 UTC on Thursday, 20 June 2019.

    Interview with Igor Gnatenko

    • Fedora Account: ignatenkobrain
    • IRC: ignatenkobrain (found in #fedora-devel #fedora-rust #fedora-admin #fedora-releng #fedora-python #rpm.org #rpm-ecosystem #fedora-modularity)
    • Fedora User Wiki Page

    Questions

    Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

    Packagers do not scale. We do a lot of work to solve technical problems (Modularity, Fedora CI, Flatpaks), but we forget where all of the content these technologies enable comes from. We don’t have the tooling and processes to ease the life of package maintainers, helping them maintain their packages and deal with all these new technologies.

    I have been:

    As a FESCo member, I would ensure that the changes and decisions the committee makes do not regress our packagers’ experience, and I would bring packagers’ important problems and proposals to the conversation.

    What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

    There is an objective draft, “Packager Experience,” in the wiki and I think it is a very important one. I am planning to revive it and push it forward myself.

    In order to keep Fedora attractive for developers (Fedora developers, I mean) and bring more of them in, we need to simplify their lives with more tooling and automation. Without developers we can’t do much of the innovation that is important for our users.

    What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

    Packaging needs a lot of improvement, starting from the Package Review process all the way through putting those packages into modules/flatpaks/containers and testing them in CI. Most things there can and should be automated. I spent quite a lot of time at the openSUSE Conference talking to people who do Ruby/Perl packaging and work on the Open Build Service about how they automate things. You can read my report from that conference.

    Another thing: we need an easy way to do a proof of concept (PoC) of changes outside of “production” infrastructure. Whether that is a staging environment which allows people to test freely or an entirely separate infrastructure is just an implementation detail. For some changes we must ask owners to prepare a PoC in this infrastructure, run tests, ask a broader audience to try it out (if applicable), and only after that approve such changes. It might feel like we are slowing down initiatives or just nitpicking, but it is a very important step to find breakages at an early stage and to see how such changes will affect other people in our community.

    The post FESCo Election: Interview with Igor Gnatenko (ignatenkobrain) appeared first on Fedora Community Blog.

    FESCo Election: Interview with Aleksandra Fedorova (bookwar)

    Posted by Fedora Community Blog on June 05, 2019 11:50 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 6 June and closes promptly at 23:59:59 UTC on Thursday, 20 June 2019.

    Interview with Aleksandra Fedorova

    Questions

    Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

    This is my second run for FESCo, thus my answers do not change much from the last interview.

    We are moving towards more flexible and varied ways of delivering the software. And this flexibility is going to become an issue on its own.

    On the technical side, an extended list of deliverables requires an extended testing effort, which is hard to achieve via semi-manual workflows. As a CI engineer and member of the Fedora CI Working Group, I want to make sure that the CI effort is aligned with current engineering goals, and that while we work on improving the CI infrastructure and user experience, we also find ways to incorporate CI into the development process.

    From the process and policy point of view, as a former DevOps Engineer, I have worked on the “other” side – in cloudy environments flexible to the point of becoming chaotic. And while I believe that there are things Fedora needs to catch up with in terms of modern development practices, there are a lot of things “modern practices” need to catch up with in terms of processes and workflows, which are widely known and established in the Linux distributions world. The idea of maintainership itself, for example.

    I hope that bringing this perspective to FESCo would help us find the right balance between providing the flexible and powerful tooling and keeping the solid foundation for it to be usable.

    What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

    Same as the last interview, I would say that one of the big goals of FESCo and the Fedora community in general is to prevent Fedora from becoming stable. In this case by stable I mean “frozen”, not “free of bugs”.

    We need it to be easy to change things, even most core features. We need certain loose ends hanging for everyone to come and take care of.

    One of the objectives to achieve that big goal is to provide people with the toolchain and services to make their own unique flavors of Fedora, or on top of Fedora. Custom repositories, modules, images, flatpaks… you name it. And we need it to be self-service, available for any contributor to use. We should provide pieces for the community to play with, build, and create.
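
    Some of this self-service already exists today. A minimal sketch with Copr, assuming hypothetical project and package names:

    # create a personal repository and build a package into it
    $ copr-cli create my-flavor --chroot fedora-30-x86_64 --chroot fedora-rawhide-x86_64
    $ copr-cli build my-flavor my-package-1.0-1.src.rpm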

    What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

    Unlike the last interview, I now think that the main focus should go into the discoverability of all things Fedora. Lacking a proper package database, and with more tools coming, we make it hard for users to actually find out what is available, what needs work, and how they could contribute.

    We have built the end-user Fedora experience, with decent tools to install apps on a target machine, but we need to focus on the contributor experience, with tooling to manage package lifecycles and package appearances in various bundles: main repo, modules, flatpaks, containers…

    With CI and gating included in the lifecycle, of course 🙂


    The post FESCo Election: Interview with Aleksandra Fedorova (bookwar) appeared first on Fedora Community Blog.

    FESCo Election: Interview with Petr Šabata (psabata)

    Posted by Fedora Community Blog on June 05, 2019 11:50 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 6 June and closes promptly at 23:59:59 UTC on Thursday, 20 June 2019.

    Interview with Petr Šabata

    Questions

    Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

    Fedora’s been accepting major and often breaking changes at a very fast pace in the most recent releases. This is not a bad thing per se as it means we’re constantly moving forward. However, I feel many of these are still not very well understood, documented, or socialized. The community as a whole needs more time to adapt and learn how to work in the new environment and with the new tools. Improving contributor experience with the distribution should be our main goal. Since I’ve been directly or indirectly involved in many of these, I could provide the necessary guidance to help rectify the situation.

    What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

    I would like to see Fedora as the main and the default development and testing platform not only for our projects but for upstreams as well. The new objective focused on improving our images and tailoring them for specific workflows could be the thing we need to get us there. The best thing is we already have the technology and policies to make us successful — we just need to be smart and use & enforce them wisely, focusing on the weakest links first.

    What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

    At present this would be modularity. The tooling is still lacking in many areas, the processes underdefined, the concept still not widely understood or accepted. Yet, modularity is part of Fedora (and it’s a good thing!) and the remaining issues must be resolved. FESCo’s been already looking into many of those problems, be it content discovery, lifecycles, making modular content available in the standard buildroot, or enabling EPEL 8. The work needs to continue so that the objective can be finally closed within the next release or two.

    The post FESCo Election: Interview with Petr Šabata (psabata) appeared first on Fedora Community Blog.

    FESCo Election: Interview with Fabio Valentini (decathorpe)

    Posted by Fedora Community Blog on June 05, 2019 11:50 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 6 June and closes promptly at 23:59:59 UTC on Thursday, 20 June 2019.

    Interview with Fabio Valentini

    • Fedora Account: decathorpe
    • IRC: decathorpe (found in #fedora-stewardship #fedora-meeting*)
    • Fedora User Wiki Page

    Questions

    Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

    Assuming the past few months provide any insights for the future, I think there are some big issues the Fedora community will need to tackle soon. The biggest one is probably the retirement of almost all packages depending on python2. The golang stack is also in the process of completely modernizing its packaging, with packaging guidelines for Go and new tooling finally being in place. Another one is the increasing modularization of parts of Fedora, despite the fact that not all of the necessary tooling and policies are in place yet.

    As part of the work of phasing out python2, I worked on a few packages myself, and have contributed to the details of the current process as a member of the Packaging Committee. I am also a member of the Go SIG, where I help with modernizing the almost 800 Go packages in Fedora. Concerning the issues around the modularization effort, I took a more active role (and founded the Stewardship SIG).

    What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

    For example, I think Fedora Silverblue is a really exciting project, especially for long-term deployments of Fedora (with “fearless” upgrades hopefully completely obsoleting the need to reinstall). Additionally, the possibility of finally running sandboxed desktop / user-space applications with flatpak (or other container technologies) might be a big win for Linux desktop security, usability, and ease of application portability / distribution.

    What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

    When working on the three issues I mentioned above (python2 retirement, golang packaging modernization, Stewardship SIG), some recurring patterns emerged.

    For example, there is little to no tooling support (or official data) for some regular tasks, like querying a list of binary packages built from a given source package, or determining the complete dependency graph of a package (requires and required-by directions) correctly.
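
    Parts of this can be approximated with dnf repoquery today, though only one level at a time; a sketch with hypothetical package names:

    # the "requires" direction, resolved to the packages that provide them
    $ dnf repoquery --requires --resolve nest
    # the "required-by" direction (direct reverse dependencies)
    $ dnf repoquery --whatrequires nest
    # map a binary package back to the source package it was built from
    $ dnf repoquery --qf '%{name} <- %{sourcerpm}' python3-nest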

    But the worst issue probably was the abundance of effectively unmaintained packages and/or unresponsive maintainers, especially within the Java stack. However, following the “Unresponsive Maintainer” process for all the affected packages would be a tedious, protracted chore for little benefit – getting the packages orphaned. These packages often haven’t been touched in years and are often severely outdated or broken, but continue to be built and included in Fedora.

    I think the policies for retirement of long-term orphaned, FTBFS (fails to build from source), and FTI (fails to install) packages need to be enforced (more strictly) – even if that means that Fedora might lose hundreds of unmaintained and/or broken packages. Still, reducing the package set might be our only hope to maintain our high standards regarding package quality, and the health of Fedora as a distribution and ecosystem overall.

    The post FESCo Election: Interview with Fabio Valentini (decathorpe) appeared first on Fedora Community Blog.

    FESCo Election: Interview with Stephen Gallagher (sgallagh)

    Posted by Fedora Community Blog on June 05, 2019 11:50 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 6 June and closes promptly at 23:59:59 UTC on Thursday, 20 June 2019.

    Interview with Stephen Gallagher

    • Fedora Account: sgallagh
    • IRC: sgallagh (found in #fedora-devel #fedora-modularity #fedora-server)
    • Fedora User Wiki Page

    Questions

    Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

    The biggest issue I see in the immediate future is the need for improving the packager experience for Modularity. It’s a powerful tool for enabling user choice, but it needs some work to get it into shape to make it easier on packagers to generate modules.

    The other issue I see is also Modularity-related: what is Fedora’s position in the ecosystem relative to CentOS once CentOS 8 is released with module support? We need to ensure that Fedora doesn’t lose its position as the development upstream for Red Hat Enterprise Linux and CentOS.

    What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

    FESCo needs to focus on knocking down barriers to entry in Fedora. The harder it is, the more likely that a willing contributor will turn around and abandon their attempt.

    What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

    As I noted above, I think the biggest trouble point right now is the Modularity packaging experience. FESCo can help by coordinating the deployment of new tools and helping communicate how to use them to our developer community.

    The post FESCo Election: Interview with Stephen Gallagher (sgallagh) appeared first on Fedora Community Blog.

    FESCo Election: Interview with Jeremy Cline (jcline)

    Posted by Fedora Community Blog on June 05, 2019 11:50 PM

    This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 6 June and closes promptly at 23:59:59 UTC on Thursday, 20 June 2019.

    Interview with Jeremy Cline

    • Fedora Account: jcline
    • IRC: jcline (found in #fedora-admin, #fedora-apps, #fedora-kernel, #fedora-devel)
    • Fedora User Wiki Page

    Questions

    Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

    I think the largest technical issue the Fedora community currently faces is testing. The continuous integration effort was a good start, but there are many challenges that remain unsolved at this juncture. These include (but are not limited to) easy-to-produce-on-your-development-machine artifacts so testing locally is the same as testing in CI, the fast production of those artifacts so CI can report problems in a timely manner, and a great user experience around the testing infrastructure and test results.

    What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

    Fedora needs to ensure it’s the easiest distribution to get involved in. All distributions have problems and there’s a limited number of people interested enough to try to fix those problems. We need to ensure we create as few barriers as we possibly can every step of the way.

    The work around CI/CD, reducing compose time, and gating Rawhide on automated tests are all steps in the right direction. It’s critically important that we focus on the user experience when introducing these new tools and that we not regress in terms of ease of use.

    What are the areas of the distribution and our processes that, in your opinion, need improvement the most? Do you have any ideas how FESCo would be able to help in those “trouble spots”?

    More than anything, I’d love to see the CI/CD experience become smooth enough and offer enough obvious benefits that it’s more common to find packages with a solid set of tests than a package with no tests. Unfortunately I don’t think that’s the case yet.

    I do think FESCo needs to be more cautious about allowing changes that negatively impact the Fedora contributor’s user experience, although I do think the choices that have been made recently were reasonable at the time, and there wasn’t a great way to know how much trouble they would cause. This is, of course, easy to say with the gift of hindsight.


    The post FESCo Election: Interview with Jeremy Cline (jcline) appeared first on Fedora Community Blog.

    Flock Talk & Session Proposal Reminder

    Posted by Fedora Community Blog on June 05, 2019 04:03 PM

    It’s hard for me to believe, but it’s been more than five years since we launched the “Fedora.next” initiative. At the end of Fedora’s first decade, we knew it would be important to think, plan, and adjust so the project could continue successfully in the decades to come. Now we’re halfway into the next one, and this Flock conference will be an important time for reflecting on our progress and charting our path for the next five years and beyond.

    Because Flock is focused specifically on our contributors and developers, this is a unique conference, and we’re looking for talks and sessions that reflect that.

    We want to see your talk proposals on any topic relevant to Fedora contributors working to shape Fedora. What are you working on and how will you help shape the next five years of Fedora’s future?

    Modularity, Silverblue, IoT, and package gating are among our big efforts right now, but there are a lot of other important things happening in the community. Flock is the opportunity to share what you’re working on and get important feedback from other contributors.  Flock is also an opportunity to participate in hacks and meetings to move projects forward in a high-bandwidth face-to-face manner.

    Even if you don’t have any talk ideas, I hope you can join us in Budapest. If funding is an issue, we have funds available to help with travel expenses. With such a large and distributed community, it’s hard for us to get the crucial face-to-face interactions that build relationships and enable deep conversation. Let’s use this time to build connections between our various teams and to share and socialize big ideas that make Fedora better for everyone.

    The “call for papers” remains open until July 1, with talks being announced in rounds.  It is better to submit earlier while we still have the schedule empty.  Submit your proposals via a Pagure issue in the Flock Repository.  Encourage others to comment on your issue and submit feedback to help you refine it.

    The post Flock Talk & Session Proposal Reminder appeared first on Fedora Community Blog.

    syslog-ng with Elastic Stack 7

    Posted by Peter Czanik on June 05, 2019 11:20 AM

    For many years, anything I wrote about syslog-ng and Elasticsearch was valid for all available versions. Well, not anymore. With version 7 of Elasticsearch, there are some breaking changes, mostly related to the fact that Elastic is phasing out type support. This affects mapping (as the _default_ keyword is no longer used) and the syslog-ng configuration as well (even though type() is a mandatory parameter, you should leave it empty).

    This blog post is a rewrite of one of my earlier blog posts (about creating a heat map using syslog-ng + Elasticsearch + Kibana), focusing on the changes and the new elasticsearch-http() destination:

    https://www.syslog-ng.com/community/b/blog/posts/creating-heat-maps-using-new-syslog-ng-geoip2-parser

    Before you begin

    First of all, you need some iptables log messages. In my case, I used logs from my Turris Omnia router. If you do not have iptables logs at hand, there are many sample logs available on the Internet. For example, you can use Bundle 2 from Anton Chuvakin (http://log-sharing.dreamhosters.com/) and use loggen to feed it to syslog-ng.
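
    A minimal loggen invocation might look like the following, assuming a saved sample file and the TCP listener on port 514 from the configuration below:

    # replay a saved log file to syslog-ng over TCP at 100 messages per second
    $ loggen --inet --stream --read-file bundle2.log --dont-parse --rate 100 localhost 514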

    Then, you also need a recent version of syslog-ng. The elasticsearch-http() destination was introduced in syslog-ng (OSE) version 3.21.1 (and PE version 7.0.14).

    Last but not least, you will also need Elasticsearch and Kibana installed. I used version 7.1.1 of the Elastic Stack, but any version after 7.0.0 should be fine. Using 7.1.0 or later has the added benefit of having basic security built in for free: neither payment nor installing extensions are necessary.

    Mapping

    Mapping in Elasticsearch defines how a document and the fields it contains are stored and indexed. You do not have to configure mapping by hand for generic syslog data, but you need it for storing geolocations, for example. Starting with Elasticsearch version 7, the use of the _default_ keyword is not supported, so when you try to apply the mapping example from my previous blog post, it will fail with an error message. To avoid this, apply the following to the index called syslog-ng before starting to send log messages:

    {
       "mappings" : {
             "properties" : {
                "geoip2" : {
                   "properties" : {
                      "location2" : {
                         "type" : "geo_point"
                      }
                   }
                }
             }
       }
    }
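
    Assuming you saved the JSON above as mapping.json, you can apply it while creating the index, for example with curl:

    $ curl -XPUT 'http://localhost:9200/syslog-ng' \
        -H 'Content-Type: application/json' -d @mapping.json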

    Configuring syslog-ng

    The configuration below is slightly different from the one in my original blog post. Here I would like to emphasize only the differences:

    source s_tcp {
      tcp(ip("0.0.0.0") port("514"));
    };
     
    parser p_kv {kv-parser(prefix("kv.")); };
     
    parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };
     
    rewrite r_geoip2 {
        set(
            "${geoip2.location.latitude},${geoip2.location.longitude}",
            value( "geoip2.location2" ),
            condition(not "${geoip2.location.latitude}" == "")
        );
    };
     
    destination d_elasticsearch_http {
        elasticsearch-http(
            index("syslog-ng")
            type("")
            url("http://localhost:9200/_bulk")
            template("$(format-json --scope rfc5424 --scope dot-nv-pairs
            --rekey .* --shift 1 --scope nv-pairs
            --exclude DATE --key ISODATE @timestamp=${ISODATE})")
        );
    };
    
    
    log {
        source(s_sys);
        source(s_tcp);
        if (match("s_tcp" value("SOURCE"))) {
            parser(p_kv);
            parser(p_geoip2);
            rewrite(r_geoip2);
        };
        destination(d_elasticsearch_http);
        flags(flow-control);
    };
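
    Before restarting the service, it is worth validating the configuration; syslog-ng can check the syntax without starting up:

    $ syslog-ng --syntax-only && sudo systemctl restart syslog-ng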

    The major differences are:

    • As the log statement also includes the local logs, I added an extra filter. This way only iptables logs arriving through the network source are parsed and enriched with geolocation information.

    • Instead of the Java-based Elasticsearch destination, we use the new elasticsearch-http() destination (based on the http() destination written in C).

    • Normally, type() is a mandatory parameter, but for Elasticsearch 7 you can leave it empty. You can do that using quotation marks: type("").

    • As syslog-ng parses many logs by default (Application Adapters), dot-nv-pairs are added to the scope. Shifting is needed as Elasticsearch does not like fields starting with an underscore (syslog-ng creates name-value pairs starting with a dot, which are turned into underscores when transformed to JSON).

    Note: on the one hand, we have received much positive feedback on the new elasticsearch-http() destination. As it does not need Java any more, it is easier to install and configure. On the other hand, we also have reports of increased network traffic (your mileage may vary).

    You can read more about the changes on the Elasticsearch side (including a roadmap of changes) here: https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html

    If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/balabit/syslog-ng. On Twitter, I am available as @PCzanik.