Fedora People

My talk at PyCon US 2019

Posted by Kushal Das on May 18, 2019 03:04 AM

A couple of weeks back, I gave a talk at PyCon US 2019, “Building reproducible Python applications for secured environments”.

The main idea behind the talk is the different kinds of threats to an application whose dependencies (with regular updates) come from various upstream projects, and the need for the final deployable artifact (a Debian package in this case) to be auditable and reproducible.

Before my talk on Saturday, I went through the whole idea and different steps we are following, with many of the PyPA (Python Packaging Authority) and other security leads in various organizations.

You can view the talk on Youtube. Feel free to give any feedback over email or Twitter.

FPgM report: 2019-20

Posted by Fedora Community Blog on May 17, 2019 08:10 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week. Elections nominations are open through May 22.

I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

Announcements

Help wanted

Upcoming meetings

Fedora 31 Status

Changes

Announced

Submitted to FESCo

Approved by FESCo

Rejected by FESCo

Fedora 32 Status

Changes

Approved by FESCo

The post FPgM report: 2019-20 appeared first on Fedora Community Blog.

PHP version 7.2.19RC1 and 7.3.6RC1

Posted by Remi Collet on May 17, 2019 05:10 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPM of PHP version 7.3.6RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30 or remi-php73-test repository for Fedora 27-29 and Enterprise Linux.

RPM of PHP version 7.2.19RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 28-29 or remi-php72-test repository for Fedora 27 and Enterprise Linux.

 

PHP version 7.1 is now in security-only mode, so no more RCs will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.3 as Software Collection:

yum --enablerepo=remi-test install php73

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Update of system version 7.3:

yum --enablerepo=remi-php73,remi-php73-test update php\*

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*
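
To verify which version is active after an update, you can check with the php binary; for a parallel SCL installation the package usually also provides a versioned wrapper command (php73 below is an assumption based on the SCL name):

php -v
php73 -v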

Notice: version 7.3.5RC1 in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.6.

The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Software Collections (php72, php73)

Base packages (php)

Alert: translation platforms in danger!

Posted by Jean-Baptiste Holcroft on May 17, 2019 12:00 AM
In the free software world, many people speak English; it is the number one language, the one that lets us cross borders to meet one another. Except that this language is a barrier for the majority. The elite master it, the others endure it, either directly through content and tools, or indirectly through the spread of words borrowed from English with no attempt at a simple explanation free of complex terms. This form of expression is also a barrier to understanding and to the spread of knowledge.

Use custom domain name with Blog-O-Matic

Posted by Pablo Iranzo Gómez on May 16, 2019 08:29 PM

As a recipe: if you want to enable a custom domain name on Blog-O-Matic, a special file needs to be created on the ‘master’ branch served by ‘GitHub Pages’.

In order to do so, edit pelicanconf.py and apply the following diff:

diff --git a/pelicanconf.py b/pelicanconf.py
index 680abcb..fc3dd8f 100644
--- a/pelicanconf.py
+++ b/pelicanconf.py
@@ -46,13 +46,16 @@ AMAZON_ONELINK = "b63a2115-85f7-43a9-b169-5f4c8c275655"


 # Extra files customization
-EXTRA_PATH_METADATA = {}
+EXTRA_PATH_METADATA = {
+    'extra/CNAME': {'path': 'CNAME'},
+}
+

 EXTRA_TEMPLATES_PATHS = [
     "plugins/revealmd/templates",  # eg: "plugins/revealmd/templates"
 ]

-STATIC_PATHS = [ 'images' ]
+STATIC_PATHS = [ 'images' , 'extra']


 ## ONLY TOUCH IF YOU KNOW WHAT YOU'RE DOING!

This will copy the CNAME file created in content/extra/CNAME to the resulting ‘master’ branch as /CNAME.
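
The content/extra/CNAME file itself is just a single line containing the bare domain; using the same placeholder domain as in the DNS example below, it would look like:

yourcustomdomain.es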

This file is interpreted by the GitHub Pages server as the domain name to listen for, so your website will become available from it (assuming you followed the usual requirements, such as pointing your DNS records at GitHub Pages):

yourcustomdomain.es.    1   IN  A   185.199.108.153
yourcustomdomain.es.    1   IN  A   185.199.109.153
yourcustomdomain.es.    1   IN  A   185.199.110.153
yourcustomdomain.es.    1   IN  A   185.199.111.153

Please do verify that the GitHub Pages serving addresses are still current before relying on the records above!
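
Once the DNS change has propagated, a quick sanity check from the command line (dig ships in the bind-utils package on Fedora) should return the GitHub Pages addresses listed above:

dig +short yourcustomdomain.es A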

Using nmcli to set nameservers

Posted by Adam Young on May 16, 2019 07:20 PM

Using a custom nameserver often requires disabling the DHCP-based resolv.conf modifications. Here is what I got to work.

I needed to modify the connection associated with the Ethernet device. In my case, this was called “System eth0”. The two values I needed to change were: ipv4.ignore-auto-dns (from no to yes) and the ipv4.dns value, setting the desired DNS servers.

Here were the commands.

nmcli conn modify "System eth0" ipv4.ignore-auto-dns yes
nmcli conn modify "System eth0" ipv4.dns  "192.168.24.7 8.8.8.8"
systemctl restart NetworkManager

Below that are the commands I used to check the pre-and-post states.

[root@idm ~] nmcli conn show "System eth0" | grep dns:
connection.mdns:                        -1 (default)
ipv4.dns:                               128.31.27.57,8.8.8.8
ipv4.ignore-auto-dns:                   no
ipv6.dns:                               --
ipv6.ignore-auto-dns:                   no
[root@idm ~] nmcli conn modify "System eth0" ipv4.ignore-auto-dns yes
[root@idm ~] nmcli conn modify "System eth0" ipv4.dns  "192.168.24.7 8.8.8.8"
[root@idm ~] nmcli conn show "System eth0" | grep dns:
connection.mdns:                        -1 (default)
ipv4.dns:                               192.168.24.7,8.8.8.8
ipv4.ignore-auto-dns:                   yes
ipv6.dns:                               --
ipv6.ignore-auto-dns:                   no
[root@idm ~]# systemctl restart NetworkManager
[root@idm ~]# cat /etc/resolv.conf 
# Generated by NetworkManager
search demo.redhatfsi.com
nameserver 192.168.24.7
nameserver 8.8.8.8
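
Instead of restarting the whole NetworkManager service, reactivating just the modified connection should also apply the change; a lighter-weight alternative (same connection name as above):

nmcli conn up "System eth0"

The resulting /etc/resolv.conf should look the same either way.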

Kiwi TCMS 6.9

Posted by Kiwi TCMS on May 16, 2019 01:20 PM

We're happy to announce Kiwi TCMS version 6.9! This is a small improvement and bug-fix update which introduces our first telemetry report: testing breakdown. You can explore everything at https://public.tenant.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

Docker images:

kiwitcms/kiwi       latest  a01eaeddf213    1.001 GB
kiwitcms/kiwi       6.2     7870085ad415    957.6 MB
kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955.7 MB
kiwitcms/kiwi       6.1     b559123d25b0    970.2 MB
kiwitcms/kiwi       6.0.1   87b24d94197d    970.1 MB
kiwitcms/kiwi       5.3.1   a420465852be    976.8 MB

Changes since Kiwi TCMS 6.8

Improvements

  • Update mysqlclient from 1.4.2 to 1.4.2.post1
  • Ship with prism.js so it can be used for syntax highlighting
  • Add Testing Breakdown telemetry
  • Mark more strings for translations
  • Add delete_user() function which can delete data across Postgres schemas (if the kiwitcms-tenants add-on is installed)

API

  • Remove deprecated TestCaseRun.* API methods. Use the new TestExecution.* methods introduced in v6.7. Fixes Issue #889

Bug fixes

  • Fix typos in documentation (@Prome88)
  • Fix TemplateParseError in email templates when removing test cases. On-delete email notification is now sent properly

Refactoring

  • Add more tests around TestRun/TestExecution menu permissions
  • Minor pylint fixes

Translations

Join us at OpenExpo in Madrid

Kiwi TCMS is an exhibitor at OpenExpo Europe on June 20th in Madrid. We will be hosting an info booth and two technical presentations delivered by Anton Sankov and Alex Todorov.

Next week we are going to announce 100% discount tickets for our guests at the conference. If you are interested in coming, subscribe to our newsletter and don't miss the announcement!

How to upgrade

If you are using Kiwi TCMS as a Docker container then:

cd Kiwi/
git pull
docker-compose down
docker pull kiwitcms/kiwi
docker pull centos/mariadb
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Don't forget to backup before upgrade!

WARNING: kiwitcms/kiwi:latest and docker-compose.yml will always point to the latest available version! If you have to upgrade in steps, e.g. between several intermediate releases, you have to modify the above workflow:

# starting from an older Kiwi TCMS version
docker-compose down
docker pull kiwitcms/kiwi:<next_upgrade_version>
edit docker-compose.yml to use kiwitcms/kiwi:<next_upgrade_version>
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate
# repeat until you have reached latest

Happy testing!

Announcing Ben Cotton as new Community Blog editor-in-chief

Posted by Fedora Community Blog on May 16, 2019 08:00 AM

Today, I am excited to announce Ben Cotton will take on the role of Fedora Community Blog (CommBlog) editor-in-chief starting with Fedora 30. Ben is currently the Fedora Program Manager at Red Hat. During his time in the community, Ben has served as a CommBlog editor and has done a lot of work behind the scenes to keep the Blog operating smoothly. Please join me in giving a warm welcome to Ben as he enters this new position!

The CommOps team launched the CommBlog in November 2015. When the blog launched, it was an idea: we knew the Fedora community does amazing things, but we thought we could do a better job at telling those stories. The CommBlog was one solution to this problem. Since then, the CommBlog became a hub of news specifically for Fedora contributors. Over 550 articles were published, accumulating a total of 232,020 page views to date. Our success is supported by the hundreds of Fedora contributors who have published their stories on the CommBlog. Thank you to all of our authors!

When I took on the role of editor-in-chief from Remy DeCausemaker in June 2016, I didn’t know what I was getting into. Over time, I came to appreciate the diverse community behind Fedora and the efforts of those who not only contribute to Fedora, but bring Fedora and open source to local communities around the world. The event report-style articles span five continents and countless countries, from the annual festivities of FLISoL in Latin America to Fedora Women’s Days around the world and numerous events across India.

I’m happy to pass the torch on to Ben, and I have no doubt his leadership will positively impact the future of the CommBlog. Congratulations Ben!

The post Announcing Ben Cotton as new Community Blog editor-in-chief appeared first on Fedora Community Blog.

Building Smaller Container Images

Posted by Fedora Magazine on May 16, 2019 08:00 AM

Linux containers have become a popular topic, and making sure that a container image is not bigger than it needs to be is considered a good practice. This article gives some tips on how to create smaller Fedora container images.

microdnf

Fedora’s DNF is written in Python and is designed to be extensible, with a wide range of plugins. But Fedora also has an alternative base container image which uses a smaller package manager called microdnf, written in C. To use this minimal image in a Dockerfile, the FROM line should look like this:

FROM registry.fedoraproject.org/fedora-minimal:30

This is an important saving if your image does not need typical DNF dependencies like Python, for example when building a NodeJS image.
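
To see the difference for yourself, you can pull both base images and compare their sizes with podman (the exact numbers will vary from release to release):

podman pull registry.fedoraproject.org/fedora:30
podman pull registry.fedoraproject.org/fedora-minimal:30
podman images | grep fedora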

Install and Clean up in one layer

To save space it’s important to remove repository metadata using dnf clean all or its microdnf equivalent, microdnf clean all. You should not do this in two steps, because that would store those files in one container image layer and then merely mark them for deletion in another layer. To do it properly, do the installation and cleanup in one step, like this:

FROM registry.fedoraproject.org/fedora-minimal:30 
RUN microdnf install nodejs && microdnf clean all

Modularity with microdnf

Modularity is a way to offer different versions of a stack to choose from. For example, you might want the non-LTS NodeJS version 11 for one project, the old LTS NodeJS version 8 for another, and the latest LTS NodeJS version 10 for yet another. You can specify which stream to use with a colon:

# dnf module list  
# dnf module install nodejs:8

The dnf module install command implies two commands: one that enables the stream and one that installs nodejs from it.

# dnf module enable nodejs:8 
# dnf install nodejs

Although microdnf does not offer any command related to modularity, it is possible to enable a module with a configuration file, and libdnf (which microdnf uses) seems to support modularity streams. The file looks like this:

/etc/dnf/modules.d/nodejs.module 
[nodejs]
name=nodejs
stream=8
profiles=
state=enabled

A full Dockerfile using modularity with microdnf looks like this:

FROM registry.fedoraproject.org/fedora-minimal:30 
RUN \
echo -e "[nodejs]\nname=nodejs\nstream=8\nprofiles=\nstate=enabled\n" > /etc/dnf/modules.d/nodejs.module && \
microdnf install nodejs zopfli findutils busybox && \
microdnf clean all

Multi-staged builds

In many cases you might have lots of build-time dependencies that are not needed to run the software, for example when building a Go binary, which statically links its dependencies. Multi-stage builds are an efficient way to separate the application build from the application runtime.

For example, the Dockerfile below builds confd, a Go application.

# building container 
FROM registry.fedoraproject.org/fedora-minimal AS build
RUN mkdir /go && microdnf install golang && microdnf clean all
WORKDIR /go
RUN export GOPATH=/go; CGO_ENABLED=0 go get github.com/kelseyhightower/confd

FROM registry.fedoraproject.org/fedora-minimal
WORKDIR /
COPY --from=build /go/bin/confd /usr/local/bin
CMD ["confd"]

The multi-stage build is done by adding AS after the first FROM instruction, adding a second FROM with a base container image, and then using the COPY --from= instruction to copy content from the build container to the second container.

This Dockerfile can then be built and run using podman:

$ podman build -t myconfd .
$ podman run -it myconfd
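
You can then check how small the resulting runtime image is; the intermediate build stage typically shows up as an untagged <none> image unless you tag it yourself:

podman images | grep myconfd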

Synchronizing Keystones Via the API

Posted by Adam Young on May 15, 2019 06:14 PM

When building a strategy for computing, we need to think large scale. I’ve been trying to frame the discussion in terms of a million nodes in a dozen data centers. How is OpenStack going to be able to handle this?


Requirements

Lets get a few more of the requirements recorded before tackling the solutions.

Scale

In practice today, TripleO tops out at about 600 nodes, and Keystone is limited to one deployment, so people are running separate, distinct Keystone servers for each distinct TripleO cluster.

The Cells abstraction, which should be GA in the Train release of TripleO, should raise those numbers. Assuming a single Nova cell can manage about 500 nodes, and an OpenStack cluster is composed of, say, 20 cells, that means we have about 10,000 nodes per OpenStack cluster. To get to a million nodes, we need 100 of them. Even if we raise the number of nodes managed in a single OpenStack deployment to 100,000, we are going to need to synchronize 10 of them.

Keystone is involved in all operations in OpenStack; if Keystone is not available, OpenStack is not available. Galera scales to roughly 15 nodes maximum. If there are 3 nodes at a given site, that scales to 5 sites. So it is likely that we are going to have to run more than one Keystone server.

Different Versions

We know that upgrades are painful, essential, and inevitable. We need to be able to upgrade systems without impacting overall behavior and reliability. Often this means green/blue deployments of new features, or a canary cluster that is used before a feature is rolled out everywhere. If we need to upgrade all of the Keystone servers in lock step, we can’t do that. The fact that we are going to have two or more Keystone servers running at different versions means that a database-level sync will not work; the N and N+1 versions of Keystone are going to have different schemas.

Identifiers

Each resource in Keystone has a unique identifier. For the majority of resources, the identifiers are currently generated as UUIDs. In addition, the identifiers are assigned by the system, and are not something an end user can specify when creating the resource. The theory is that this would prevent identifier squatting, where a user creates a resource with a specified ID in order to deny that ID to another user, or hijack the use of the identifier for some other reason. In practice it means that two Keystone deployments will have different identifiers for resources that should be common, such as role, project, or user groups.

This identifier skew means that to track something for audit purposes you can only correlate on the resource name. But resource names are modifiable.

Predictable Identifiers

We’ve been batting around a few ideas that would have been wonderful if Keystone had implemented them early on. The biggest is to generate identifiers based on the unique data fed into them. For example, a Federated User has its ID generated from a hash of the username and the domain ID. This works if the data fed in is immutable. If, however, the string is mutable, and is changed, then the hash no longer matches the string and the approach is not usable for synchronization. This is the case for projects, roles, groups, and non-Federated users.
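
A rough sketch of the idea from the shell, not Keystone's actual implementation (the values below are made up):

domain_id=1789d1d0dc2a4b71a0b0a42f0a9f4f2e
username=adam
printf '%s%s' "$domain_id" "$username" | sha256sum | awk '{print $1}'

Any two Keystone servers hashing the same immutable inputs would arrive at the same identifier without ever talking to each other.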

Settable Identifiers

The limiting factor for using the API to duplicate data from one Keystone system to another is the generation of the identifier. Since a new record always gets a new identifier, and the value for the identifier can only be generated, the API does not allow matching of records.

However, allowing all users to specify the identifiers when creating records would create the potential for squatting on the identifier, and also prevent synchronization.

Thus, for normal record generation, the identifiers should be generated by the system, and explicit identifier specification should be reserved for the synchronization use case only.

With the advent of System scoped roles, we can split the RBAC enforcement on the creation APIs. Normal creation should require a project or domain scoped token. Synchronization should require a system scoped token.

Synchronization

Assuming we are using a hub-and-spoke model, all changes that go into the hub need to be reflected in the spokes. This is a non-trivial problem to solve.

Persistence

The first approach might be to use a notification listener that would then generate an API call to each of the spokes upon creation. However, if implemented naively, messages will get dropped when remote systems are unavailable. In order to guarantee delivery, notifications must be kept in a queue prior to execution of the remote creates, and only removed once the creates have successfully completed. Since notifications are ephemeral, they will not be replayed if the notification listener is not available at creation time either. Thus, a process needs to sweep through the hub to regenerate any missed notifications. Since most resources do not have a creation timestamp, there is no way to filter this list for events that happened after a certain point in time.

Two Way Synchronization

When a user wants to get work done, they are going to need to make changes. Assuming that role assignments are done centrally, the only changes a user should be making in the spokes are specific to workflows inside the services at the spoke. For most cases, the user can be content with changes made at the center. However, for delegation, the user is going to want to make changes and see the effect of those changes within the scope of a given workflow; waiting for eventually consistent semantics is going to impact productivity.

Assuming the user needs to create an application credential in a spoke Keystone, the question then becomes “should we synchronize this credential with the central one.” While the immediate reaction would seem to be “no” we often find use cases that are not obvious but nevertheless essential to a broad class of users. In those cases, synchronization from spoke to hub is going to follow the same pattern as hub to spoke, with a later “trickle down” to the other spokes.

Conflict Resolution

The issue is what to do when two different systems have different definitions of a resource. This could happen due to a network partition, or due to rogue automation creating resources in two different spokes. With mutable resources, it could also happen with creation in one spoke and modification in another spoke or in the hub.

There are two pieces to conflict resolution: which agent is responsible for resolving the conflict, and which data should be considered the correct version. An example of a resolution strategy would be: a single agent performs all synchronization, driven from the hub. The hub version is considered definitive unless a spoke has been designated definitive for a specific subset of data. This strategy has the advantage of providing for local edits, but makes it more difficult to orchestrate them from the center. Contrast this with a strategy, still driven by a central agent, that always considers the hub version to be definitive. Any changes made at a spoke will be overwritten by changes in the center.

Deleting Resources

Keystone does not perform soft deletes; once a resource is gone, it is gone forever. Thus, if a resource is deleted out of one Keystone datasource, and no record of that deletion has been captured, there is no way to reconcile between that Keystone server and a separate one short of a complete list/diff of all instances of the resource. This is, obviously, a very expensive operation. It also requires the application of the conflict resolution strategy to determine whether the resource should be deleted in one server, or created in the other.

Conclusion

Synchronization between Keystones brings in all of the issues of an eventually-consistent database model. Building mechanisms into the Keystone API to support synchronization is going to be necessary, but not sufficient, to building out a scalable cloud infrastructure.

How to fix the “token” parameter not in video info for unknown reason error

Posted by Luca Ciavatta on May 15, 2019 01:59 PM

If the ‘token parameter’ issue has occurred using youtube-dl on Linux, you’re probably using an outdated version of the command. Here is how to fix it under Fedora and Ubuntu. You have tried to download video or audio from Youtube through the youtube-dl command and you got the following result: [youtube] 1234567890A: Downloading webpage [youtube] 1234567890A: Downloading video info webpage ERROR:[...]
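
The fix, in short, is updating youtube-dl; the exact command depends on how it was installed. On Fedora that would typically be the distribution package, or pip only if it was installed that way:

sudo dnf upgrade youtube-dl
sudo pip install --upgrade youtube-dl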

The post How to fix the “token” parameter not in video info for unknown reason error appeared first on CIALU.NET.

Git magic: split repository into two

Posted by Tomas Tomecek on May 15, 2019 09:00 AM

We’ve just hit this point in our project (packit) where we want to split it into two repositories: a CLI tool and a web service.

I was thinking of keeping the git history for the files I want to move to the other repository. There are a ton of tutorials and guides on how to do such a thing. This is what worked for me:

  1. Clone the original repository.

  2. Have a list of files I want to remove:

    $ git ls-files >../r
    

    Now edit ../r by removing entries from the list for files you want to keep.

  3. Use git filter-branch which will remove files from git history we don’t want in our new repo (using the ../r list):

    $ git filter-branch -f --tree-filter "cat $PWD/../r | xargs rm -rf" --prune-empty HEAD
    

That’s it, enjoy git history without files you don’t care about.
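
To double-check that the history really survived, you can look at the log of one of the files you kept (any path that remains in the repo):

git log --oneline -- path/to/kept/file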

Cockpit 194

Posted by Cockpit Project on May 15, 2019 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 194.

Firewall: Add services to a specific zone

The firewall page now supports zones.

Firewall zone list

Services can now be added to and removed from specific zones.

Adding services to zones

Additionally, dialogs on the Firewall page now show error messages.

Firewall error message

Redesigned on/off switch

This release of Cockpit features a redesigned on/off switch. Compared with the old version, the new switch is easier to use (with a bigger clickable area), uses iconography to avoid translation problems, is accessible, and is keyboard-navigable. The new design corresponds to the design of the switch from PatternFly 4.

Switch On

Switch On

The switch also features a native-like keyboard focus indicator: a blue glow for Chrome/WebKit-based browsers and a dotted outline for Firefox.

Switch with keyboard focus, Epiphany

Switch with keyboard focus, Firefox

Try it out

Cockpit 194 is available now:

Senior Job in Red Hat graphics team

Posted by Dave Airlie on May 14, 2019 09:07 PM
We have a job in our team, it's a pretty senior role, so we definitely want people with lots of experience. Great place to work, ignore any possible future mergers :-)

https://global-redhat.icims.com/jobs/68911/principal-software-engineer/job?mobile=false&width=1526&height=500&bga=true&needsRedirect=false&jan1offset=600&jun1offset=600

Untitled Post

Posted by Zach Oglesby on May 14, 2019 06:40 PM

Currently reading: The Hunt for Red October by Tom Clancy 📚

Disabling Fedora ABRT notification

Posted by Robbi Nespu on May 14, 2019 06:00 PM

I don't like ABRT because it spams my Fedora system notifications. It kind of disturbs me, so because the MCE ABRT notifications keep coming, I suggest setting OnlyFatalMCE = yes in /etc/abrt/plugins/oops.conf (you need root access to modify this file).

If needed, try disabling some of the ABRT related services. You can find them with this command

$ sudo systemctl -t service | grep abrt

for example, I got 4 systemd services:

abrt-journal-core.service                                                                 loaded active running Creates ABRT problems from coredumpctl messages                              
abrt-oops.service                                                                         loaded active running ABRT kernel log watcher                                                      
abrt-xorg.service                                                                         loaded active running ABRT Xorg log watcher                                                        
abrtd.service        

Now let's stop them (for the current session) and disable them (for the next reboot)

$ sudo systemctl stop abrt-journal-core.service 
$ sudo systemctl disable  abrt-journal-core.service

$ sudo systemctl stop abrt-oops.service
$ sudo systemctl disable abrt-oops.service

$ sudo systemctl stop abrt-xorg.service
$ sudo systemctl disable abrt-xorg.service

$ sudo systemctl stop abrtd.service
$ sudo systemctl disable abrtd.service
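
If you prefer, the stop-and-disable dance can be done in one pass per service with systemctl disable --now, which stops the unit and disables it at the same time; a sketch over the same four services:

for s in abrt-journal-core abrt-oops abrt-xorg abrtd; do
    sudo systemctl disable --now "$s.service"
done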

Now reboot your workstation. You should not get any ABRT notifications anymore.

Note: These are my current steps, they may be incorrect :stuck_out_tongue: and I do not encourage you to disable automatic problem reporting (ABRT) for no reason, because ABRT is a set of tools to help you detect and report crashes. Its main purpose is to ease the process of reporting an issue and finding a solution.

The Wayland Itches project

Posted by Hans de Goede on May 14, 2019 02:16 PM
Now that GNOME3 on Wayland is the default in Fedora I've been trying to use this as my default desktop, but until recently I've kept falling back to GNOME3 on Xorg because of various small issues.

To fix this I've switched to using GNOME3 on Wayland as day to day desktop now and I'm working on fixing any issues which this causes as I hit them, aka "The Wayland Itches project". So far I've hit and fixed the following issues:

1. TopIcons

The TopIcons extension, which I depend on for some of my workflow, was not working well under Wayland with GNOME-3.30: only the top row of icons was clickable. This was fixed in GNOME-3.32, but with GNOME-3.32 using TopIcons was causing gnome-shell to go into a loop, leading to a very high CPU load. The day I wanted to start looking into fixing this I was chatting with Carlos Garnacho and he pointed out to me that this was fixed a couple of days ago in gnome-shell. The fix for this is in gnome-shell 3.32.2.

2. Hotkeys/desktop shortcuts not working in VirtualBox Virtual Machines

When running a VirtualBox VM under GNOME3 on Wayland, hotkeys such as alt+tab go to the GNOME3 desktop, rather than being forwarded to the VM as happens under Xorg. This can be fixed by changing 2 settings:

  gsettings set org.gnome.mutter.wayland xwayland-allow-grabs true
  gsettings set org.gnome.mutter.wayland xwayland-grab-access-rules "['VirtualBox Machine']"

This is a decent workaround, but we want things to "just work" of course, so we have been working on some changes to make this just work in the next GNOME version.

3. firefox-wayland

I've been also trying to use firefox-wayland as my day to day browser. This has led to me filing three Firefox bugs, and I've switched back to regular firefox (x11) for now.


If you have any Wayland Itches yourself, please drop me an email at hdegoede@redhat.com explaining them in as much detail as you can and I will see what I can do. Note that I typically get a lot of emails when asking for feedback like this, so I cannot promise that I will reply to every email; but I will be reading them all.

Running Fedora CoreOS nightly ISO and qcow2 images in libvirt

Posted by Sinny Kumari on May 13, 2019 02:58 PM

Fedora CoreOS (FCOS) is the upcoming OS which combines the best of both Fedora Atomic Host and Container Linux. There is no official release yet, but we do have nightly images available to try. There are various image artifacts being produced, but in this blog we will focus on installing and running the ISO and qcow2 images.

Download latest nightly image

FCOS nightly images are built using a Jenkins pipeline in CentOS CI. Download the latest ISO or qcow2 nightly image, whichever you like.

wget https://ci.centos.org/artifacts/fedora-coreos/prod/builds/latest/fedora-coreos-30.113-qemu.qcow2.gz https://ci.centos.org/artifacts/fedora-coreos/prod/builds/latest/fedora-coreos-30.113-installer.iso

While writing this blog, the latest nightly ISO image name is fedora-coreos-30.113-installer.iso and the qcow2 image name is fedora-coreos-30.113-qemu.qcow2.gz. I will be using these names in this blog, but when you try it, replace them with the names of the images you downloaded.

Ignition Config

Similar to Container Linux, FCOS also requires an Ignition config to perform initial configuration during first boot. We will create the fcos.ign Ignition config file, which creates a user test with the password test and an authorized ssh key, and adds the user test to the wheel group to provide sudo access.

$ cat fcos.ign
{
  "ignition": {
     "version": "3.0.0"
  },

  "passwd": {
    "users": [
      {
        "name": "test",
        "passwordHash": "$y$j9T$dahelkQ2GUy2EfzW4Qu/m/$eApizQ.vHFyGJRel.1wNbKd8PLZ5soT0vBiIp4ieBM1",
        "sshAuthorizedKeys": [
          "ssh-rsa your_public_ssh_key"
        ],
        "groups": [ "wheel" ]
      }
    ]
  }
}

 

passwordHash was created using mkpasswd. Add your public ssh key(s) which you want to add in sshAuthorizedKeys.

Running qcow2 image

The qcow2 image which we downloaded is in compressed form, so let’s first extract it:

gunzip fedora-coreos-30.113-qemu.qcow2.gz

This will give us qcow2 image fedora-coreos-30.113-qemu.qcow2.

qemu-system-x86_64 -accel kvm -name fcos-qcow -m 2048 -cpu host -smp 2 -nographic -netdev user,id=eth0,hostname=coreos -device virtio-net-pci,netdev=eth0 -drive file=/path/to/fedora-coreos-30.113-qemu.qcow2 -fw_cfg name=opt/com.coreos/config,file=/path/to/fcos.ign

On the login prompt, enter test as the user name and test as the password, as we added in the Ignition config, and you should be inside a FCOS system!

Installing and Running ISO image

The ISO image fedora-coreos-30.113-installer.iso contains initramfs.img and vmlinuz to boot the kernel. We will also need the BIOS or UEFI metal image (for example fedora-coreos-30.113-metal-bios.raw.gz) from the latest nightly repo, depending on what kind of installation you want. For our example, we will download the fedora-coreos-30.113-metal-bios.raw.gz image. The installer will need the metal-bios image and the Ignition config during the install process, which we will pass as kernel command args. Let’s host the fedora-coreos-30.113-metal-bios.raw.gz and fcos.ign files locally.
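
One simple way to serve both files is Python's built-in web server, run from the directory that contains them (the port is arbitrary; just make sure the VM can reach the host on it):

python3 -m http.server 8000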

We can use either virt-install or qemu directly:

With virt-install

virt-install --name fcos-iso --ram 4500 --vcpus 2 --disk pool=vm,size=10 --accelerate --cdrom /path/to/fedora-coreos-30.113-installer.iso --network default

With qemu:

First, create a disk image for the installation:

qemu-img create -f qcow2 fcos.qcow2 10G

Now, run:

qemu-system-x86_64 -accel kvm -name fcos-iso -m 2048 -cpu host -smp 2 -netdev user,id=eth0,hostname=coreos -device virtio-net-pci,netdev=eth0 -drive file=/path/to/fcos.qcow2,format=qcow2 -cdrom /path/to/fedora-coreos-30.113-installer.iso

 

When the ISO boots, in the grub menu use <TAB> to add additional args to the kernel command line. They are coreos.inst.install_dev (installation device), coreos.inst.image_url (installer image URL) and coreos.inst.ignition_url (Ignition config URL). It looks something like below:
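
For illustration only (the target device, host IP and port are assumptions matching the local web server above, and the exact argument syntax may differ between coreos-installer versions):

coreos.inst.install_dev=/dev/vda coreos.inst.image_url=http://192.168.122.1:8000/fedora-coreos-30.113-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.122.1:8000/fcos.ign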


Once the additional parameters are added, press <ENTER> to continue the installation. Once installation completes successfully and the VM reboots, it will prompt for user login, which will be the same as mentioned in the qcow2 section above.

Additional information, along with instructions for PXE boot, is available in the coreos-installer README.

Conclusion

To stay one step ahead, you can also run the latest FCOS directly from coreos-assembler, whose instructions are available in its README.

FCOS is in a development state and new features are getting added every day to make it better. Give it a try and get a feel for what early FCOS looks like!

 

Donating 5 minutes of your time to help the LVFS

Posted by Richard Hughes on May 13, 2019 11:53 AM

For about every 250 bug reports I receive, I get an email offering to help. Most of the time the person offering help isn’t capable of diving right into the trickiest parts of the code and just wanted to make my life easier. Now I have a task that almost anyone can help with…

For the next version of the LVFS we deploy we’re going to be showing what was changed between each firmware version. Rather than just stating the firmware has changed from SHA1:DEAD to SHA1:BEEF and some high level update description provided by the vendor, we can show the interested user the UEFI modules that changed. I’m still working on the feature and without more data it’s kinda, well, dull. Before I can make the feature actually useful to anyone except a BIOS engineer, I need some help finding out information about the various modules.

In most cases it’s simply googling the name of the module and writing 1-2 lines of a summary about the module. In some cases the module isn’t documented at all, and that’s fine — I can go back to the vendors and ask them for more details about the few we can’t do ourselves. What I can’t do is ask them to explain all 150 modules in a specific firmware release, and I don’t scale to ~2000 Google queries. With the help of EDK2 I’ve already done 213 myself but now I’ve run out of puff.

So, if you have a spare 5 minutes I’d really appreciate some help. The shared spreadsheet is here, and any input is useful. Thanks!

Manage business documents with OpenAS2 on Fedora

Posted by Fedora Magazine on May 13, 2019 08:00 AM

Business documents often require special handling. Enter Electronic Document Interchange, or EDI. EDI is more than simply transferring files using email or http (or ftp), because these are documents like orders and invoices. When you send an invoice, you want to be sure that:

1. It goes to the right destination, and is not intercepted by competitors.
2. Your invoice cannot be forged by a 3rd party.
3. Your customer can’t claim in court that they never got the invoice.

The first two goals can be accomplished by HTTPS or email with S/MIME, and in some situations, a simple HTTPS POST to a web API is sufficient. What EDI adds is the last part.

This article does not cover the messy topic of formats for the files exchanged. Even when using a standardized format like ANSI or EDIFACT, it is ultimately up to the business partners. It is not uncommon for business partners to use an ad-hoc CSV file format. This article shows you how to configure Fedora to send and receive in an EDI setup.

Centralized EDI

The traditional solution is to use a Value Added Network, or VAN. The VAN is a central hub that transfers documents between their customers. Most importantly, it keeps a secure record of the documents exchanged that can be used as evidence in disputes. The VAN can use different transfer protocols for each of its customers.

AS Protocols and MDN

The AS protocols are a specification for adding a digital signature with optional encryption to an electronic document. What it adds over HTTPS or S/MIME is the Message Disposition Notification, or MDN. The MDN is a signed and dated response that says, in essence, “We got your invoice.” It uses a secure hash to identify the specific document received. This addresses point #3 without involving a third party.

The AS2 protocol uses HTTP or HTTPS for transport. Other AS protocols target FTP and SMTP. AS2 is used by companies big and small to avoid depending on (and paying) a VAN.

OpenAS2

OpenAS2 is an open source Java implementation of the AS2 protocol. It has been available in Fedora since Fedora 28, and is installed with:

$ sudo dnf install openas2
$ cd /etc/openas2

Configuration is done with a text editor, and the config files are in XML. The first order of business before starting OpenAS2 is to change the factory passwords.

Edit /etc/openas2/config.xml and search for ChangeMe. Change those passwords. The default password on the certificate store is testas2, but that doesn’t matter much as anyone who can read the certificate store can read config.xml and get the password.

What to share with AS2 partners

There are 3 things you will exchange with an AS2 peer.

AS2 ID

Don’t bother looking up the official AS2 standard for legal AS2 IDs. While OpenAS2 implements the standard, your partners will likely be using a proprietary product which doesn’t. While AS2 allows much longer IDs, many implementations break with more than 16 characters. Using otherwise legal AS2 ID chars like ‘:’ that can appear as path separators on a proprietary OS is also a problem. Restrict your AS2 ID to upper and lower case alpha, digits, and ‘_’ with no more than 16 characters.

SSL certificate

For real use, you will want to generate a certificate with SHA256 and RSA. OpenAS2 ships with two factory certs to play with. Don’t use these for anything real, obviously. The certificate file is in PKCS12 format. Java ships with keytool which can maintain your PKCS12 “keystore,” as Java calls it. This article skips using openssl to generate keys and certificates. Simply note that sudo keytool -list -keystore as2_certs.p12 will list the two factory practice certs.
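
If you do want to create your own key with keytool rather than openssl, a minimal sketch might look like the following; the alias, distinguished name and validity are placeholders, and keytool will prompt for the keystore password:

sudo keytool -genkeypair -alias myas2 -keyalg RSA -keysize 2048 -sigalg SHA256withRSA -validity 3650 -dname "CN=MyCompanyAS2" -keystore as2_certs.p12 -storetype PKCS12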

AS2 URL

This is an HTTP URL that will access your OpenAS2 instance. HTTPS is also supported, but is redundant. To use it you have to uncomment the https module configuration in config.xml, and supply a certificate signed by a public CA. This requires another article and is entirely unnecessary here.

By default, OpenAS2 listens on 10080 for HTTP and 10443 for HTTPS. OpenAS2 can talk to itself, so it ships with two partnerships using http://localhost:10080 as the AS2 URL. If you don’t find this a convincing demo, and can install a second instance (on a VM, for instance), you can use private IPs for the AS2 URLs. Or install Cjdns to get IPv6 mesh addresses that can be used anywhere, resulting in AS2 URLs like http://[fcbf:fc54:e597:7354:8250:2b2e:95e6:d6ba]:10080.

Most businesses will also want a list of IPs to add to their firewall. This is actually bad practice. An AS2 server has the same security risk as a web server, meaning you should isolate it in a VM or container. Also, the difficulty of keeping mutual lists of IPs up to date grows with the list of partners. The AS2 server rejects requests not signed by a configured partner.

OpenAS2 Partners

With that in mind, open partnerships.xml in your editor. At the top is a list of “partners.” Each partner has a name (referenced by the partnerships below as “sender” or “receiver”), AS2 ID, certificate, and email. You need a partner definition for yourself and those you exchange documents with. You can define multiple partners for yourself. OpenAS2 ships with two partners, OpenAS2A and OpenAS2B, which you’ll use to send a test document.

OpenAS2 Partnerships

Next is a list of “partnerships,” one for each direction. Each partnership configuration includes the sender, receiver, and the AS2 URL used to send the documents. By default, partnerships use synchronous MDN. The MDN is returned on the same HTTP transaction. You could uncomment the as2_receipt_option for asynchronous MDN, which is sent some time later. Use synchronous MDN whenever possible, as tracking pending MDNs adds complexity to your application.

The other partnership options select encryption, signature hash, and other protocol options. A fully implemented AS2 receiver can handle any combination of options, but AS2 partners may have incomplete implementations or policy requirements. For example, DES3 is a comparatively weak encryption algorithm, and may not be acceptable. It is the default because it is almost universally implemented.

If you went to the trouble to set up a second physical or virtual machine for this test, designate one as OpenAS2A and the other as OpenAS2B. Modify the as2_url on the OpenAS2A-to-OpenAS2B partnership to use the IP (or hostname) of OpenAS2B, and vice versa for the OpenAS2B-to-OpenAS2A partnership. Unless they are using the FedoraWorkstation firewall profile, on both machines you’ll need:

# sudo firewall-cmd --zone=public --add-port=10080/tcp

Now start the openas2 service (on both machines if needed):

# sudo systemctl start openas2

Resetting the MDN password

This initializes the MDN log database with the factory password, not the one you changed it to. This is a packaging bug to be fixed in the next release. To avoid frustration, here’s how to change the h2 database password:

$ sudo systemctl stop openas2
$ cat >h2passwd <<'DONE'
#!/bin/bash
AS2DIR="/var/lib/openas2"
java -cp "$AS2DIR"/lib/h2* org.h2.tools.Shell \
-url jdbc:h2:"$AS2DIR"/db/openas2 \
-user sa -password "$1" <<EOF
alter user sa set password '$2';
exit
EOF
DONE
$ sudo sh h2passwd ChangeMe yournewpasswordsetabove
$ sudo systemctl start openas2

Testing the setup

With that out of the way, let’s send a document. Assuming you are on OpenAS2A machine:

$ cat >testdoc <<'DONE'
This is not a real EDI format, but is nevertheless a document.
DONE
$ sudo chown openas2 testdoc
$ sudo mv testdoc /var/spool/openas2/toOpenAS2B
$ sudo journalctl -f -u openas2
... log output of sending file, Control-C to stop following log
^C

OpenAS2 does not send a document until it is writable by the openas2 user or group. As a consequence, your actual business application will copy, or generate in place, the document. Then it changes the group or permissions to send it on its way, to avoid sending a partial document.

Now, on the OpenAS2B machine, /var/spool/openas2/OpenAS2A_OID-OpenAS2B_OID/inbox shows the message received. That should get you started!


Photo by Beatriz Pérez Moya on Unsplash.

[Howto] Using include directive with ssh client configuration

Posted by Roland Wolters on May 13, 2019 03:27 AM
<figure class="alignright is-resized"></figure>

An SSH client configuration makes accessing servers much easier and more convenient. Until recently the configuration was done in one single file, which could be problematic. But newer versions support includes to read configuration from multiple places.

SSH is the default way to access servers remotely – Linux and other UNIX systems, and since recently Windows as well.

One feature of the OpenSSH client is to configure often used parameters for SSH connections in a central config file, ~/.ssh/config. This comes in especially handy when multiple remote servers require different parameters: varying ports, other user names, different SSH keys, and so on. It also provides the possibility to define aliases for host names to avoid the necessity to type in the FQDN each time. Since such a configuration is directly read by the SSH client, other tools which are using the SSH client in the background – like Ansible – can benefit from the configuration as well.

A typical configuration of such a config file can look like this:

Host webapp
    HostName webapp.example.com
    User mycorporatelogin
    IdentityFile ~/.ssh/id_corporate
    IdentitiesOnly yes
Host github
    HostName github.com
    User mygithublogin
    IdentityFile ~/.ssh/id_ed25519
Host gitlab
    HostName gitlab.corporate.com
    User mycorporatelogin
    IdentityFile ~/.ssh/id_corporate
Host myserver
    HostName myserver.example.net
    Port 1234
    User myuser
    IdentityFile ~/.ssh/id_ed25519
Host aws-dev
    HostName 12.24.33.66
    User ec2-user
    IdentityFile ~/.ssh/aws.pem
    IdentitiesOnly yes
    StrictHostKeyChecking no
Host azure-prod
    HostName 4.81.234.19
    User azure-prod
    IdentityFile ~/.ssh/azure_ed25519

While this is very handy and helps a lot to maintain sanity even with very different and strange SSH configurations, a single huge file is hard to manage.

Cloud environments for example change constantly, so it makes sense to update/rebuild the configuration regularly. There are scripts out there, but they either just overwrite existing configuration, or work entirely on an extra file which has to be referenced in each SSH client call with ssh -F .ssh/aws-config, or they require marking sections in the .ssh/config like "### AZURE-SSH-CONFIG BEGIN ###". All attempts are either clumsy or error prone.

Another use case is where parts of the SSH configuration are managed by configuration management systems or by software packages, for example by a company – again, that requires changes to a single file and might alter or remove existing configuration for your other services and servers. After all, it is not uncommon to use your more-or-less private Github account for your company work, so that you have mixed entries in your .ssh/config.

The underlying problem of managing more complex software configurations in single files is not unique to OpenSSH, but more or less common across many software stacks which are configured in text files. Recently it became more and more common to write software in a way that configuration is not read from a single file, but that all files from a certain directory are read in. Examples for this include:

  • sudo with the directory /etc/sudoers.d/
  • Apache with /etc/httpd/conf.d
  • Nginx with /etc/nginx/conf.d/
  • Systemd with /etc/systemd/system.conf.d/*
  • and so on…

Initially such an approach was not possible with the SSH client configuration in OpenSSH, but there was a bug reported, even including a patch, quite some years ago. Luckily, almost three years ago OpenSSH version 7.3 was released and that version did come with the solution:

* ssh(1): Add an Include directive for ssh_config(5) files.


https://www.openssh.com/txt/release-7.3

So now it is possible to add one or even multiple files and directories from where additional configuration can be loaded.

Include

Include the specified configuration file(s). Multiple pathnames may be specified and each pathname may contain glob(7) wildcards and, for user configurations, shell-like `~’ references to user home directories. Files without absolute paths are assumed to be in ~/.ssh if included in a user configuration file or /etc/ssh if included from the system configuration file. Include directive may appear inside a Match or Host block to perform conditional inclusion.

https://www.freebsd.org/cgi/man.cgi?ssh_config(5)

The following .ssh/config file defines a sub-directory from where additional configuration can be read in:

$ cat ~/.ssh/config 
Include ~/.ssh/conf.d/*

Underneath ~/.ssh/conf.d there can be additional files, each containing one or more host definitions:

$ ls ~/.ssh/conf.d/
corporate.conf
github.conf
myserver.conf
aws.conf
azure.conf
$ cat ~/.ssh/conf.d/aws.conf
Host aws-dev
    HostName 12.24.33.66
    User ec2-user
    IdentityFile ~/.ssh/aws.pem
    IdentitiesOnly yes
    StrictHostKeyChecking no

This feature made managing SSH configuration for me much easier, and I only have few use cases and mainly require it to keep a simple overview over things. For more flexible (aka cloud based) setups this is crucial and can make things way easier.

Note that the additional config files should only contain host definitions! General SSH configuration should be inside ~/.ssh/config and should come before the include directive: any configuration provided after a “Host” keyword is interpreted as part of that exact host definition, until the next host block or until the next “Match” keyword.
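
A small sketch of that ordering, with ServerAliveInterval standing in for whatever global options you actually use:

$ cat ~/.ssh/config
ServerAliveInterval 60
Include ~/.ssh/conf.d/*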

Episode 145 - What do security and fire have in common?

Posted by Open Source Security Podcast on May 13, 2019 12:42 AM
Josh and Kurt talk about fire. We discuss the history of fire prevention and how it mirrors many of things we see in security. There are lessons there for us, we just hope it doesn't take 2000 years like it did for proper fire prevention to catch on.



Show Notes


Contribute at the Fedora Test Week for kernel 5.1

Posted by Fedora Magazine on May 12, 2019 05:29 PM

The kernel team is working on final integration for kernel 5.1. This version was just recently released, and will arrive soon in Fedora. This version has many security fixes included. As a result, the Fedora kernel and QA teams have organized a test week from Monday, May 13, 2019 through Saturday, May 18, 2019. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test day/week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

Hardware Random Number Generator PRG320

Posted by Fabian Affolter on May 12, 2019 12:48 PM

/dev/urandom is a pseudo random number generator used by Fedora and other Linux-based operating systems. This little Python snippet will get you some data:

import binascii
import os

data = os.urandom(10)

print(data)
print(str(binascii.hexlify(bytearray(data))))
print(int(bytearray(data).hex(), 16))

/dev/random and /dev/urandom are the usual sources of cryptographic random numbers. The quality of the produced data may suffer if there is not enough entropy, or under other circumstances such as running in a virtual machine.

Some of my co-workers at audius take it really seriously when it comes to random numbers. The team that is working on HSM (Hardware Security Module) appliances is especially keen to get cryptographically secure random numbers.

Don’t get me wrong, they don’t have to deal with missing entropy or randomness on virtual machines, as all their stuff is designed to run on physical hardware, but they wanted more.

The solution to this issue is to use a hardware device that generates the random numbers. Usually they use the PRG320-51 by IBB Ingenieurbüro Bergmann, but for playing around we also have access to some PRG320-4 units. They work the same way, only the type of connection is different. The PRG320-4 has a USB Type A connector.

    <figure class="wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
    <iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen" frameborder="0" height="360" src="https://www.youtube.com/embed/PdNtLP_b2fM?feature=oembed" width="640"></iframe>
    </figure>

On Fedora the device is recognized immediately. It will show up as "Future Technology Devices International".

$ lsusb
[...]
Bus 001 Device 033: ID 0403:6001 Future Technology Devices International, Ltd ...
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

The numbers are generated and accessible over a serial interface with the settings 921600 8N1. Connecting with minicom to /dev/ttyUSB0 will give you the raw output. That is not very handy if you want to feed the output into a tool to generate keys or whatever.
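
Without minicom, a rough way to grab a few raw bytes from the device is to set the serial parameters with stty and read directly from the device node (921600 baud, 8 data bits, no parity, 1 stop bit; the byte count is arbitrary):

$ stty -F /dev/ttyUSB0 921600 cs8 -parenb -cstopb raw
$ head -c 16 /dev/ttyUSB0 | od -An -tx1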

The little script below will get you one number.

    <script src="https://gist.github.com/fabaff/461bae72a0558f34eee1e405c76a147a.js"></script>

The output looks like:

{
  "hex": "b'f99fe85ebefe77776aa42fe1e...3740'",
  "int": 2992748900635379...5312,
  "raw": "b\"\\xf9\\x9f\\xe8^\\xbe\...x877@\""
}

For further details check the rng-tools. rngd is a daemon that can help you check and feed random data from a hardware device into the kernel entropy pool.

FPgM report: 2019-19

Posted by Fedora Community Blog on May 10, 2019 07:17 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week. Elections nominations are open through May 22.

I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

Announcements

Help wanted

Upcoming meetings

Fedora 31 Status

Changes

Announced

Submitted to FESCo

Approved by FESCo

Rejected by FESCo

Fedora 32 Status

Changes

Approved by FESCo

The post FPgM report: 2019-19 appeared first on Fedora Community Blog.

    Check storage performance with dd

    Posted by Fedora Magazine on May 10, 2019 08:00 AM

    This article includes some example commands to show you how to get a rough estimate of hard drive and RAID array performance using the dd command. Accurate measurements would have to take into account things like write amplification and system call overhead, which this guide does not. For a tool that might give more accurate results, you might want to consider using hdparm.

    To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. WARNING: The write tests will destroy any data on the block devices against which they are run. Do not run them against any device that contains data you want to keep!

    Four tests

    Below are four example dd commands that can be used to test the performance of a block device:

    1. One process reading from $MY_DISK:
      # dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
    2. One process writing to $MY_DISK:
      # dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
    3. Two processes reading concurrently from $MY_DISK:
      # (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
    4. Two processes writing concurrently to $MY_DISK:
      # (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)

    – The iflag=nocache and oflag=direct parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from RAM rather than the hard drive.

    – The values for the bs and count parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware.

    – The null and zero devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests.

    – The skip=200 parameter on the second dd command in the concurrent read tests ensures that the two copies of dd are operating on different areas of the hard drive. For the concurrent write tests the equivalent parameter is seek=200, because skip offsets the input (here /dev/zero) while seek offsets the output (the disk).

    16 examples

    Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices:

    1. MY_DISK=/dev/sda2 (used in examples 1-X)
    2. MY_DISK=/dev/sdb2 (used in examples 2-X)
    3. MY_DISK=/dev/md/stripped (used in examples 3-X)
    4. MY_DISK=/dev/md/mirrored (used in examples 4-X)

    A video demonstration of these tests being run on a PC is provided at the end of this guide.

    Begin by putting your computer into rescue mode to reduce the chances that disk I/O from background services might randomly affect your test results. WARNING: This will shut down all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your root password to get into rescue mode. The passwd command, when run as the root user, will prompt you to (re)set your root account password.

    $ sudo -i
    # passwd
    # setenforce 0
    # systemctl rescue

    You might also want to temporarily disable logging to disk:

    # sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf
    # systemctl restart systemd-journald.service

    If you have a swap device, it can be temporarily disabled and used to perform the following tests:

    # swapoff -a
    # MY_DEVS=$(mdadm --detail /dev/md/swap | grep active | grep -o "/dev/sd.*")
    # mdadm --stop /dev/md/swap
    # mdadm --zero-superblock $MY_DEVS

    Example 1-1 (reading from sda)

    # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
    # dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s

    Example 1-2 (writing to sda)

    # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
    # dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s

    Example 1-3 (reading concurrently from sda)

    # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
    # (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 3.42875 s, 61.2 MB/s
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s

    Example 1-4 (writing concurrently to sda)

    # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
    # (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 3.2435 s, 64.7 MB/s
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s

    Example 2-1 (reading from sdb)

    # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
    # dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s

    Example 2-2 (writing to sdb)

    # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
    # dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s

    Example 2-3 (reading concurrently from sdb)

    # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
    # (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 3.52808 s, 59.4 MB/s
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s

    Example 2-4 (writing concurrently to sdb)

    # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
    # (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 3.7841 s, 55.4 MB/s
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s

    Example 3-1 (reading from RAID0)

    # mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS
    # MY_DISK=/dev/md/stripped
    # dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s

    Example 3-2 (writing to RAID0)

    # MY_DISK=/dev/md/stripped
    # dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 0.823648 s, 255 MB/s

    Example 3-3 (reading concurrently from RAID0)

    # MY_DISK=/dev/md/stripped
    # (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.31025 s, 160 MB/s
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s

    Example 3-4 (writing concurrently to RAID0)

    # MY_DISK=/dev/md/stripped
    # (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.65026 s, 127 MB/s
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s

    Example 4-1 (reading from RAID1)

    # mdadm --stop /dev/md/stripped
    # mdadm --create /dev/md/mirrored --homehost=any --metadata=1.0 --level=1 --raid-devices=2 --assume-clean $MY_DEVS
    # MY_DISK=/dev/md/mirrored
    # dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s

    Example 4-2 (writing to RAID1)

    # MY_DISK=/dev/md/mirrored
    # dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s

    Example 4-3 (reading concurrently from RAID1)

    # MY_DISK=/dev/md/mirrored
    # (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.67171 s, 125 MB/s
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s

    Example 4-4 (writing concurrently to RAID1)

    # MY_DISK=/dev/md/mirrored
    # (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 4.09666 s, 51.2 MB/s
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s

    Restore your swap device and journald configuration

    # mdadm --stop /dev/md/stripped /dev/md/mirrored
    # mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 $MY_DEVS
    # mkswap /dev/md/swap
    # swapon -a
    # mv /etc/systemd/journald.conf.bak /etc/systemd/journald.conf
    # systemctl restart systemd-journald.service
    # reboot

    Interpreting the results

    Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s.

    Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets about half the drive’s bandwidth (60 MB/s).

    The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data striping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive is able to perform on its own. The trade-off is that you are twice as likely to lose everything because each drive only contains half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal) but it would be thrice as likely to suffer a catastrophic failure.

    The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk except for the case where multiple processes are concurrently reading (example 4-3). In the case of multiple processes reading, the performance of the RAID1 array is similar to that of the RAID0 array. This means that you will see a performance benefit with RAID1, but only when processes are reading concurrently: for example, when a background process accesses a large number of files while you are using a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost if a drive fails.

    Video demo

    Video: Testing storage throughput using dd (https://www.youtube.com/watch?v=wbLX239-ysQ)

    Troubleshooting

    If the above tests aren’t performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in Self-Monitoring, Analysis and Reporting Technology (SMART). If your drive supports it, the smartctl command can be used to query your hard drive for its internal statistics:

    # smartctl --health /dev/sda
    # smartctl --log=error /dev/sda
    # smartctl -x /dev/sda

    Another way that you might be able to tune your PC for better performance is by changing your I/O scheduler. Linux systems support several I/O schedulers and the current default for Fedora systems is the multiqueue variant of the deadline scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions.

    To view which I/O scheduler your drives are using, issue the following command:

    $ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done

    You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block/<device name>/queue/scheduler file:

    # echo bfq > /sys/block/sda/queue/scheduler

    You can make your changes permanent by creating a udev rule for your drive. The following example shows how to create a udev rule that will set all rotational drives to use the BFQ I/O scheduler:

    # cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
    END

    Here is another example that sets all solid-state drives to use the “none” (no-op) I/O scheduler:

    # cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
    END
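    As a side note, udev normally applies rules on the next add/change event for a device. If you want the new rules to take effect on already-present disks without rebooting, something like the following should work:

    # udevadm control --reload
    # udevadm trigger --subsystem-match=block --action=change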

    Changing your I/O scheduler won’t affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering.


    Photo by James Donovan on Unsplash.

    glibc 2.28 cleanup – no more memory leaks

    Posted by Mark J. Wielaard on May 09, 2019 10:23 AM

    glibc already released 2.29, but I was still on a much older version and hadn’t noticed 2.28 (which is the version that is in RHEL8) has a really nice fix for people who obsess about memory leaks.

    When running valgrind to track memory leaks you might have noticed that there are sometimes some glibc data structures left.

    These are often harmless, small things that are needed during the whole lifetime of the process. So it is normally fine not to explicitly clean them up, since the memory is reclaimed anyway when the process dies.

    But when tracking memory leaks they are slightly annoying. When you want to be sure you don’t have any leaks in your program it is distracting to have to ignore and filter out some harmless leaks.

    glibc already had a mechanism to help memory trackers like valgrind memcheck. If you call the secret __libc_freeres function from the last exiting thread, glibc would dutifully free all memory. Which is what valgrind does for you (unless you want to see all the memory left and use --run-libc-freeres=no).
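    For reference, a typical memcheck run that surfaces such leftover blocks looks roughly like this (a sketch; ./myprog stands in for your own binary):

    $ valgrind --leak-check=full --show-leak-kinds=all ./myprog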

    But it didn’t work for memory allocated by pthreads (libpthread.so) or dlopen (libdl.so). So sometimes you would still see some stray “garbage” left even if you were sure to have released all memory in your own program.

    Carlos O’Donell has fixed this:

    Bug 23329 – The __libc_freeres infrastructure is not properly run across DSO boundaries.

    So upgrade to glibc 2.28+ and really get those memory leaks to zero!

    All heap blocks were freed -- no leaks are possible

    Fedora rawhide – fixed bugs 2019/04

    Posted by Filipe Rosset on May 09, 2019 04:04 AM

    Updated ansifilter to new 2.13 upstream version
    https://src.fedoraproject.org/rpms/ansifilter/c/a30532543f9b81fce139dabb11cfdb2d7b02ffc8?branch=master

    Bug 1672124 – 64tass-1.54.1900 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1672124

    Bug 1694427 – bindfs-1.13.11 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1694427

    Updated highlight to new 3.50 upstream version
    https://src.fedoraproject.org/rpms/highlight/c/ae8cfa138f249d4afd9ca6709a9f46940dff2668?branch=master

    Bug 1684723 – dgit-8.4 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1684723

    Bug 1674926 – feh-3.1.3 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1674926

    Bug 1670216 – worker-3.15.4 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1670216

    Bug 1676217 – worker: FTBFS in Fedora rawhide/f30
    https://bugzilla.redhat.com/show_bug.cgi?id=1676217

    Bug 1697143 – python-empy-3.3.4 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1697143

    Bug 1698572 – homebank-5.2.4 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1698572

    Bug 1699446 – vrq-1.0.132 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1699446

    Bug 1603942 – f2fs-tools: FTBFS in Fedora rawhide
    https://bugzilla.redhat.com/show_bug.cgi?id=1603942

    Bug 1674870 – f2fs-tools: FTBFS in Fedora rawhide/f30
    https://bugzilla.redhat.com/show_bug.cgi?id=1674870

    Bug 1641276 – [abrt] sakura: sakura_child_exited(): sakura killed by SIGSEGV
    https://bugzilla.redhat.com/show_bug.cgi?id=1641276

    Bug 1698150 – Sakura terminal segfaults on original tab close
    https://bugzilla.redhat.com/show_bug.cgi?id=1698150

    Bug 1700049 – aime-8.20190416 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1700049

    Bug 1566287 – xmedcon-0.15.0 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1566287

    Bug 1674965 – gif2png: FTBFS in Fedora rawhide/f30
    https://bugzilla.redhat.com/show_bug.cgi?id=1674965

    Bug 1674653 – apvlv: FTBFS in Fedora rawhide/f30
    https://bugzilla.redhat.com/show_bug.cgi?id=1674653

    Bug 1509726 – ktechlab-0.40.1 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1509726

    Bug 1604524 – ktechlab: FTBFS in Fedora rawhide
    https://bugzilla.redhat.com/show_bug.cgi?id=1604524

    Bug 1675239 – ktechlab: FTBFS in Fedora rawhide/f30
    https://bugzilla.redhat.com/show_bug.cgi?id=1675239

    Bug 1674890 – fmtools: FTBFS in Fedora rawhide/f30
    https://bugzilla.redhat.com/show_bug.cgi?id=1674890

    Bug 1512890 – fmtools-tkradio requires deprecated /bin/env
    https://bugzilla.redhat.com/show_bug.cgi?id=1512890

    Bug 1702469 – ansifilter-2.14 is available
    https://bugzilla.redhat.com/show_bug.cgi?id=1702469

    Jekyll, rvm, bundler and Fedora

    Posted by Robbi Nespu on May 08, 2019 04:10 PM

    Jekyll is a simple static site generator written in Ruby. It takes HTML templates and posts written in Markdown to generate a static website that is ready to deploy on your favorite server.

    I use Jekyll for my blog. Somehow I prefer to use RVM (Ruby Version Manager) instead of installing Ruby directly; it is much easier to maintain and troubleshoot when incompatible gems or Ruby versions show up.

    OK, you need some packages installed, because some Ruby binary builds are not available for Fedora via RVM, for example:

    $ rvm install ruby
    Searching for binary rubies, this might take some time.
    No binary rubies available for: fedora/30/x86_64/ruby-2.6.3.
    Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
    

    So RVM will compile Ruby on your system, but it needs some development packages that it cannot install automatically. Let’s install those packages manually.

    $ sudo dnf install autoconf automake bison gcc-c++ libffi-devel libtool libyaml-devel readline-devel sqlite-devel zlib-devel openssl-devel
    

    We are going to install RVM

    $ gpg2 --keyserver hkp://pool.sks-keyservers.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
    $ \curl -sSL https://get.rvm.io | bash -s stable
    $ rvm install ruby
    

    To start using RVM you need to run source $HOME/.rvm/scripts/rvm in all your open shell windows; in rare cases you need to reopen all shell windows.

    $ rvm -v
    rvm 1.29.8 (latest) by Michal Papis, Piotr Kuczynski, Wayne E. Seguin [https://rvm.io]
    

    Next, install gems for Jekyll and Bundler

    $ gem install jekyll
    $ gem install bundler 
    

    Done! You can now create a new Jekyll site and test it locally :)
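    For example, a quick sketch (my-site is just a placeholder name):

    $ jekyll new my-site
    $ cd my-site
    $ bundle exec jekyll serve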

    Extra: inside the Jekyll project directory, run bundler update --bundler && bundler install to update Bundler and install the needed gems, then use bundle exec jekyll serve -d public --incremental --verbose --watch to run Jekyll on localhost.

    Happy Birthday CentOS! 15 Years of Contributions

    Posted by Jeff Sheltren on May 08, 2019 02:00 PM
    We’re reminded this week of how long-lasting and impactful our contributions can be as we celebrate CentOS turning 15 years old this week! As a contributor to CentOS since the beginning, Tag1’s very own Jeff Sheltren was interviewed by TheCentOSProject to reflect on his involvement. Summary below the video clip:

    • Jeff first started working with CentOS when migrating away from Solaris while working at the University of California, Santa Barbara (UCSB) (2004/2005).
    • Jeff wrote one of the first scripts to define custom yum groups to maintain package install repos.
    • Invited to the QA Team on CentOS 5, Jeff reviewed packages through each release cycle, helping to ensure consistency and stability.
    • By CentOS 7, the larger scope of the release drove the need for more public communications around upcoming changes. Jeff led the QA team in external communications, documenting and soliciting outside feedback for the first time in CentOS’ history with a public QA release.

    At Tag1, we’re happy to be a part of great open source communities and projects like CentOS. To be open source at heart is to embrace the culture of asking and answering questions in an open forum, and to contribute is to lead others into a better...

    Elections nominations now open

    Posted by Fedora Community Blog on May 08, 2019 01:07 PM

    Today we are starting the Nomination & Campaign period during which we accept nominations to the “steering bodies” of the following teams:

    This period is open until 2019-05-22 at 23:59:59 UTC.

    Candidates may self-nominate. If you nominate someone else, please check with them to ensure that they are willing to be nominated before submitting their name.

    The steering bodies are currently selecting interview questions for the candidates.

    Nominees submit their questionnaire answers via a private Pagure issue. The Election Wrangler or their backup will publish the interviews to the Community Blog before the start of the voting period. Fedora Podcast episodes will be recorded and published as well.

    Please note that the interview is mandatory for all nominees. Nominees not having their interview ready by end of the Interview period (2019-05-29) will be disqualified and removed from the election.

    As part of the campaign people may also ask questions to specific candidates on the appropriate mailing list.

    The full schedule of the elections is available on the Elections schedule. For more information about the elections process, see the wiki.

    The post Elections nominations now open appeared first on Fedora Community Blog.

    Kernel 5.1 Test Week starting 2019-05-13

    Posted by Fedora Community Blog on May 08, 2019 10:17 AM
    Fedora 30 Kernel 5.1 Test Day

    Monday, 2019-05-13 through Friday 2019-05-18 is the Kernel 5.1 Test Week!

    Test Days/Weeks are planned events when a specific feature is tested and the community tries to report issues. The kernel being one of the most crucial bits of the system, testing it for a significant amount of time is very necessary!

    Why Test Kernel?

    The kernel team is working on Kernel 5.1.  This version was just recently released, and will arrive soon in Fedora.
    This version will also be the shipping kernel for Fedora 29, 30 and the upcoming pre-release Fedora 31. So it’s important to see whether it’s working well enough and to catch any remaining issues.
    It’s also pretty easy to join in: all you’ll need is an iso (which you can grab from the wiki page).

    We need your help!

    All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

    Share this!

    Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

    The post Kernel 5.1 Test Week starting 2019-05-13 appeared first on Fedora Community Blog.

    Check out the new AskFedora

    Posted by Fedora Magazine on May 08, 2019 08:00 AM

    If you’ve been reading the Community blog, you’ll already know: AskFedora has moved to Discourse! Read on for more information about this exciting platform.

    Discourse? Why Discourse?

    The new AskFedora is a Discourse instance hosted by Discourse, similar to discussion.fedoraproject.org. However, where discussion.fedoraproject.org is meant for development discussion within the community, AskFedora is meant for end-user troubleshooting.

    The Discourse platform focuses on conversations. Not only can you ask questions and receive answers, you can have complete dialogues with others. This is especially fitting since troubleshooting includes lots of bits that are neither questions nor answers. Instead, there are lots of suggestions, ideas, thoughts, comments, musings, none of which necessarily are the one true answer, but all of which are required steps that together lead us to the solution.

    Apart from this fresh take on discussions, Discourse comes with a full set of features that make interacting with each other very easy.

    Login using your Fedora account

    User accounts on the new AskFedora are managed by the Fedora account system only. A Fedora account gives you access to all of the infrastructure used by the Fedora community. This includes:

    This decision was made mainly to combat the spam and security issues previously encountered with the various social media login services.

    So, unlike the current Askbot setup where you could log in using different social media services, you will need to create a Fedora Account to use the new Discourse based instance. Luckily, creating a Fedora Account is very easy!

    1. Go to https://admin.fedoraproject.org/accounts/user/new
    2. Choose a username, and enter your name, a valid e-mail address, and a security question.
    3. Do the “captcha” to confirm that you are indeed a human, and confirm that you are older than 13 years of age.

    That’s it! You now have a Fedora account.

    Get started!

    If you are using the platform for the first time, you should start with the “New users! Start here!” category. Here, we’ve put short summaries on how to use the platform effectively. This includes information on how to use Discourse, its many features that make it a great platform, notes on how to ask and respond to queries, subscribing and unsubscribing from categories, and lots more.

    For the convenience of the global Fedora community, these summaries are available in all the languages that the community supports. So, please do take a minute to go over these introductory posts.

    Discuss, learn, teach, have fun!

    Please login, ask and discuss your queries and help each other out. As always, suggestions and feedback are always welcome. You can post these in the “Site feedback” category.

    As a last note, please do remember to “be excellent to each other.” The Fedora Code of Conduct applies to all of us!

    Acknowledgements

    The Fedora community does everything together, so many volunteers joined forces and gave their resources to make this possible. We are most grateful to the Askbot developers who have hosted AskFedora till now, the Discourse team for hosting it now, and all the community members who helped set it up, and everyone that helps keep the Fedora community ticking along!

    The case of the fedmsg hang

    Posted by Randy Barlow on May 07, 2019 04:32 PM

    Yesterday I was doing some testing with Bodhi's staging environment to try to reproduce a bug and test a proposed fix for it. The bug was in a library that Bodhi's repository composer uses to test whether the composed repository meets basic sanity checks. The composer runs this at the …

    RHEL 8 repository

    Posted by Remi Collet on May 07, 2019 03:55 PM

    Red Hat Enterprise Linux 8 is released and of course my repository is already open and fully populated.

    As EPEL is not yet ready, you must enable the "remi" repository, which provides a lot of packages usually available in EPEL.

    Repository installation:

    # dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm
    # dnf install dnf-utils
    # dnf config-manager --set-enabled remi

    And, for example, to install PHP 7.3

    # dnf module install php:remi-7.3
    # php --version
    PHP 7.3.5 (cli) (built: Apr 30 2019 08:37:17) ( NTS )
    Copyright (c) 1997-2018 The PHP Group
    Zend Engine v3.3.5, Copyright (c) 1998-2018 Zend Technologies

    Notice: if you want to install various versions simultaneously, the Software Collections of PHP 5.6, 7.0, 7.1, 7.2 and 7.3 are also available.

    # dnf install php56
    # module load php56
    # php --version
    PHP 5.6.40 (cli) (built: Apr 30 2019 11:08:08) 
    Copyright (c) 1997-2016 The PHP Group
    Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
    

    Time to open a good bottle of Champagne!

     

    Okular: improved PDF annotation tool

    Posted by Rajeesh K Nambiar on May 07, 2019 01:40 PM

    Okular, KDE’s document viewer has very good support for annotating/reviewing/commenting documents. Okular supports a wide variety of annotation tools out-of-the-box (enable the ‘Review’ tool [F6] and see for yourself) and even more can be configured (such as the ‘Strikeout’ tool) — right click on the annotation tool bar and click ‘Configure Annotations’.

    One of the annotation tools my colleagues and I frequently wanted to use is a line with an arrow to mark an indent. Many PDF annotation programs have this tool, but Okular was lacking it.

    So a couple of weeks ago I started looking into the source code of Okular and Poppler (the PDF library used by Okular) and noticed that both of them already have support for the ‘Line Ending Style’ of the ‘Straight Line’ annotation tool (internally called the TermStyle). Skimming through the source code for a few hours and adding a few hooks in the code, I could add an option to configure the line ending style for the ‘Straight Line’ annotation tool. Many line end styles are provided out of the box, such as open and closed arrows, circle, diamond etc.

    An option was added to the ‘Straight Line’ tool configuration to choose the line ending style:

    Screenshot: New ‘Line Ending Style’ for the ‘Straight Line’ annotation tool.

    Here’s the review tool with ‘Open Arrow’ ending in action:

    Screenshot: ‘Arrow’ annotation tool in action.

    Once happy with the outcome, I’ve created a review request to upstream the improvement. A number of helpful people reviewed and commented. One of the suggestions was to add an icon/shape of the line ending style in the configuration options so that users can quickly preview what the shape will look like without having to try each one. The first attempt to implement this feature was by adding Unicode symbols (instead of an SVG or internally drawn graphics) and it looked okay. Here’s a screenshot:

    Screenshot: ‘Line End’ with symbols preview.

    But it had various issues: some symbols are not available in Unicode, and the localization of these strings without some context would be difficult. So, for now, we decided to drop the symbols.

    For now, this feature works only on PDF documents. The patch is committed today and will be available in the next version of Okular.

    Lenovo Ideapad and Yoga laptops and wifi on/off switches

    Posted by Hans de Goede on May 07, 2019 07:25 AM

    Once upon a time a driver was written for the Lenovo Ideapad firmware interface for handling special keys and rfkill functionality. This driver was written on an Ideapad laptop with a slider on the side to turn wifi on/off, a so-called hardware rfkill switch. Sometime later a Yoga model using the same firmware interface showed up, without a hardware rfkill switch. It turns out that in this case the firmware interface reports the non-present switch as always in the off position, causing NetworkManager to not even try to use the wifi, effectively breaking wifi.

    So I added a dmi blacklist for models without a hardware rfkill switch. The same firmware interface is still used on new Ideapad and Yoga models, and since most modern laptops typically do not have such a switch, this dmi blacklist has been growing and growing. In the 5.1 kernel alone 5 new models were added. Worse, as mentioned, not being on the list for a model without the hardware switch leads to non-working wifi, which pretty much means any new Ideapad model does not work with Linux until it is added to the list.

    To fix this I've submitted a patch upstream turning the huge blacklist into a whitelist. This whitelist is empty for now, meaning that we define all models as not having a rfkill switch. This does lead to a small regression on models which do actually have a hardware rfkill switch: before this commit they would correctly report the wifi being disabled by the hw switch and e.g. the GNOME3 UI would report "wifi disabled in hardware", whereas now users will just get an empty list of available wifi networks when the switch is in the off position. But this is a small price to pay to make sure that as of yet unknown and new Ideapad models do not have non-working wifi because of this issue.

    As said, the whitelist for models which actually have a hardware rfkill switch is currently empty, so I need your help to fill it. If you have an Ideapad or Yoga laptop with a wifi on/off slider switch on it, please run "rfkill list". If the output contains "ideapad_wlan", then you are using the ideapad-laptop driver. In this case please check that the "Hard blocked" setting for the "ideapad_wlan" rfkill device properly shows no / yes based on the switch position. If this works, your model should be added to the new whitelist. For this please run "sudo dmidecode &> dmidecode.log" and send me an email at hdegoede@redhat.com with the dmidecode.log attached.

    Note the patch to change the list to a whitelist has been included in the Fedora kernels starting with kernel 5.0.10-300, so if you have an Ideapad or Yoga running Fedora and you do see "ideapad_wlan" in the "rfkill list" output, but the "Hard blocked" setting does not respond, try with a kernel older than 5.0.10-300, and let me know if you need help with this.

    Stories from the amazing world of release-monitoring.org #5

    Posted by Fedora Community Blog on May 07, 2019 06:54 AM

    The desk in the wizard tower was full of manuscripts, reports from workers and written complaints from outside entities (users). It was a long week. I waved my hand to add more light to this room. Simple spell that is helping me to concentrate.

    There was a figure in the door. “Come inside, traveler. I’m glad you are here. There are plenty of new things I want to share with you.”

    Lost in the sorting

    One of the complaints we receive often is that the news we obtain from the other realms is not sorted properly. This means that you can be notified by a messenger about a new version when in fact it isn’t a new version. This sometimes causes confusion and fills up the complaints pile.

    Currently we allow only the Restructured Partial Magic (RPM) way of sorting, which is not usable for every project out there.

    To prevent this in the future, we decided to teach two new ways of sorting to our workers. One is based on the time machine (calendar version scheme) and second on simple mathematics (semantic version scheme).

    First we started with the sorting based on the simple mathematics. This was really easy to do, because there is already a magic book (library) that contains everything we need. So it only took a few hours to teach the workers the spells.

    Much worse was the situation with the time machine. Time machines have no strict standard; everybody can make their own. How can we work in this mess? After some thinking, we found a solution. We introduced a pattern for the time machine that allows the outside entities (users) to specify the configuration of the project’s time machine. To see how this works, look at the illustrations below.

    Illustrations: Defining time machine pattern; Time machine sorting in action.

    Big changes

    The Conclave of mages decided that release-monitoring.org is one of the priorities for this year. What does this mean for us? It means that we will get more mages working on release-monitoring.org, which will help us add missing features and fix bugs more quickly.

    The target of this initiative is to switch release-monitoring.org to maintenance mode. This means that I will be more free to work with other mages on other things that are important for the whole Conclave, and release-monitoring.org will be transformed into a state that should be free of most of the issues we are facing right now. You can look at the plan for both Anitya and the-new-hotness.

    Post scriptum

    This is all for now from the world of release-monitoring.org. Do you like this world and want to join our conclave of mages? Seek me (mkonecny) in the magical yellow pages (IRC freenode #fedora-apps) and ask how you can help. Or visit the Bugcronomicon (GitHub issues on Anitya or the-new-hotness) directly and pick something to work on.

    The post Stories from the amazing world of release-monitoring.org #5 appeared first on Fedora Community Blog.

    Fedora 30 : Kite now works with Linux.

    Posted by mythcat on May 06, 2019 05:35 PM
    The development team describes the new features of this Linux tool, named Kite, as follows:
    Code Faster in Python with Line-of-Code Completions Kite integrates with your IDE and uses machine learning to give you useful code completions for Python. Start coding faster today.
    This tool integrates with all the top Python IDEs - Atom, Pycharm, Sublime, VS Code and Vim.
    The install process is simple:
    [mythcat@desk ~]$ bash -c "$(wget -q -O - https://linux.kite.com/dls/linux/current)"

    This script will install Kite!

    We hope you enjoy! If you run into any issues, please reach out at support@kite.com or feedback@kite.com

    - The Kite Team

    Press enter to continue...
    Downloading kite-installer binary using wget...
    Checking to see if all dependencies are installed....

    Kite watches your workspace to be notified when python files change on disk. This allows us to
    provide the latest information & completions from your coding environment. However, for larger
    workspaces, Kite can exceed the default limits on inotify watches, which can result in a degraded experience.

    We can fix this by placing a file in /etc/sysctl.d/ to increase this limit.
    Configure inotify now? (you might be asked for your sudo password) [Y/n] Y
    Creating /etc/sysctl.d/30-kite.conf...
    [sudo] password for mythcat:
    Running ./kite-installer install
    [installer] no previous kite installation found
    [installer] latest version is 2.20190503.3, downloading now...
    [installer] verifying checksum
    [installer] validating signature
    [installer] installing version 2.20190503.3
    [installer] installed ~/.config/autostart/kite-autostart.desktop
    [installer] installed ~/.config/systemd/user/kite-updater.service
    [installer] installed ~/.config/systemd/user/kite-updater.timer
    [installer] installed ~/.local/share/applications/kite-copilot.desktop
    [installer] installed ~/.local/share/applications/kite.desktop
    [installer] installed ~/.local/share/icons/hicolor/128x128/apps/kite.png
    [installer] installed ~/.local/share/kite/kited
    [installer] installed ~/.local/share/kite/uninstall
    [installer] installed ~/.local/share/kite/update
    [installer] activating kite-updater systemd service
    [installer] registering kite:// protocol handler
    [installer] kite is installed! launching now! happy coding! :)
    Removing kite-installer
    After the install you need to use your email to log in to your Kite account.
    The last step is the integrations, and Kite will install the editor plugin for you. If you use the vim editor, then it is a good idea to take a look here.

    Fedora 30 - After install setup

    Posted by Robbi Nespu on May 06, 2019 04:00 PM

    My incomplete list of todo after installing Fedora 30

    Well, maybe you already read my previous post Fedora 28 setup (After install) on my previous blog host. After a while I upgraded to Fedora 29 using the KDE spin, but somehow I was not satisfied with the UX because I am too familiar with GNOME, so I reformatted my laptop and booted into a clean Fedora 30 Workstation (GNOME).

    Lots of things need to be configured again after a clean format, so let’s start from scratch. Let’s do it!

    Todo

    1. Change the hostname

    $ hostnamectl status # view current hostname
    $ hostnamectl set-hostname --static "robbinespu" # set up new hostname
    

    Note: you need to reboot after changing the hostname.

    2. DNF tweak

    Use delta RPMs and the fastest mirror (edit the /etc/dnf/dnf.conf file):

        [main]
        gpgcheck=1
        installonly_limit=3
        clean_requirements_on_remove=True
        fastestmirror=true
        deltarpm=true
    

    3. Restore bashrc and metadata files

    FYI, I already exported and backed up the selected dotfiles and a few metadata files via this trick, and now I need to import them from the repository into my workstation using the same tutorial.

    4. Install RPM fusion repository and get most recent update

    $ sudo dnf update --refresh
    $ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm 
    $ sudo dnf install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
    

    5. Enabling SSH

    Since sometimes I need to access this machine remotely

    $ sudo systemctl start sshd
    $ sudo systemctl enable sshd
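
    On recent systemd versions you can also do both in one step (a small equivalent sketch):

    $ sudo systemctl enable --now sshd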
    

    6. Performance monitoring tools for Linux

    Htop monitoring tool

    $ sudo dnf install sysstat htop glances
    

    There is also an application similar to CCleaner on Windows called Stacer, but there is no RPM file for the newest release; they replaced it with an AppImage.

    $ cd /tmp
    $ wget https://github.com/oguzhaninan/Stacer/releases/download/v1.0.9/Stacer-x86_64.AppImage
    $ sudo chmod a+x Stacer*.AppImage
    $ ./Stacer*.AppImage
    

    7. Entertainment stuff: players and codecs

    $ sudo dnf install youtube-dl vlc
    
    $ sudo dnf install \
    gstreamer-plugins-base \
    gstreamer1-plugins-base \
    gstreamer-plugins-bad \
    gstreamer-plugins-ugly \
    gstreamer1-plugins-ugly \
    gstreamer-plugins-good-extras \
    gstreamer1-plugins-good \
    gstreamer1-plugins-good-extras \
    gstreamer1-plugins-bad-freeworld \
    ffmpeg \
    gstreamer-ffmpeg
    

    8. Gnome plugin and addons

    Some of the plugins I activate and use are alternatetab, application menu, caffeine, dash to dock, impatience, netspeed, place status indicator, services systemd, status area horizontal spacing and top icon plus.

    To manage, edit, add and delete application launchers, I use menulibre, which is a FreeDesktop-compliant menu editor.

    $ sudo dnf install menulibre
    

    9. Clipboard Manager

    As a developer, I love clipboard managers, and my favourite clipboard manager for Windows and Linux is CopyQ.

    $ sudo dnf install copyq -y
    

    10. Gnome tweak tool

    Some features are hidden and not available in the standard settings, such as autostart (you need this to autostart CopyQ when your machine is on).

    Auto-start CopyQ when booting Fedora

    You need to install the GNOME Tweak Tool to get access to these hidden settings.

    $ sudo dnf install gnome-tweak-tool
    

    11. Office tools

    I prefer to use OnlyOffice for editing and viewing spreadsheets.

    $ sudo rpm --import "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x8320CA65CB2DE8E5"
    $ sudo bash -c 'cat > /etc/yum.repos.d/onlyoffice.repo << 'EOF'
    [onlyoffice]
    name=onlyoffice repo
    baseurl=http://download.onlyoffice.com/repo/centos/main/noarch/
    gpgcheck=1
    enabled=1
    EOF'
    $ sudo dnf install onlyoffice-desktopeditors
    

    You may also use WPS Community; grab the latest RPM packages from the official WPS website.

    12. Compression and archiver tools

    $ sudo dnf install unzip p7zip
    

    END

    Well, this is actually an incomplete list of my todos. I will improve this list from time to time, Insya Allah. Thank you, Fedora team, for making a wonderful Linux distro :thumbsup:

    Use udica to build SELinux policy for containers

    Posted by Fedora Magazine on May 06, 2019 08:00 AM

    While modern IT environments move towards Linux containers, the need to secure these environments is as relevant as ever. Containers are a process isolation technology. While containers can be a defense mechanism, they only excel when combined with SELinux.

    Fedora SELinux engineering built a new standalone tool, udica, to generate SELinux policy profiles for containers by automatically inspecting them. This article focuses on why udica is needed in the container world, and how it makes SELinux and containers work better together. You’ll find examples of SELinux separation for containers that let you avoid turning protection off because the generic SELinux type container_t is too tight. With udica you can easily customize the policy with limited SELinux policy writing skills.

    SELinux technology

    SELinux is a security technology that brings proactive security to Linux systems. It’s a labeling system that assigns a label to all subjects (processes and users) and objects (files, directories, sockets, etc.). These labels are then used in a security policy that controls access throughout the system. It’s important to mention that what’s not allowed in an SELinux security policy is denied by default. The policy rules are enforced by the kernel. This security technology has been in use on Fedora for several years. A real example of such a rule is:

    allow httpd_t httpd_log_t: file { append create getattr ioctl lock open read setattr };

    The rule allows any process labeled as httpd_t to create, append, read and lock files labeled as httpd_log_t. Using the ps command, you can list all processes with their labels:

    $ ps -efZ | grep httpd
    system_u:system_r:httpd_t:s0 root 13911 1 0 Apr14 ? 00:05:14 /usr/sbin/httpd -DFOREGROUND
    ...

    To see which objects are labeled as httpd_log_t, use semanage:

    # semanage fcontext -l | grep httpd_log_t
    /var/log/httpd(/.*)? all files system_u:object_r:httpd_log_t:s0
    /var/log/nginx(/.*)? all files system_u:object_r:httpd_log_t:s0
    ...

    The SELinux security policy for Fedora is shipped in the selinux-policy RPM package.

    SELinux vs. containers

    In Fedora, the container-selinux RPM package provides a generic SELinux policy for all containers started by engines like podman or docker. Its main purposes are to protect the host system against a container process, and to separate containers from each other. For instance, containers confined by SELinux with the process type container_t can only read/execute files in /usr and write to files with the container_file_t type on the host file system. To prevent attacks by containers on each other, Multi-Category Security (MCS) is used.
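    For instance, assuming you have a container running confined as container_t, you can see the unique pair of MCS categories at the end of its label (the part that separates it from other containers) with:

    $ ps -efZ | grep container_t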

    Using only one generic policy for containers is problematic, because of the huge variety of container usage. On one hand, the default container type (container_t) is often too strict. For example:

    • Fedora SilverBlue needs containers to read/write a user’s home directory
    • Fluentd project needs containers to be able to read logs in the /var/log directory

    On the other hand, the default container type could be too loose for certain use cases:

    • It has no SELinux network controls — all container processes can bind to any network port
    • It has no SELinux control on Linux capabilities — all container processes can use all capabilities

    There is one solution to handle both use cases: write a custom SELinux security policy for the container. This can be tricky, because SELinux expertise is required. For this purpose, the udica tool was created.

    Introducing udica

    Udica generates SELinux security profiles for containers. Its concept is based on the “block inheritance” feature inside the common intermediate language (CIL) supported by SELinux userspace. The tool creates a policy that combines:

    • Rules inherited from specified CIL blocks (templates), and
    • Rules discovered by inspection of container JSON file, which contains mountpoints and ports definitions

    You can load the final policy immediately, or move it to another system to load into the kernel. Here’s an example, using a container that:

    • Mounts /home as read only
    • Mounts /var/spool as read/write
    • Exposes port tcp/21

    The container starts with this command:

    # podman run -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash

    The default container type (container_t) doesn’t allow any of these three actions. To prove it, you could use the sesearch tool to query that the allow rules are present on system:

    # sesearch -A -s container_t -t home_root_t -c dir -p read 

    There’s no allow rule present that lets a process labeled as container_t access a directory labeled home_root_t (like the /home directory). The same situation occurs with /var/spool, which is labeled var_spool_t:

    # sesearch -A -s container_t -t var_spool_t -c dir -p read

    On the other hand, the default policy completely allows network access.

    # sesearch -A -s container_t -t port_type -c tcp_socket
    allow container_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };
    allow sandbox_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };

    Securing the container

    It would be great to restrict this access and allow the container to bind only to TCP port 21 or to ports with the same label. Imagine you find an example container using podman ps whose ID is 37a3635afb8f:

    # podman ps -q
    37a3635afb8f

    You can now inspect the container and pass the inspection file to the udica tool. The name for the new policy is my_container.

    # podman inspect 37a3635afb8f > container.json
    # udica -j container.json my_container
    Policy my_container with container id 37a3635afb8f created!

    Please load these modules using:
    # semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}

    Restart the container with: "--security-opt label=type:my_container.process" parameter

    That’s it! You just created a custom SELinux security policy for the example container. Now you can load this policy into the kernel and make it active. The udica output above even tells you the command to use:

    # semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}

    Now you must restart the container to allow the container engine to use the new custom policy:

    # podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash

    The example container is now running in the newly created my_container.process SELinux process type:

    # ps -efZ | grep my_container.process
    unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 root 2275 434 1 13:49 pts/1 00:00:00 podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
    system_u:system_r:my_container.process:s0:c270,c963 root 2317 2305 0 13:49 pts/0 00:00:00 bash

    Seeing the results

    The command sesearch now shows allow rules for accessing /home and /var/spool:

    # sesearch -A -s my_container.process -t home_root_t -c dir -p read
    allow my_container.process home_root_t:dir { getattr ioctl lock open read search };
    # sesearch -A -s my_container.process -t var_spool_t -c dir -p read
    allow my_container.process var_spool_t:dir { add_name getattr ioctl lock open read remove_name search write }

    The new custom SELinux policy also allows my_container.process to bind only to TCP/UDP ports labeled the same as TCP port 21:

    # semanage port -l | grep 21 | grep ftp
    ftp_port_t tcp 21, 989, 990
    # sesearch -A -s my_container.process -c tcp_socket -p name_bind
    allow my_container.process ftp_port_t:tcp_socket name_bind;

    Conclusion

    The udica tool helps you create SELinux policies for containers based on an inspection file without any SELinux expertise required. Now you can increase the security of containerized environments. Sources are available on GitHub, and an RPM package is available in Fedora repositories for Fedora 28 and later.
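    To try udica on Fedora, installing that package is a single command (assuming Fedora 28 or later, as noted above):

    # dnf install udica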


    Photo by Samuel Zeller on Unsplash.

    Creating a multiboot BIOS and UEFI USB key

    Posted by Didier Fabert (tartare) on May 06, 2019 06:00 AM

    There are plenty of great utilities that do this very well, but is it really that complicated to do it by hand? All you need are tools already installed on any GNU/Linux distribution:

    • a partitioning tool: parted
    • a boot loader: Grub2
    • a text editor: vi, emacs, or even kwrite, gedit …

    If the machine you are preparing the key from does not use UEFI, install the missing packages:

    dnf install grub2-efi-x64-modules

    Procedure

    All of the commands are to be run in a superuser shell: sudo -s

    1. Partition and format the key (/dev/sdd)

      parted -s /dev/sdd mklabel msdos
      parted -s /dev/sdd mkpart primary 1MiB 551MiB
      parted -s /dev/sdd set 1 esp on
      parted -s /dev/sdd set 1 boot on
      mkfs.fat -F32 /dev/sdd1
      parted -s /dev/sdd mkpart primary 551MiB 100%
      mkfs.ext4 /dev/sdd2
      
    2. Create temporary mount points and mount the partitions
      mkdir /media/efi /media/data
      mount /dev/sdd1 /media/efi
      mount /dev/sdd2 /media/data
      
    3. Install the boot loaders (BIOS and UEFI)
      grub2-install --target=i386-pc --recheck --boot-directory="/media/data/boot" /dev/sdd
      grub2-install --target=x86_64-efi --recheck --removable --efi-directory="/media/efi" --boot-directory="/media/data/boot"
      
    4. Create the directory that will hold the ISOs
      mkdir /media/data/boot/iso
      chown 1000:1000 /media/data/boot/iso
      
    5. Download the ISOs and copy them into /media/data/boot/iso
    6. Edit (or create) the file /media/data/boot/grub2/grub.cfg
      set timeout=30
      set color_normal=cyan/blue
      set color_highlight=white/blue
      
      menuentry "Fedora-Workstation-KDE-Live-x86_64-29-1.2" {
          isofile="/boot/iso/Fedora-KDE-Live-x86_64-29-1.2.iso"
          loopback loop "${isofile}"
          linux (loop)/isolinux/vmlinuz iso-scan/filename="${isofile}" root=live:CDLABEL=Fedora-KDE-Live-29-1-2 rd.live.image quiet
          initrd (loop)/isolinux/initrd.img
      }
      
      menuentry "Fedora-Workstation-KDE-Live-x86_64-30-1.2" {
          isofile="/boot/iso/Fedora-KDE-Live-x86_64-30-1.2.iso"
          loopback loop "${isofile}"
          linux (loop)/isolinux/vmlinuz iso-scan/filename="${isofile}" root=live:CDLABEL=Fedora-KDE-Live-30-1-2 rd.live.image quiet
          initrd (loop)/isolinux/initrd.img
      }
    7. Unmount, clean up, and test the key by booting from it (or test it with QEMU, as sketched below)
      umount /media/efi /media/data
      rmdir /media/efi /media/data
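
    If you would rather not reboot, a rough way to test the key is to boot it in a virtual machine with QEMU. This is only a sketch: it assumes qemu-system-x86_64 is installed, and the OVMF firmware path may differ on your distribution.

      # legacy BIOS boot test
      sudo qemu-system-x86_64 -m 2048 -drive file=/dev/sdd,format=raw
      # UEFI boot test using the OVMF firmware
      sudo qemu-system-x86_64 -m 2048 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive file=/dev/sdd,format=raw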

    Improve gcore and support dumping ELF headers

    Posted by Sergio Durigan Junior on May 06, 2019 04:00 AM

    Back in 2016, when life was simpler, a Fedora GDB user reported a bug (or a feature request, depending on how you interpret it) saying that GDB's gcore command did not respect the COREFILTER_ELF_HEADERS flag, which instructs it to dump memory pages containing ELF headers. As you may or may not remember, I have already written about the broader topic of revamping GDB's internal corefile dump algorithm; it's an interesting read and I recommend it if you don't know how Linux (or GDB) decides which mappings to dump to a corefile.

    Anyway, even though the bug was interesting and had to do with a work I'd done before, I couldn't really work on it at the time, so I decided to put it in the TODO list. Of course, the "TODO list" is actually a crack where most things fall through and are usually never seen again, so I was blissfully ignoring this request because I had other major priorities to deal with. That is, until a seemingly unrelated problem forced me to face this once and for all!

    What? A regression? Since when?

    As the Fedora GDB maintainer, I'm routinely preparing new releases for Fedora Rawhide distribution, and sometimes for the stable versions of the distro as well. And I try to be very careful when dealing with new releases, because a regression introduced now can come and bite us (i.e., the Red Hat GDB team) back many years in the future, when it's sometimes too late or too difficult to fix things. So, a mandatory part of every release preparation is to actually run a regression test against the previous release, and make sure that everything is working correctly.

    One of these days, some weeks ago, I had finished running the regression check for the release I was preparing when I noticed something strange: a specific, Fedora-only corefile test was FAILing. That's a no-no, so I started investigating and found that the underlying reason was that, when the corefile was being generated, the build-id note from the executable was not being copied over. Fedora GDB has a local patch whose job is to, given a corefile with a build-id note, locate the corresponding binary that generated it. Without the build-id note, no binary was being located.

    Coincidentally or not, at the same time I started noticing some users reporting very similar build-id issues on the #gdb channel on freenode, and I thought this bug had the potential to become a big headache for us if nothing was done to fix it right now.

    I asked for some help from the team, and we managed to discover that the problem was also happening with upstream gcore, and that it was probably something that binutils was doing, and not GDB. Hmm...

    Ah, so it's ld's fault. Or is it?

    So there I went, trying to confirm that it was binutils's fault, and not GDB's. Of course, if I could confirm this, then I could also tell the binutils guys to fix it, which meant less work for us :-).

    With a lot of help from Keith Seitz, I was able to bisect the problem and found that it started with the following commit:

    commit f6aec96dce1ddbd8961a3aa8a2925db2021719bb
    Author: H.J. Lu <hjl.tools@gmail.com>
    Date:   Tue Feb 27 11:34:20 2018 -0800
    
        ld: Add --enable-separate-code
    

    This is a commit that touches the linker, which is part of binutils. So that means this is not GDB's problem, right?!? Hmm. No, unfortunately not.

    What the commit above does is simply enable the use of --enable-separate-code (or -z separate-code) by default when linking an ELF program on x86_64 (more on that later). At first glance, this change should not impact corefile generation, and indeed, if you tell the Linux kernel to generate a corefile (for example, by doing sleep 60 & and then hitting C-\), you will notice that the build-id note is included in it! So GDB was still a suspect here. The investigation needed to continue.

    What's with -z separate-code?

    The -z separate-code option makes the linker put the code segment of the ELF file into a segment completely separate from the data segment. This was done to increase the security of generated binaries. Before it, everything (code and data) was put together in the same memory region. What this means in practice is that, before, you would see something like this when you examined /proc/PID/smaps:

    00400000-00401000 r-xp 00000000 fc:01 798593                             /file
    Size:                  4 kB
    KernelPageSize:        4 kB
    MMUPageSize:           4 kB
    Rss:                   4 kB
    Pss:                   4 kB
    Shared_Clean:          0 kB
    Shared_Dirty:          0 kB
    Private_Clean:         0 kB
    Private_Dirty:         4 kB
    Referenced:            4 kB
    Anonymous:             4 kB
    LazyFree:              0 kB
    AnonHugePages:         0 kB
    ShmemPmdMapped:        0 kB
    Shared_Hugetlb:        0 kB
    Private_Hugetlb:       0 kB
    Swap:                  0 kB
    SwapPss:               0 kB
    Locked:                0 kB
    THPeligible:    0
    VmFlags: rd ex mr mw me dw sd
    

    And now, you will see two memory regions instead, like this:

    00400000-00401000 r--p 00000000 fc:01 799548                             /file
    Size:                  4 kB
    KernelPageSize:        4 kB
    MMUPageSize:           4 kB
    Rss:                   4 kB
    Pss:                   4 kB
    Shared_Clean:          0 kB
    Shared_Dirty:          0 kB
    Private_Clean:         4 kB
    Private_Dirty:         0 kB
    Referenced:            4 kB
    Anonymous:             0 kB
    LazyFree:              0 kB
    AnonHugePages:         0 kB
    ShmemPmdMapped:        0 kB
    Shared_Hugetlb:        0 kB
    Private_Hugetlb:       0 kB
    Swap:                  0 kB
    SwapPss:               0 kB
    Locked:                0 kB
    THPeligible:    0
    VmFlags: rd mr mw me dw sd
    00401000-00402000 r-xp 00001000 fc:01 799548                             /file
    Size:                  4 kB
    KernelPageSize:        4 kB
    MMUPageSize:           4 kB
    Rss:                   4 kB
    Pss:                   4 kB
    Shared_Clean:          0 kB
    Shared_Dirty:          0 kB
    Private_Clean:         0 kB
    Private_Dirty:         4 kB
    Referenced:            4 kB
    Anonymous:             4 kB
    LazyFree:              0 kB
    AnonHugePages:         0 kB
    ShmemPmdMapped:        0 kB
    Shared_Hugetlb:        0 kB
    Private_Hugetlb:       0 kB
    Swap:                  0 kB
    SwapPss:               0 kB
    Locked:                0 kB
    THPeligible:    0
    VmFlags: rd ex mr mw me dw sd
    

    A few minor things have changed, but the most important of them is the fact that, before, the whole memory region had anonymous data in it, which means that it was considered an anonymous private mapping (anonymous because of the non-zero Anonymous amount of data; private because of the p in the r-xp permission bits). After -z separate-code was made default, the first memory mapping does not have Anonymous contents anymore, which means that it is now considered to be a file-backed private mapping instead.
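
    As a side note, one quick way to see whether a given binary was linked with -z separate-code is to look at its program headers; this is just an illustrative check (the path /file mirrors the example above):

    # A binary linked with -z separate-code carries its executable code in its own
    # R E LOAD segment, separate from the read-only segment that holds the ELF headers;
    # older binaries show a single R E LOAD segment starting at offset 0 covering both.
    readelf -lW /file | grep LOAD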

    GDB, corefile, and coredump_filter

    It is important to mention that, unlike the Linux kernel, GDB doesn't have all of the necessary information readily available to decide the exact type of a memory mapping, so when I revamped this code back in 2015 I had to create some heuristics to try and determine this information. If you're curious, take a look at the linux-tdep.c file on GDB's source tree, specifically at the functions dump_mapping_p and linux_find_memory_regions_full.

    When GDB is deciding which memory regions should be dumped into the corefile, it respects the value found at the /proc/PID/coredump_filter file. The default value for this file is 0x33, which, according to core(5), means:

    Dump memory pages that are either anonymous private, anonymous
    shared, ELF headers or HugeTLB.
    

    GDB had the support implemented to dump almost all of these pages, except for the ELF headers variety. And, as you can probably infer, this means that, before the -z separate-code change, the very first memory mapping of the executable was being dumped, because it was marked as anonymous private. However, after the change, the first mapping (which contains only data, no code) wasn't being dumped anymore, because it was now considered by GDB to be a file-backed private mapping!
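
    If you want to experiment with this yourself, the filter can be inspected and changed per process; a small sketch (PID is a placeholder, and writing the file requires owning the process or being root):

    cat /proc/$PID/coredump_filter      # the default is usually 00000033
    # Additionally dump file-backed private mappings (bit 2) on top of the default:
    echo 0x37 > /proc/$PID/coredump_filter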

    Finally, that is the reason for the difference between corefiles generated by GDB and Linux, and also the reason why the build-id note was not being included in the corefile anymore! You see, the first memory mapping contains not only the program's data, but also its ELF headers, which in turn contain the build-id information.

    gcore, meet ELF headers

    The solution was "simple": I needed to improve the current heuristics and teach GDB how to determine if a mapping contains an ELF header or not. For that, I chose to follow the Linux kernel's algorithm, which basically checks the first 4 bytes of the mapping and compares them against \177ELF, which is ELF's magic number. If the comparison succeeds, then we just assume we're dealing with a mapping that contains an ELF header and dump it.

    In all fairness, Linux just dumps the first page (4K) of the mapping, in order to save space. It would be possible to make GDB do the same, but I chose the faster way and just dumped the whole mapping, which, in most scenarios, shouldn't be a big problem.
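
    Just to make the idea concrete, here is a rough shell sketch of that check - not the actual GDB code - which needs ptrace access to the target process (run it as the same user or as root) and assumes xxd is available:

    #!/bin/bash
    # Read the first 4 bytes of a process's first mapping and compare them with
    # the ELF magic number (\177ELF, i.e. 7f 45 4c 46 in hex).
    pid=$1
    start=$(head -n1 /proc/$pid/maps | cut -d- -f1)      # start address of the first mapping (hex)
    magic=$(dd if=/proc/$pid/mem bs=1 skip=$((16#$start)) count=4 2>/dev/null | xxd -p)
    if [ "$magic" = "7f454c46" ]; then
        echo "first mapping of $pid starts with an ELF header"
    fi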

    It's also interesting to mention that GDB will just perform this check if:

    • The heuristic has decided not to dump the mapping so far, and;
    • The mapping is private, and;
    • The mapping's offset is zero, and;
    • There is a request to dump mappings with ELF headers (i.e., coredump_filter).

    Linux also makes these checks, by the way.

    The patch, finally

    I submitted the patch to the mailing list, and it was approved fairly quickly (with a few minor nits).

    The reason I'm writing this blog post is because I'm very happy and proud with the whole process. It wasn't an easy task to investigate the underlying reason for the build-id failures, and it was interesting to come up with a solution that extended the work I did a few years ago. I was also able to close a few bug reports upstream, as well as the one reported against Fedora GDB.

    The patch has been pushed, and is also present at the latest version of Fedora GDB for Rawhide. It wasn't possible to write a self-contained testcase for this problem, so I had to resort to using an external tool (eu-unstrip) in order to guarantee that the build-id note is correctly present in the corefile. But that's a small detail, of course.

    Anyway, I hope this was an interesting (albeit large) read!

    Episode 144 - The security of money, which one is best?

    Posted by Open Source Security Podcast on May 06, 2019 12:05 AM
    Josh and Kurt talk about the security of money. Not how to keep it secure, but the security issues around using cash, credit, and bitcoin. We also talk about Banksy's clever method for proving something is original.



    Show Notes


      [Howto] Adding SSH keys to Ansible Tower via tower-cli [Update]

      Posted by Roland Wolters on May 05, 2019 10:49 PM

      The tool tower-cli is often used to pre-configure Ansible Tower in a scripted way. It provides a convenient way to boot-strap a Tower configuration. But adding SSH keys as machine credentials is far from easy.

      Boot-strapping Ansible Tower can become necessary for testing and QA environments where the same setup is created and destroyed multiple times. Other use cases are when multiple Tower installations need to be configured in the same way or share at least a larger part of the configuration.

      One of the necessary tasks in such setups is to create machine credentials in Ansible Tower so that Ansible is able to connect properly to a target machine. In a Linux environment, this is often done via SSH keys.

      However, tower-cli calls the Tower API in the background – and the JSON POST data needs to be on one line. But SSH keys span multiple lines, so providing the file via $(cat ssh_file) does not work:

      tower-cli credential create --name "Example Credentials" \
                           --organization "Default" --credential-type "Machine" \
                           --inputs="{\"username\":\"ansible\",\"ssh_key_data\":\"$(cat .ssh/id_rsa)\",\"become_method\":\"sudo\"}"

      Multiple workarounds can be found on the net, like manually editing the file to remove the new lines or creating a dedicated variables file containing the SSH key. There is even a bug report discussing that.

      But for my use case I needed to read an existing SSH file directly, and did not want to add another manual step or create an additional variables file. The trick is a rather complex piece of SED:

      $(sed -E ':a;N;$!ba;s/\r{0,1}\n/\\n/g' /home/ansible/.ssh/id_rsa)

      This basically reads in the entire file (instead of just line by line), removes the new lines and replaces them with \n. To be precise:

      • we first create a label "a"
      • append the next line to the pattern space ("N")
      • find out if this is the last line or not ("$!"), and if not
      • branch back to label a ("ba")
      • after that, we search for the line breaks ("\r{0,1}\n", a newline with an optional carriage return in front of it)
      • and replace them with the string for a new line, "\n"

      Note that this needs to be accompanied with proper line endings and quotation marks. The full call of tower-cli with the sed command inside is:

      tower-cli credential create --name "Example Credentials" \
                           --organization "Default" --credential-type "Machine" \
                           --inputs="{\"username\":\"ansible\",\"ssh_key_data\":\"$(sed -E ':a;N;$!ba;s/\r{0,1}\n/\\n/g' /home/ansible/.ssh/id_rsa)\n\",\"become_method\":\"sudo\"}"

      Note all the escaped quotation marks.
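
      To convince yourself that the sed expression does what it should, you can run it on its own before embedding it in the JSON (the key path is the same example as above):

      sed -E ':a;N;$!ba;s/\r{0,1}\n/\\n/g' /home/ansible/.ssh/id_rsa | head -c 80
      # Expected: a single line in which every line break of the key file has been
      # replaced by a literal \n sequence.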

      Update

      Another way to add the keys is to provide yaml in the shell command:

      tower-cli credential create --name "Example Credentials" \
                           --organization "Default" --credential-type "Machine" \
                           --inputs='username: ansible
      become_method: sudo
      ssh_key_data: |
      '"$(sed 's/^/    /' /home/ansible/.ssh/id_rsa)"

      This method is appealing since the corresponding sed call is a little easier to understand. But make sure to indent the variables exactly as shown above.

      Thanks to @ericzolf of the Red Hat Automation Community of Practice for pointing me to that solution. If you are interested in the Red Hat Communities of Practice, you can read more about them in the blog “Communities of practice: Straight from the open source”.

      My summary of the OpenStack Stein Infrastructure Summit and Train PTG aka Denver III

      Posted by Marios Andreou on May 05, 2019 03:00 PM


      My summary of the OpenStack Stein Infrastructure Summit and Train PTG aka Denver III

      This was the first re-combined event, with both the summit and the project teams gathering happening in the same week, and the third consecutive year that OpenStack has descended on Denver. This was also the first Open Infrastructure summit - the foundation is expanding to allow other, non-OpenStack projects to use the Open Infrastructure foundation for housing their projects.

      This is a brief summary with pointers of the sessions or rooms I attended in the order they happened. The full summit schedule is here and the PTG schedule is here.

      There is a list of some of the etherpads used in various summit sessions in this wiki page thanks to T. Carrez who let me take a photo of his screen for the URL :).


      Photos


      Summit Day One

      My general impression was of slightly reduced attendance - though I should note that the last summit I attended was Austin, unless I'm mistaken, having attended PTGs but not summits since. There were about ~2000 summit attendees according to one of the keynote speakers. Having said that, J. Bryce gave some interesting numbers in his keynote, highlighting that Stein is the 19th on-time release for OpenStack, and that OpenStack is still the 3rd largest open source project in the world, with 105,000 members across 180 countries and 65,000 merged changes in the last year.

      It was interesting to hear from Deutsche Telekom - especially that they are using and contributing to Zuul upstream and that they rely on CI for their ever-growing deployments. One of the numbers given was that they are adding capacity at a rate of 400 servers per week.

      Some other interesting points from the keynotes are

      • the increasing use of Ironic as a standalone service outside of OpenStack deployments, for managing baremetal infrastructure (further highlighting the OpenInfra vs OpenStack-only theme),
      • the increasing adoption of Zuul for CI, and the fact that it is being adopted as a foundation project
      • Ericsson brought a 5G network to the summit - apparently the first 5G network (?) in the United States - which was available at their booth and which uses OpenStack for its infrastructure. There was also a demonstration of the latency differences between 3/4/5G networks involving VR headsets.

      Besides the keynotes I attended the OpenStack Ansible project update - there was a shout-out to the TripleO team by Mohammed Nasser, who highlighted the excellent cross-team collaboration story between the TripleO tempest team and the Ansible project. Finally I attended a talk called “multicloud ci/cd with openstack and kubernetes”, where the presenter set up a simple ‘hello world’ application across a number of different geographic locations and showed how CI/CD meant he could make a simple change to the app and have it tested and then deployed across the different clouds running that application.


      Summit Day Two

      I attended the Zuul project BOF (‘birds of a feather’) where it was interesting to hear about various folks that are running Zuul internally - some on older versions and wanting to upgrade.

      I also caught the “Deployment Tools: defined common capabilities” session, where folks who work on or are knowledgeable about the various OpenStack deployment tools, including TripleO, got together and used this etherpad to try to compile a list of ‘tags’ which the various tools can claim to implement. Examples include containerized (i.e. support for containerized deployments), version support, day 2 operations, etc. The first step will be to further distill and then socialize these ‘capabilities’ via the openstack-discuss mailing list.

      The Airship project update was the next session I went to, and it was quite well attended. In general it was interesting to hear about the similarities in the concepts and approach taken in Airship compared to TripleO, especially the concept of an ‘undercloud’ and the fact that deployment is driven by yaml files which define the deployment and service configuration values. In Airship these yaml files are known as charts. The equivalent in TripleO is the tripleo heat templates repo, which holds the deployment and service configuration for TripleO deployments.

      Finally there was an interesting session on running Zuul on top of Kubernetes using Helm charts. The presenters said the charts used in their deployment would be made available upstream “soon”. This then spawned a side conversation with weshay and sshnaidm about using Kubernetes for the TripleO CI squad’s zuul-based reproducer. Prompted by weshay, we held a micro-hackfest exploring the use of k3s - 5 less than k8s. Taking the docker-compose file, we tried to convert it using the kompose tool. We got far enough to run the k3s service but stumbled on the lack of support for dependencies in kompose. We could investigate writing some Helm charts to do this, but it is still TBD whether k3s is a direction we will adopt for the reproducer this cycle or whether we will keep podman, which replaced docker (sshnaidm++ was working on this).


      Summit Day Three

      On Wednesday the first session I attended was a comparison of TripleO, Kolla and Airship as deployment tools. The common requirement was support for container-based deployments. You can see the event details here - apparently there should be a recording, though this isn’t available at the time of writing. Again it was interesting to hear about the similarities between the Airship and TripleO projects’ approaches to config management, including the management node (‘undercloud’).

      I then went to the very well attended and well led (by slagle and emilienm) TripleO project update. Again there should be a recording available at some point via that link, but it isn’t there at present. Besides a general Stein update, slagle introduced the concepts of scaling (thousands, not hundreds, of nodes) and edge, the latter being one of the main use cases for these ‘thousand node deployments’. These concepts were then further discussed in subsequent TripleO sessions noted in the following paragraphs.

      The first of these TripleO sessions was the forum devoted to scale, led by slagle - the etherpad is here. There is a good list of the identified and discussed “bottleneck services” on the undercloud - including Heat, Ironic, Mistral & Zaqar, Neutron, Keystone and Ansible - and of the technical challenges around possibly removing them. This was further explored during the PTG.

      Finally I was at the Open Infrastructure project update given by C. Boylan, which highlighted the move to opendev.org, and then the Zuul project update by J. Blair.


      Project Teams Gathering Day 1

      I spent the PTG in the TripleO room (room etherpad and picture).

      The etherpad contains notes from the various discussions, but I highlight some of the main themes here. As usual there was a brief retrospective on the Stein cycle, and some of that was captured in this etherpad. This was followed by an operator feedback session - one of the main issues raised was ‘needs more scale’.

      Slagle led the discussion on Edge, which introduced and discussed the requirements for the Distributed Compute Node architecture, where we will have a central deployment for our controllers and compute nodes spread across a number of edge locations. There was participation here from both the Edge working group and the Ironic project.

      Then fultonj and gfidente led the storage squad update (notes on the main tripleo room etherpad). Among other things, there was discussion around ceph deployments ‘at the edge’ and their challenges, as well as the triggering of tripleo jobs in ceph-ansible pull requests.

      Finally emilien led the Deployment squad topics (notes on the tripleo room etherpad). In particular there was further discussion around making the undercloud ‘lighter’ by considering which services we might remove. For this cycle it is likely that we keep Mistral, albeit changing the way we use it so that it only executes ansible, and keep Neutron and os-net-config as is, but have the network configuration applied more directly by ansible. There was also discussion around the use of Nova and whether we can just use Ironic directly. There will be exploration around the use of metalsmith to provide the information about the nodes in our deployment that we would lose by removing Nova.


      Project Teams Gathering Day 2

      Room etherpad and day two picture

      Slagle led the first session, which revisited the “thousand node scale” topic introduced in the tripleo operator forum and captured in the tripleo-forum-scale etherpad.

      The HA session was introduced by bandini and dciabrin (see the main room etherpad for notes). Some of the topics raised here were the need for a new workflow for minor deployment configuration changes, such as changing a service password; how we can improve the issue posed by a partial/temporary disconnection of one of the cluster/controlplane nodes; and whether pacemaker should be the default in upstream deployments (a topic revisited at most summits…). There was no strong push-back on the latter, however it is still to be proposed as a gerrit change, so it remains TBD.

      The upgrades squad was represented by chem, jfrancoa and ccamacho. There are notes in this upgrades session etherpad. Amongst other topics there was discussion around ‘FFWD II’, which is Queens to Train (and which includes the upgrade from CentOS 7 to CentOS 8), as well as a discussion around a completely fresh approach to the upgrades workflow that uses a separate set of nodes for the controlplane. The idea is to replicate the existing controlplane onto 3 new nodes, but deploying the target upgrade version. This could mean more than 3 nodes if you have distributed the controlplane services across a number of dedicated nodes, like Networker for example. Once the ‘new’ controlplane is ready you would migrate the data from your old controlplane, and at that point there would be a controlplane outage. However, since the target controlplane is ready to go, the hope is that the switch-over from old to new controlplane will be a relatively painless process once the details are worked out this cycle. For the rest of the nodes (Compute etc.) the existing workflow would be used, with the tripleoclient running the relevant ansible playbooks to deliver upgrades on a per-node basis.

      The TripleO CI squad was represented by weshay, quiquell, sshnaidm and myself. The session was introduced by weshay and we had a good discussion lasting well over an hour about numerous topics (captured in the main tripleo room etherpad), including the performance gains from moving to standalone jobs; plans around the standalone-upgrade job, in particular that for stable/stein this should be green and voting now (taiga story in progress); the work around rhel7/8 on baremetal and the software factory jobs; and using browbeat to monitor changes to the deployment time and possibly alert or even block if a change is significant.

      Finally weshay showed off the shiny new zuul-based reproducer (kudos quiquell and sshnaidm). In short, you can find the reproducer-quickstart in any TripleO CI job and follow the related reproducer README to have your own zuul and gerrit running the given job using either libvirt or ovb (i.e. on rdocloud). This was the first time the new reproducer was introduced to the wider team, and whilst we (the TripleO CI squad) would probably still call it a beta, we think it’s ready enough for any early adopters who find it interesting and useful enough to try out, and the CI squad would certainly appreciate any feedback.

      Netdata arrives in Fedora and EPEL

      Posted by Didier Fabert (tartare) on May 05, 2019 06:00 AM

      Netdata is an open-source real-time monitoring tool for GNU/Linux systems. It embeds its own web server and displays the results of its various probes on a web page.

      Installation is now trivial:

      • For Fedora:
        sudo dnf install netdata
      • For EPEL:
        sudo yum install netdata

      Start the service and enable it at boot

      sudo systemctl start netdata
      sudo systemctl enable netdata

      Then simply point your favorite web browser at http://localhost:19999/


      Netdata can also display alerts on your desktop and/or send them by email.

      By default netdata only accepts local connections, but this setting can be changed in netdata's configuration file (comment out the bind to = localhost line in /etc/netdata/netdata.conf and restart the netdata service) or, better, serve netdata through a reverse proxy. Here is an example configuration for apache, without authentication.

      <VirtualHost *:80>
          ServerName netdata.example.com
          ProxyPass / http://localhost:19999/
          ProxyPassReverse / http://localhost:19999/
          ProxyPreserveHost On
          ProxyRequests Off
          <Location />
              Require all granted
          </Location>
      </VirtualHost>
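
      The VirtualHost above relies on Apache's mod_proxy and mod_proxy_http modules; on Fedora they are normally shipped and loaded by the stock httpd packaging, but a quick check and reload never hurts:

      sudo httpd -M | grep -E 'proxy_module|proxy_http_module'
      sudo apachectl configtest
      sudo systemctl reload httpd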

      Enabling the mysql plugin requires additional packages, to provide the Python MySQL module, and authorizing netdata to connect to the DBMS.

      • For Fedora:
        sudo dnf install python2-mysql python3-mysql
      • For EPEL:
        sudo yum install MySQL-python

      Basic SQL commands to authorize netdata to connect to the DBMS. You can of course specify a password here (IDENTIFIED BY), which will then have to be set in the file /etc/netdata/conf.d/python.d/mysql.conf

      CREATE USER 'netdata'@'localhost';
      GRANT USAGE ON *.* TO 'netdata'@'localhost';
      FLUSH PRIVILEGES;
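
      To verify that the grant works, you can try connecting as that user from the shell before restarting netdata (assuming the mariadb or mysql command-line client is installed):

      mysql -u netdata -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"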

      FLISol Panama 2019

      Posted by Davis Álvarez on May 05, 2019 04:44 AM

      What is FLISol?

      FLISoL is the largest Free Software dissemination event in Latin America and is aimed at all types of audiences: students, academics, businessmen, workers, public officials, enthusiasts and even people who do not have much computer knowledge.

      FLISoL has been held since 2005, and since 2008 it has taken place on the 4th Saturday of April every year. Admission is free and its main objective is to promote the use of free software, making its philosophy, scope, progress and development known to the general public.

      The event is organized by local Free Software communities and takes place simultaneously at many venues, where Free Software is installed free of charge and completely legally on the attendees' computers.

      In Panama this activity has been held since 2007 in different provinces and universities, together with the Free Software communities established in the country, such as FLOSSPA, FSL, Fedora and Pynama, among others. Student organizations such as the IEEE, as well as professionals passionate about software and free culture, also join the activity.

      FLISol 2019

      University of Panama

      In recent years, the Faculty of Informatics, Electronics and Communication at the University of Panama has made it a tradition to vary the venue where the event is organized; students and teachers from its different regional centers travel to the chosen venue.

      (Embedded map: Universidad de Panamá, Centro Regional Panamá Este)

      This year the Panama East Regional University Center, in Chepo, was the host of the FLISol organized by the University of Panama. A lot of students and teachers traveled to Chepo from different provinces of Panama to celebrate the activity.

      In Chepo there were 3 conference rooms running from 8:00 until 12:30, as well as a laboratory dedicated to workshops.

      The hundreds of students who came to the venue experienced a real festival; an atmosphere of joy, enthusiasm and curiosity was evident among the attendees.

      FLISol University Of Panama

      It is worth mentioning the cordial and kind treatment offered by the hosts; from the breakfast we received to the chicken soup offered at lunchtime, the attention was amazing!

      I had the opportunity to collaborate in the organization and also give a talk; as an ambassador of the Fedora Project, I spoke about “Fedora Project, Get Involved in the International Community”.

      Fedora Project, Get Involved in the International Community

      Other talks were offered in Chepo: Ismael Valderrma spoke about “Pedagogical Innovation and the Transformation of the Educational Open-Board Scenario”, the Mozilla Community was represented by Luis Manuel Segundo, Ernesto Morales of WordPress Panama spoke about “WordPress Professional Use”, and Ludwig Villalobos of NacionBits offered an Arduino workshop. There were many other talks and workshops; at the end of this article you will find the link where you can see all the topics covered.

      Some images of the event in Chepo

      Interamerican University of Panamá

      The students of the Interamerican University of Panama together with the Free Software Community FLOSSPA organized the event in Panama City, starting at 9:00 and ending at 15:00.

      FLISol Interamerican University of Panama

      A marathon of talks was held at the Interamerican University of Panama; it is worth mentioning that there were speakers from Colombia and Peru celebrating the 15th anniversary of FLISol.

      In 2020, the University of Panama will hold FLISol at its regional center in San Miguelito, and it promises to be a well-attended event. If you would like to be part of the organization or speak about a topic, do not hesitate to contact me, so we can continue promoting this noble cause.

      If you want to see the agenda of both venues of the event, follow the official link of FLISol Panamá 2019.

      The post FLISol Panama 2019 appeared first on Davis Álvarez.

      Vimwiki diary template

      Posted by Jakub Kadlčík on May 05, 2019 12:00 AM

      Vimwiki is a plugin for managing personal wiki in your Vim environment. It provides a simple way to organize notes and create links between them, manage todo lists, write a diary and have many other useful features. In this article, we are going to focus solely on the diary and how to make its usage much smoother.

      Don’t think of a diary in the conventional sense as a notebook that someone opens every night in bed and writes “My dear diary, why am I still alone?” to a new page while crying and eating ice cream. Of course, this is a valid way to use a diary, but there is much more to it. Technically, the diary in Vimwiki is a set of files named YYYY-MM-DD.wiki stored in the ~/vimwiki/diary/ directory. It is not at all concerned with their content, so it can be whatever fits your needs. Personally, I prefer to separate each page in my diary into multiple sections. First, I have my daily checklist, a set of actions that I have to do on a daily basis but tend to forget; then a section with things I have to do on that specific day; and lastly some space for taking quick notes, which will be categorized and written down in greater context later.

      The problem with Vimwiki is that it always creates diary pages empty. There was an RFE requesting entry templates, but in my opinion it was prematurely closed, suggesting ultisnips as the way to go. I don’t like this solution at all; fortunately, there is a better option, which was originally recommended to me by @brennen. I made some minor improvements, but he is the one who truly deserves the credit. So, how do you make templates for Vimwiki diary pages?

      This is an animated gif, click on it! Please ... it was a lot of work.

      First, we need to create a script that prints the desired template to standard output. Why a script and not just a template in a text file? Because for each day we want to at least generate its date for the title. Use whatever programming language you prefer. It can look like this.

      #!/usr/bin/python
      import sys
      import datetime
      
      template = """# {date}
      
      ## Daily checklist
      
      * [ ] Take a vitamin C
      * [ ] Eat your daily carrot!
      
      ## Todo
      
      ## Notes"""
      
      date = (datetime.date.today() if len(sys.argv) < 2
              # Expecting filename in YYYY-MM-DD.foo format
              else sys.argv[1].rsplit(".", 1)[0])
      print(template.format(date=date))
      

      Save the script as ~/.vim/bin/generate-vimwiki-diary-template. Don’t forget to make it executable.

      chmod +x ~/.vim/bin/generate-vimwiki-diary-template
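
      Before wiring the script into Vim, you can sanity-check it by running it by hand with a file name in the expected format (the date below is just an example):

      ~/.vim/bin/generate-vimwiki-diary-template 2019-05-05.md
      # Prints the template with "# 2019-05-05" as the title.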
      

      Now, we need to configure Vim to use it when creating diary pages. This requires no additional plugin. Just put this line in your ~/.vimrc.

      au BufNewFile ~/vimwiki/diary/*.md :silent 0r !~/.vim/bin/generate-vimwiki-diary-template '%'
      

      If needed, change the ~/vimwiki/diary/*.md pattern to the appropriate extension, depending on whether you use .wiki or .md. And that’s really all of it. Please come chat in #vimwiki on freenode.net and let us know what you think!