Fedora People

Leaving Fedora Infrastructure

Posted by Stephen Smoogen on April 16, 2021 08:51 PM

 In June 2009, I was given the opportunity to work in Fedora Infrastructure as Mike McGrath's assistant so that he could take some vacation. At the time, I was living in New Mexico and had worked at the University of New Mexico for several years. I started working remotely for the first time in my life and had to learn all the nuances of IRC meetings and typing clearly and quickly. With the assistance of Seth Vidal, Luke Macken, Ricky Zhou, and many others, I quickly got into 'the swing of things', with only 2 or 3 occasions of taking all of Fedora offline because of a missed ; in a DNS config file.

For the last 4300+ days, I have worked with many great and wonderful system administrators and programmers to keep the Fedora Infrastructure running and growing, so that the number of systems using its 'deliverables' has grown into the millions. I am highly indebted to everyone, from volunteers to paid Red Hatters, who has helped me grow. I want to especially thank Kevin Fenzi, Rick Elrod, and Pierre-Yves Chibon for the insights I have needed.

Over the years, we have maintained a constantly spinning set of plates which allows packagers to commit changes, build software, produce deliverables, and start all over again. We have moved our build systems physically at least 3 times, once across the North American continent. We have dealt with security breaches, mass password changes, and the undead project of replacing the 'Fedora Account System', which had been going on since before I started. [To the team which finished that monumental task in the last 3 months: we are all highly indebted. There may be pain points, but they accomplished a herculean task.]

All in all, it has been a very good decade of working on a project that many said would be 'gone' by the next release. However, it is time for me to move on to other projects and find new challenges that excite me. Starting next week, I will be moving to a group with a strong focus on embedded hardware. I have been interested in embedded systems in some form or another since the 1970s. My first computer memories are of systems my dad showed me which would have been in an A-6 plane. From there, I remember my dad taking me to see a friend who repaired PDP systems for textile mills and let me work on my first Unix, running on a DEC Rainbow. Whenever I came home from those visits, I would have a smile and a hum of excitement which would not leave me for days. I remember having that hum in 1992, when a student teacher showed me MCC Linux running on an i386 we had been repairing from spare parts. I could do anything and everything on that box for a fraction of the price of the big Unix boxes I had to pay account time for. And recently I went to a set of talks on embedded projects and found myself with the same hum. It was a surprise for me, but I found myself more and more interested as the weeks went by.

I was offered a chance to move over, and I decided to take it. I will still be in the Fedora community but will not be able to work much on Infrastructure issues. If I have tasks that you are waiting for, please let me know, and I will finish them either by myself or by doing a full handoff to someone else in Infrastructure. Thank you all for your help and patience over these last 11+ years. 

Friday’s Fedora Facts: 2021-15

Posted by Fedora Community Blog on April 16, 2021 07:41 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)! The Final freeze is underway. The F34 Final Go/No-Go meeting is Thursday.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.



Event | Format | Dates | CfP deadline
elementary Developer Weekend | virtual | 26-27 Jun | closes ~20 Apr
All Things Open | virtual | 17-19 Oct | closes 30 Apr
Akademy | virtual | 18-25 Jun | closes 2 May
openSUSE Virtual Conference | virtual | 18-20 Jun | closes 4 May
DevConf.US | virtual | 2-3 Sep | closes 31 May

Help wanted

Upcoming test days

Prioritized Bugs

Upcoming meetings


Fedora Linux 34


Upcoming key schedule milestones:

  • 2021-04-27 — Final release target #1


Bug ID | Component | Bug Status | Blocker Status

Fedora Linux 35


Proposal | Type | Status
Reduce dependencies on python3-setuptools | System-Wide | Approved
RPM 4.17 | System-Wide | FESCo #2593
Smaller Container Base Image (remove sssd-client, util-linux, shadow-utils) | Self-Contained | FESCo #2594
Erlang 24 | Self-Contained | FESCo #2595
Switching Cyrus Sasl from BerkeleyDB to GDBM | System-Wide | FESCo #2596
Debuginfod By Default | Self-Contained | FESCo #2597
Package information on ELF objects | System-Wide | Announced

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.


Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-15 appeared first on Fedora Community Blog.

Use the DNF local plugin to speed up your home lab

Posted by Fedora Magazine on April 16, 2021 08:00 AM


If you are a Fedora Linux enthusiast or a developer working with multiple instances of Fedora Linux, then you might benefit from the DNF local plugin. Typical beneficiaries include an enthusiast running a cluster of Raspberry Pis or someone running several virtual machines managed by Vagrant. The DNF local plugin reduces the time required for DNF transactions by transparently creating and managing a local RPM repository. Because accessing files on a local file system is significantly faster than downloading them repeatedly, multiple Fedora Linux machines will see a significant performance improvement when running dnf with the DNF local plugin enabled.

I recently started using this plugin after reading a tip from Glenn Johnson (aka glennzo) in a 2018 fedoraforum.org post. While working on a Raspberry Pi based Kubernetes cluster running Fedora Linux and also on several container-based services, I winced with every DNF update on each Pi or each container that downloaded a duplicate set of rpms across my expensive internet connection. In order to improve this situation, I searched for a solution that would cache rpms for local reuse. I wanted something that would not require any changes to repository configuration files on every machine. I also wanted it to continue to use the network of Fedora Linux mirrors. I didn’t want to use a single mirror for all updates.

Prior art

An internet search yields two common solutions that eliminate or reduce repeat downloads of the same RPM set – create a private Fedora Linux mirror or set up a caching proxy.

Fedora provides guidance on setting up a private mirror. A mirror requires a lot of bandwidth and disk space and significant work to maintain. A full private mirror would be too expensive and it would be overkill for my purposes.

The most common solution I found online was to implement a caching proxy using Squid. I had two concerns with this type of solution. First, I would need to edit repository definitions stored in /etc/yum.repo.d on each virtual and physical machine or container to use the same mirror. Second, I would need to use http and not https connections which would introduce a security risk.

After reading Glenn’s 2018 post on the DNF local plugin, I searched for additional information but could not find much of anything besides the sparse documentation for the plugin on the DNF documentation web site. This article is intended to raise awareness of this plugin.

About the DNF local plugin

The online documentation provides a succinct description of the plugin: “Automatically copy all downloaded packages to a repository on the local filesystem and generating repo metadata”. The magic happens when there are two or more Fedora Linux machines configured to use the plugin and to share the same local repository. These machines can be virtual machines or containers running on a host and all sharing the host filesystem, or separate physical hardware on a local area network sharing the file system using a network-based file system sharing technology. The plugin, once configured, handles everything else transparently. Continue to use dnf as before. dnf will check the plugin repository for rpms, then proceed to download from a mirror if not found. The plugin will then cache all rpms in the local repository regardless of their upstream source – an official Fedora Linux repository or a third-party RPM repository – and make them available for the next run of dnf.

Install and configure the DNF local plugin

Install the plugin using dnf. The createrepo_c package will be installed as a dependency; it is used, if needed, to create the local repository.

sudo dnf install python3-dnf-plugin-local

The plugin configuration file is stored at /etc/dnf/plugins/local.conf. An example copy of the file is provided below. The only change required is to set the repodir option. The repodir option defines where on the local filesystem the plugin will keep the RPM repository.

[main]
enabled = true
# Path to the local repository.
# repodir = /var/lib/dnf/plugins/local

[createrepo]
# Createrepo options. See man createrepo_c
# This option lets you disable the createrepo command. This could be useful
# for large repositories where metadata is periodically generated by cron,
# for example. This also has the side effect of only copying the packages
# to the local repo directory.
enabled = true

# If you want to speed up createrepo with the --cachedir option. Eg.
# cachedir = /tmp/createrepo-local-plugin-cachedir

# quiet = true

# verbose = false

Change repodir to the filesystem directory where you want the RPM repository stored. For example, change repodir to /srv/repodir as shown below.

# Path to the local repository.
# repodir = /var/lib/dnf/plugins/local
repodir = /srv/repodir

Finally, create the directory if it does not already exist. If this directory does not exist, dnf will display some errors when it first attempts to access the directory. The plugin will create the directory, if necessary, despite the initial errors.

sudo mkdir -p /srv/repodir

Repeat this process on any virtual machine or container that you want to share the local repository. See the use cases below for more information. An alternative configuration using NFS (network file system) is also provided below.

How to use the DNF local plugin

After you have installed the plugin, you do not need to change how you use dnf. The plugin will cause a few additional steps to run transparently behind the scenes whenever dnf is called. After dnf determines which rpms to update or install, the plugin will try to retrieve them from the local repository before trying to download them from a mirror. After dnf has successfully completed the requested updates, the plugin will copy any rpms downloaded from a mirror to the local repository and then update the local repository’s metadata. The downloaded rpms will then be available in the local repository for the next dnf client.
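As noted later in this article, dnf tracks the local repository under the repo id _dnf_local, which gives you a quick way to confirm the plugin is active (a sanity check of my own, not part of the plugin docs):

```shell
# Quick check (assumes the plugin is installed and dnf has run at
# least once since): the plugin's repository should appear in the
# repo list under the id "_dnf_local".
dnf repolist --enabled | grep _dnf_local \
  || echo "_dnf_local not listed yet - run a dnf install or update first"
```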

There are two points to be aware of. First, benefits from the local repository only occur if multiple machines share the same architecture (for example, x86_64 or aarch64). Virtual machines and containers running on a host will usually share the same architecture as the host. But if there is only one aarch64 device and one x86_64 device there is little real benefit to a shared local repository unless one of the devices is constantly reset and updated which is common when developing with a virtual machine or container. Second, I have not explored how robust the local repository is to multiple dnf clients updating the repository metadata concurrently. I therefore run dnf from multiple machines serially rather than in parallel. This may not be a real concern but I want to be cautious.
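Until that robustness question is settled, a simple serial loop is enough. A minimal sketch, with hypothetical hostnames and the real update command left commented out:

```shell
# Run updates one machine at a time (hostnames are hypothetical);
# swap the echo for the real command once you trust the setup:
#   ssh "$host" sudo dnf -y update
for host in pi1.lan pi2.lan pi3.lan; do
  echo "updating $host"
done
```

The Ansible playbook described later in this article automates the same serial pattern.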

The use cases outlined below assume that work is being done on Fedora Workstation. Other desktop environments can work as well but may take a little extra effort. I created a GitHub repository with examples to help with each use case. Click the Code button at https://github.com/buckaroogeek/dnf-local-plugin-examples to clone the repository or to download a zip file.

Use case 1: networked physical machines

The simplest use case is two or more Fedora Linux computers on the same network. Install the DNF local plugin on each Fedora Linux machine and configure the plugin to use a repository on a network-aware file system. There are many network-aware file systems to choose from. Which file system you will use will probably be influenced by the existing devices on your network.

For example, I have a small Synology Network Attached Storage device (NAS) on my home network. The web admin interface for the Synology makes it very easy to set up a NFS server and export a file system share to other devices on the network. NFS is a shared file system that is well supported on Fedora Linux. I created a share on my NAS named nfs-dnf and exported it to all the Fedora Linux machines on my network. For the sake of simplicity, I am omitting the details of the security settings. However, please keep in mind that security is always important even on your own local network. If you would like more information about NFS, the online Red Hat Enable Sysadmin magazine has an informative post that covers both client and server configurations on Red Hat Enterprise Linux. They translate well to Fedora Linux.

I configured the NFS client on each of my Fedora Linux machines using the steps shown below, where quga.lan is the hostname of my NAS device.

Install the NFS client on each Fedora Linux machine.

$ sudo dnf install nfs-utils

Get the list of exports from the NFS server:

$ showmount -e quga.lan
Export list for quga.lan:
/volume1/nfs-dnf  pi*.lan

Create a local directory to be used as a mount point on the Fedora Linux client:

$ sudo mkdir -p /srv/repodir

Mount the remote file system on the local directory. See man mount for more information and options.

$ sudo mount -t nfs -o vers=4 quga.lan:/volume1/nfs-dnf /srv/repodir

The DNF local plugin will now work as long as the client remains up. If you want the NFS export to be mounted automatically when the client reboots, you must edit /etc/fstab as demonstrated below. I recommend making a backup of /etc/fstab before editing it. You can substitute nano or another editor of your choice for vi if you prefer.

$ sudo vi /etc/fstab

Append the following line at the bottom of /etc/fstab, then save and exit.

quga.lan:/volume1/nfs-dnf /srv/repodir nfs defaults,timeo=900,retrans=5,_netdev 0 0

Finally, notify systemd that it should rescan /etc/fstab by issuing the following command.

$ sudo systemctl daemon-reload

NFS works across the network and, like all network traffic, may be blocked by firewalls on the client machines. Use firewall-cmd to allow NFS-related network traffic through each Fedora Linux machine’s firewall.

$ sudo firewall-cmd --permanent --zone=public --add-service=nfs
$ sudo firewall-cmd --reload

As you can imagine, replicating these steps correctly on multiple Fedora Linux machines can be challenging and tedious. Ansible automation solves this problem.

In the rpi-example directory of the github repository I’ve included an example Ansible playbook (configure.yaml) that installs and configures both the DNF plugin and the NFS client on all Fedora Linux machines on my network. There is also a playbook (update.yaml) that runs a DNF update across all devices. See this recent post in Fedora Magazine for more information about Ansible.

To use the provided Ansible examples, first update the inventory file (inventory) to include the list of Fedora Linux machines on your network that you want to manage. Next, install two Ansible roles in the roles subdirectory (or another suitable location).

$ ansible-galaxy install --roles-path ./roles -r requirements.yaml

Run the configure.yaml playbook to install and configure the plugin and NFS client on all hosts defined in the inventory file. The role that installs and configures the NFS client does so via /etc/fstab but also takes it a step further by creating an automount for the NFS share in systemd. The automount is configured to mount the share only when needed and then to automatically unmount. This saves network bandwidth and CPU cycles which can be important for low power devices like a Raspberry Pi. See the github repository for the role and for more information.
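The automount behavior the role configures can also be expressed directly in /etc/fstab. A sketch of an equivalent entry, reusing the share from earlier (the idle timeout value is my own choice):

```
quga.lan:/volume1/nfs-dnf /srv/repodir nfs defaults,_netdev,x-systemd.automount,x-systemd.idle-timeout=300 0 0
```

With x-systemd.automount, systemd mounts the share on first access, and x-systemd.idle-timeout unmounts it after the share has been idle for the given number of seconds.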

$ ansible-playbook -i inventory configure.yaml

Finally, Ansible can be configured to execute dnf update on all the systems serially by using the update.yaml playbook.

$ ansible-playbook -i inventory update.yaml

Ansible and other automation tools such as Puppet, Salt, or Chef can be big time savers when working with multiple virtual or physical machines that share many characteristics.

Use case 2: virtual machines running on the same host

Fedora Linux has excellent built-in support for virtual machines. The Fedora Project also provides Fedora Cloud base images for use as virtual machines. Vagrant is a tool for managing virtual machines. Fedora Magazine has instructions on how to set up and configure Vagrant. Add the following line in your .bashrc (or other comparable shell configuration file) to inform Vagrant to use libvirt automatically on your workstation instead of the default VirtualBox.

export VAGRANT_DEFAULT_PROVIDER=libvirt
In your project directory initialize Vagrant and the Fedora Cloud image (use 34-cloud-base for Fedora Linux 34 when available):

$ vagrant init fedora/33-cloud-base

This creates a Vagrant file in the project directory. Edit the Vagrant file to look like the example below. DNF will likely fail with the default memory settings for libvirt. So the example Vagrant file below provides additional memory to the virtual machine. The example below also shares the host /srv/repodir with the virtual machine. The shared directory will have the same path in the virtual machine – /srv/repodir. The Vagrant file can be downloaded from github.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# define repo directory; same name on host and vm
REPO_DIR = "/srv/repodir"

Vagrant.configure("2") do |config|

  config.vm.box = "fedora/33-cloud-base"

  config.vm.provider :libvirt do |v|
    v.memory = 2048
  #  v.cpus = 2
  end

  # share the local repository with the vm at the same location
  config.vm.synced_folder REPO_DIR, REPO_DIR

  # ansible provisioner - commented out by default
  # the ansible role is installed into a path defined by
  # ansible.galaxy_roles_path below. The extra_vars are ansible
  # variables passed to the playbook.
#  config.vm.provision "ansible" do |ansible|
#    ansible.verbose = "v"
#    ansible.playbook = "ansible/playbook.yaml"
#    ansible.extra_vars = {
#      repo_dir: REPO_DIR,
#      dnf_update: false
#    }
#    ansible.galaxy_role_file = "ansible/requirements.yaml"
#    ansible.galaxy_roles_path = "ansible/roles"
#  end
end

Once you have Vagrant managing a Fedora Linux virtual machine, you can install the plugin manually. SSH into the virtual machine:

$ vagrant ssh

When you are at a command prompt in the virtual machine, repeat the steps from the Install and configure the DNF local plugin section above. The Vagrant configuration file should have already made /srv/repodir from the host available in the virtual machine at the same path.

If you are working with several virtual machines or repeatedly re-initializing a new virtual machine, then some simple automation becomes useful. As with the network example above, I use Ansible to automate this process.

In the vagrant-example directory on github, you will see an ansible subdirectory. Edit the Vagrant file and remove the comment marks under the ansible provisioner section. Make sure the ansible directory and its contents (playbook.yaml, requirements.yaml) are in the project directory.

After you’ve uncommented the lines, the ansible provisioner section in the Vagrant file should look similar to the following:

  # ansible provisioner
  # the ansible role is installed into a path defined by
  # ansible.galaxy_roles_path below. The extra_vars are ansible
  # variables passed to the playbook.
  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "v"
    ansible.playbook = "ansible/playbook.yaml"
    ansible.extra_vars = {
      repo_dir: REPO_DIR,
      dnf_update: false
    }
    ansible.galaxy_role_file = "ansible/requirements.yaml"
    ansible.galaxy_roles_path = "ansible/roles"
  end

Ansible must be installed (sudo dnf install ansible). Note that there are significant changes to how Ansible is packaged beginning with Fedora Linux 34 (use sudo dnf install ansible-base ansible-collections*).

If you run Vagrant now (or reprovision: vagrant provision), Ansible will automatically download an Ansible role that installs the DNF local plugin. It will then use the downloaded role in a playbook. You can vagrant ssh into the virtual machine to verify that the plugin is installed and to verify that rpms are coming from the DNF local repository instead of a mirror.

Use case 3: container builds

Container images are a common way to distribute and run applications. If you are a developer or enthusiast using Fedora Linux containers as a foundation for applications or services, you will likely use dnf to update the container during the development/build process. Application development is iterative and can result in repeated executions of dnf pulling the same RPM set from Fedora Linux mirrors. If you cache these rpms locally then you can speed up the container build process by retrieving them from the local cache instead of re-downloading them over the network each time. One way to accomplish this is to create a custom Fedora Linux container image with the DNF local plugin installed and configured to use a local repository on the host workstation. Fedora Linux offers podman and buildah for managing the container build, run and test life cycle. See the Fedora Magazine post How to build Fedora container images for more about managing containers on Fedora Linux.

Note that the fedora_minimal container uses microdnf by default which does not support plugins. The fedora container, however, uses dnf.

A script that uses buildah and podman to create a custom Fedora Linux image named myFedora is provided below. The script creates a mount point for the local repository at /srv/repodir. The below script is also available in the container-example directory of the github repository. It is named base-image-build.sh.

set -x

# bash script that creates a 'myfedora' image from fedora:latest.
# Adds dnf-local-plugin, points plugin to /srv/repodir for local
# repository and creates an external mount point for /srv/repodir
# that can be used with a -v switch in podman/docker

# custom image name
custom_name=myfedora

# scratch conf file name (a temp file; the exact name is not important)
tmp_name=$(mktemp)

# location of plugin config file
configuration_name=/etc/dnf/plugins/local.conf

# location of repodir on container
container_repodir=/srv/repodir

# create scratch plugin conf file for container
# using repodir location as set in container_repodir
cat <<EOF > "$tmp_name"
[main]
enabled = true
repodir = $container_repodir

[createrepo]
enabled = true
# If you want to speedup createrepo with the --cachedir option. Eg.
# cachedir = /tmp/createrepo-local-plugin-cachedir
# quiet = true
# verbose = false
EOF

# pull registry.fedoraproject.org/fedora:latest
podman pull registry.fedoraproject.org/fedora:latest

#start the build
mkdev=$(buildah from fedora:latest)

# tag author
buildah config --author "$USER" "$mkdev"

# install dnf-local-plugin, clean
# do not run update as local repo is not operational
buildah run "$mkdev" -- dnf --nodocs -y install python3-dnf-plugin-local createrepo_c
buildah run "$mkdev" -- dnf -y clean all

# create the repo dir
buildah run "$mkdev" -- mkdir -p "$container_repodir"

# copy the scratch plugin conf file from host
buildah copy "$mkdev" "$tmp_name" "$configuration_name"

# mark container repodir as a mount point for host volume
buildah config --volume "$container_repodir" "$mkdev"

# create myfedora image
buildah commit "$mkdev" "localhost/$custom_name:latest"

# clean up working image
buildah rm "$mkdev"

# remove scratch file
rm "$tmp_name"

Given normal security controls for containers, you will usually run this script with sudo, and also use sudo when running the myFedora image in your development process.

$ sudo ./base-image-build.sh

To list the images stored locally and see both fedora:latest and myfedora:latest run:

$ sudo podman images

To run the myFedora image as a container and get a bash prompt in the container run:

$ sudo podman run -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash

Podman also allows you to run containers rootless (as an unprivileged user). Run the script without sudo to create the myfedora image and store it in the unprivileged user’s image repository:

$ ./base-image-build.sh

In order to run the myfedora image as a rootless container on a Fedora Linux host, an additional flag is needed. Without the extra flag, SELinux will block access to /srv/repodir on the host.

$ podman run --security-opt label=disable -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash

By using this custom image as the base for your Fedora Linux containers, the iterative building and development of applications or services on them will be faster.

Bonus Points – for even better dnf performance, Dan Walsh describes how to share dnf metadata between a host and container using a file overlay (see https://www.redhat.com/sysadmin/speeding-container-buildah). This technique will work in combination with a shared local repository only if the host and the container use the same local repository. The dnf metadata cache includes metadata for the local repository under the name _dnf_local.

I have created a container file that uses buildah to do a dnf update on a fedora:latest image. I’ve also created a container file to repeat the process using a myfedora image. The dnf update involves 111 rpms totaling 53 MB. The only difference between the images is that myfedora has the DNF local plugin installed. Using the local repository cut the elapsed time by more than half in this example and saved 53 MB of internet bandwidth.

With the fedora:latest image the command and elapsed time is:

# sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -f Containerfile.3 .
Elapsed Time: 0:48.06

With the myfedora image the command and elapsed time is less than half of the base run. The :Z on the -v volume below is required when running the container on a SELinux-enabled host.

# sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -v /srv/repodir:/srv/repodir:Z -f Containerfile.4 .
Elapsed Time: 0:19.75

Repository management

The local repository will accumulate files over time. Among the files will be many versions of rpms that change frequently. The kernel rpms are one such example. A system upgrade (for example upgrading from Fedora Linux 33 to Fedora Linux 34) will copy many rpms into the local repository. The dnf repomanage command can be used to remove outdated rpm archives. I have not used the plugin long enough to explore this. The interested and knowledgeable reader is welcome to write an article about the dnf repomanage command for Fedora Magazine.
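For readers who want to experiment, the general shape would be something like the following sketch. This is untested against a live local repository (check man dnf-repomanage before running it on yours), and the guard is only there so the sketch does nothing on a machine without the repository:

```shell
# Sketch only - I have not run this against a live repository.
# List rpms older than the newest two versions of each package,
# remove them, then rebuild the repository metadata.
if command -v dnf >/dev/null && [ -d /srv/repodir ]; then
  dnf repomanage --old --keep 2 /srv/repodir | xargs -r rm -f
  createrepo_c --update /srv/repodir
else
  echo "dnf or /srv/repodir not available; nothing pruned"
fi
```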

Finally, I keep the x86_64 rpms for my workstation, virtual machines and containers in a local repository that is separate from the aarch64 local repository for the Raspberry Pis and (future) containers hosting my Kubernetes cluster. I have separated them for reasons of convenience and happenstance. A single repository location should work across all architectures.
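If you do want to keep per-architecture repositories, one convention (my own, not from the plugin docs) is to key repodir off uname -m. The edit is demonstrated here on a scratch copy of the config rather than the real file:

```shell
# Demonstrated on a scratch copy of the config; on a real machine you
# would edit /etc/dnf/plugins/local.conf (with sudo) instead.
arch=$(uname -m)
conf=/tmp/local.conf.demo
printf '[main]\nenabled = true\nrepodir = /srv/repodir\n' > "$conf"
# point repodir at an arch-specific subdirectory, e.g. /srv/repodir/x86_64
sed -i "s|^repodir = .*|repodir = /srv/repodir/$arch|" "$conf"
grep '^repodir' "$conf"
```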

An important note about Fedora Linux system upgrades

Glenn Johnson has more than four years experience with the DNF local plugin. On occasion he has experienced problems when upgrading to a new release of Fedora Linux with the DNF local plugin enabled. Glenn strongly recommends that the enabled attribute in the plugin configuration file /etc/dnf/plugins/local.conf be set to false before upgrading your systems to a new Fedora Linux release. After the system upgrade, re-enable the plugin. Glenn also recommends using a separate local repository for each Fedora Linux release. For example, a NFS server might export /volume1/dnf-repo/33 for Fedora Linux 33 systems only. Glenn hangs out on fedoraforum.org – an independent online resource for Fedora Linux users.


The DNF local plugin has been beneficial to my ongoing work with a Fedora Linux based Kubernetes cluster. The containers and virtual machines running on my Fedora Linux desktop have also benefited. I appreciate how it supplements the existing DNF process and does not dictate any changes to how I update my systems or how I work with containers and virtual machines. I also appreciate not having to download the same set of rpms multiple times which saves me money, frees up bandwidth, and reduces the load on the Fedora Linux mirror hosts. Give it a try and see if the plugin will help in your situation!

Thanks to Glenn Johnson for his post on the DNF local plugin which started this journey, and for his helpful reviews of this post.

PHP version 7.4.18RC1 and 8.0.5RC1

Posted by Remi Collet on April 16, 2021 05:12 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests, and also as base packages.

RPM of PHP version 8.0.5RC1 are available as SCL in remi-test repository and as base packages in the remi-php80-test repository for Fedora 32-34 and Enterprise Linux.

RPM of PHP version 7.4.18RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 32-34 or remi-php74-test repository for Enterprise Linux.

PHP version 7.3 is now in security mode only, so no more RC will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 8.0 as Software Collection:

yum --enablerepo=remi-test install php80

Parallel installation of version 7.4 as Software Collection:

yum --enablerepo=remi-test install php74

Update of system version 8.0:

yum --enablerepo=remi-php80,remi-php80-test update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf --enablerepo=remi-modular-test update php\*

Update of system version 7.4:

yum --enablerepo=remi-php74,remi-php74-test update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf --enablerepo=remi-modular-test update php\*

Notice: version 8.0.5RC1 is also in Fedora rawhide for QA.

The oci8 extension now uses Oracle Client version 21.1.

EL-8 packages are built using RHEL-8.3.

EL-7 packages are built using RHEL-7.9.

The RC version is usually the same as the final version (no changes accepted after the RC, except for security fixes).

Software Collections (php74, php80)

Base packages (php)

New badge: Fedora 34 Cloud Test Day!

Posted by Fedora Badges on April 16, 2021 02:45 AM
Fedora 34 Cloud Test Day: You wrangled the clouds for Fedora 34!

Create a git branch archive

Posted by Josef Strzibny on April 16, 2021 12:00 AM

This week I needed to prepare some files for the buyers of my book Deployment from Scratch. I use various git repositories for the content and case studies, and I needed to create archives for the current release quickly.

Luckily, I found out this is much easier than I made it out to be.

Instead of manually archiving the files from git ls-files with tar and fiddling with branches, I ended up just calling git archive:

$ git archive $branch -o archive.tar
$ git archive $branch --format=tar.gz -o archive.tar.gz

This is a lifesaver if you have different working branches like I do (alpha for alpha pre-release).

And while I am at it…

If you need an archive of a public project already hosted on GitHub, there is a neat way to get the gzipped tarball too:

$ wget https://github.com/strzibny/invoice_printer/archive/master.tar.gz
$ wget https://github.com/strzibny/invoice_printer/archive/master.zip
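To see the two archive formats side by side without touching a real project, here is a self-contained sketch (the repo name, file name, and output paths are just examples) that builds a throwaway repository and archives its current branch:

```shell
# Build a throwaway repo, then archive the current branch both ways.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name demo
echo "hello" > README.md
git add README.md
git commit -q -m "initial commit"
branch=$(git rev-parse --abbrev-ref HEAD)
# Plain tar and gzipped tar, as in the commands above:
git archive "$branch" -o /tmp/demo-archive.tar
git archive "$branch" --format=tar.gz -o /tmp/demo-archive.tar.gz
tar -tf /tmp/demo-archive.tar   # lists: README.md
```

Note that git archive only packs tracked files, which is exactly why it replaces the git ls-files plus tar fiddling.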

UniFi USW Conflicting Information

Posted by Jon Chiappetta on April 15, 2021 12:39 PM

An annoying UI bug: the ports show green gigabit statuses, yet the top overall summary shows CONNECTED|HDX (half-duplex?).

FW: 5.43.35 – NC: 6.0.45


ANNOUNCE: gtk-vnc release 1.2.0 available

Posted by Daniel Berrange on April 15, 2021 10:56 AM

I’m pleased to announce a new release of GTK-VNC, version 1.2.0.

https://download.gnome.org/sources/gtk-vnc/1.2/gtk-vnc-1.2.0.tar.xz (213K)
sha256sum: 7aaf80040d47134a963742fb6c94e970fcb6bf52dc975d7ae542b2ef5f34b94a

Changes in this release include

  • Add API to request fixed zoom level
  • Add API to request fixed aspect ratio when scaling
  • Add APIs for client initiated desktop resize
  • Implement “Extended Desktop Resize” VNC extension
  • Implement “Desktop Rename” VNC extension
  • Implement “Last Rect” VNC extension
  • Implement “XVP” (power control) VNC extension
  • Implement VeNCrypt “plain” auth mode
  • Implement alpha cursor VNC extension
  • Use GTK preferred width/height helpers for resizing
  • Fix misc docs/introspection annotation bugs
  • Honour meson warninglevel setting for compiler flags
  • Fix JPEG decoding in low colour depth modes
  • Fix minor memory leaks
  • Add header file macros for checking API version
  • Change some meson options from “bool” to “feature”
  • Validate GLib/GTK min/max symbol versions at build time
  • Avoid recreating framebuffer if size/format is unchanged
  • Emit resize signal after WMVi update
  • Various fixes & enhancements to python demo program
  • Ensure Gir files build against local libs
  • Enable stack protector on more platforms
  • Don’t force disable introspection on windows
  • Relax min x11 deps for older platforms
  • Avoid mutex deadlock on FreeBSD in test suite
  • Stop using deprecated GLib thread APIs
  • Stop using deprecated GLib main loop APIs
  • Stop using deprecated GObject class private data APIs
  • Add fixes for building on macOS
  • Fix deps for building example program
  • Update translations

Thanks to all those who reported bugs and provided patches that went into this new release.

The syslog-ng insider 2021-04: Grafana; Windows agent; BSD;

Posted by Peter Czanik on April 15, 2021 10:13 AM

Dear syslog-ng users,

This is the 90th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.


Grafana, Loki, syslog-ng: jump-starting a new logging stack

Talking to syslog-ng users, I found that many of them plan to take a closer look at Grafana, due to the upheaval around the change of licensing terms for Elastic. Luckily, it is now possible to jump-start the complete, new logging stack – including Grafana, Loki, syslog-ng and tools to monitor this stack – with a single command. All you need to do is to point a couple of syslog clients at the included syslog-ng server and open Grafana in your browser. Of course, this setup is far from being production-ready, but it can speed up preparing a test environment for you. From this blog, you can learn how to install the Grafana, Loki, syslog-ng stack, how to forward your log messages there, and how to check the results in Grafana.


When to use the syslog-ng agent for Windows?

You can collect log messages from a Windows host in multiple ways using syslog-ng. For large scale installations the easiest is to use the Windows Event Collector (WEC) component of syslog-ng Premium Edition (PE). This way you don’t have to install any new client software on the Windows side; just configure the Windows hosts to send their log messages to the WEC destination. Please note that WEC only works for Windows EventLog. If you need to collect log messages from text files, you need to install the syslog-ng agent for Windows on your hosts. For example, web servers often log to files instead of Windows EventLog. Let’s review how to do a standalone installation of the syslog-ng agent for Windows and then see the differences between using the legacy (RFC3164) and the new (RFC5424) syslog protocol.


Syslog-ng on BSDs

My FOSDEM presentation in the BSD devroom showcased what is new in sudo and syslog-ng and explained how to install or compile these software yourself on FreeBSD. Not only am I a long-time FreeBSD user (I started with version 1.0 in 1994), but I also work on keeping the syslog-ng port in FreeBSD up to date. But soon after my presentation I was asked what I knew about other BSDs. And – while I knew that all BSDs have syslog-ng in their ports system – I realized I had no idea about the shape of those ports. For this article I installed OpenBSD, DragonFlyBSD and NetBSD to check syslog-ng on them. Admittedly, they are not in the best shape: they contain old versions, some do not even start or are unable to collect local log messages.



Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/

tmt hint 01: provisioning options

Posted by Fedora Community Blog on April 15, 2021 07:58 AM

After the initial hint describing the very first steps with tmt, let’s have a look at the available test execution options. Recall the user story from the very beginning:

As a tester or developer, I want to easily run tests in my preferred environment.

Do you want to safely run tests without breaking your laptop? Use the default provision method virtual which will execute tests under a virtual machine using libvirt with the help of testcloud:

tmt run

Would you like to execute tests a bit faster and don’t need the full virtualization support? Run them in a container using podman:

tmt run --all provision --how container

Do you have an already prepared box where everything’s ready for testing and you often ssh to it to experiment safely?

tmt run --all provision --how connect --guest my.test.box

Do you know exactly what the tests are doing and feel safe to quickly run them directly on your localhost?

tmt run -a provision -h local

Note that some provision methods require additional dependencies. Installing them is easy as well:

sudo dnf install -y tmt-provision-container
sudo dnf install -y tmt-provision-virtual

See the tmt guide and examples for some more inspiration.

Happy testing! 🙂

The post tmt hint 01: provisioning options appeared first on Fedora Community Blog.

Release 5.7.0

Posted by Bodhi on April 15, 2021 07:12 AM


This is a feature release.


  • Query different Greenwave contexts for critical path updates, allowing for
    stricter policies to apply (:pr:4180).
  • Use Pagure's new hascommit endpoint API to check users' rights to
    create/edit updates. This allows collaborators to push updates for releases
    for which they have commit access. (:pr:4181).

Bug fixes

  • Fixed an error about handling bugs in automatic updates (:pr:4170).
  • Side-tags weren't emptied when updates for current releases were pushed to
    stable (:pr:4173).
  • Bodhi will avoid sending both 'update can now be pushed' and 'update has been
    pushed' notifications at the same time on updates pushed automatically
  • Clear request status when release goes EOL (:issue:4039).
  • Allow Bodhi to not operate automatically on bugs linked in the changelog for
    specific releases (:issue:4094).
  • Use the release git branch name to query PDC for critpath components
  • Avoid using datetime.utcnow() for updateinfo <updated_date> and <issued_date>
    elements, use "date_submitted" instead. (:issue:4189).
  • Updates which already had a comment that they can be pushed to stable were
    not automatically pushed to stable when the stable_days threshold was
    reached (:issue:4042).


The following developers contributed to this release of Bodhi:

  • Adam Saleh
  • Adam Williamson
  • Clement Verna
  • Daniel Alley
  • Mattia Verga
  • Andrea Misuraca

[Howto] My own mail & groupware server, part 4: Nextcloud

Posted by Roland Wolters on April 14, 2021 10:27 PM
<figure class="alignright size-thumbnail"></figure>

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up and describe the steps in a blog post series. This blog post is #4 of the series and covers the integration of Nextcloud for storage.

Let’s add Nextcloud to the existing mail server. This part will focus on setting it up and configuring it in basic terms. Groupware and webmail will come in a later post! If you are new to this series, don’t forget to read part 1: what, why, how?, and all about the mail server setup itself in the second post, part 2: initial mail server setup. We also added a Git server in part 3: Git server.

Nextcloud as “cloud” server

Today’s online experience covers not only mail and other groupware functions, but also the interaction with files in some online storage. Arguably, for many this is sometimes more important than e-mail.

Thus it makes sense to add a service to the mail server providing a “cloud” experience around file management. The result lacks cloud functionality in terms of high availability, but provides a rich UI, accessibility from all kinds of devices and integration in various services. It also offers the option to extend the functions further.

Nextcloud is probably the best known solution for self-hosted cloud solutions, and is also used at large scale by universities, governments and companies. I also picked it because I had past experience with it and it offers some integrations and add-ons I really like and depend on.

Alternatives worth checking out are owncloud, Seafile and Pydio.

Integration into mailu setup

Nextcloud can be added to an existing mailu setup in three steps:

  1. Let Nginx know about the service
  2. Add a DB and set it up
  3. Add Nextcloud

The proxy bit is easily done by creating the file /data/mailu/overrides/nginx/nc.conf with the following content:

location /nc/ {
  add_header Front-End-Https on;
  proxy_buffering off;
  fastcgi_request_buffering off;
  proxy_pass http://nc/;
  client_max_body_size 0;
}

We also need a DB. Add this to docker-compose.yml:

  # Nextcloud
  ncpostgresql:
    image: postgres:12
    restart: always
    environment:
      POSTGRES_PASSWORD: ...
    volumes:
      - /data/ncpostgresql:/var/lib/postgresql/data

Make sure to add a proper password here! Next, we have to bring the environment down and up again to add the DB container, and then access the DB and create the right users and database with corresponding privileges:

  • Get the DB up & running: docker compose down and docker compose up
  • access DB container: sudo docker exec -it mailu_ncpostgresql_1 /bin/bash
  • become super user: su - postgres
  • add user nextcloud, add proper password here: create user nextcloud with password '...';
  • add nextcloud database: CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
  • change database owner to user nextcloud: ALTER DATABASE nextcloud OWNER TO nextcloud;
  • grant all privileges to nextcloud: GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
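Collected into one place, the SQL from the bullet list above (with the password elided, just as in the post) can be sketched as:

```sql
-- Run inside the DB container as the postgres superuser.
CREATE USER nextcloud WITH PASSWORD '...';
CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
ALTER DATABASE nextcloud OWNER TO nextcloud;
GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
```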

Now we can add the Nextcloud container itself. We will add a few environment variables to properly configure the DB access and the initial admin account. Add the following listing to the Docker Compose file:

  nc:
    image: nextcloud:apache
    restart: always
    environment:
      POSTGRES_HOST: ncpostgresql
      POSTGRES_PASSWORD: ...
      POSTGRES_USER: nextcloud
      POSTGRES_DB: nextcloud
      REDIS_HOST: redis
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: ...
    depends_on:
      - resolver
      - ncpostgresql
    volumes:
      - /data/nc/main:/var/www/html
      - /data/nc/custom_apps:/var/www/html/custom_apps
      - /data/nc/data:/var/www/html/data
      - /data/nc/config:/var/www/html/config
      - /data/nc/zzz_upload_php.ini:/usr/local/etc/php/conf.d/zzz_upload_php.ini

Nextcloud configuration

Before we launch Nextcloud, we need to configure it properly. As shown in the last line in the previous example, a specific file is needed to define the values for PHP file upload sizes. This is only needed in corner cases (browsers split up files during upload automatically these days), but can help sometimes. Create the file /data/nc/zzz_upload_php.ini:
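The post does not show the file's contents at this point; a typical sketch (the exact size values here are assumptions, pick whatever suits your uploads) would be:

```ini
; /data/nc/zzz_upload_php.ini - PHP upload size overrides (values are examples)
upload_max_filesize=2G
post_max_size=2G
```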


Next, we need to create the configuration for the actual Nextcloud instance. Stop the Docker Compose setup, and start it up again. That generates the basic config files on the disk, and you can access /data/nc/config/config.php and adjust the following variables (others are left intact):

  'overwritewebroot' => '/nc',
  'overwritehost' => 'nc.bayz.de',
  'overwriteprotocol' => 'https',
  'trusted_domains' => 
  array (
    0 => 'lisa.bayz.de',
    1 => 'front',
    2 => 'mailu_front_1.mailu_default',
    3 => 'nc.bayz.de',
  ),

After another Docker Compose down and up, the instance should be all good! If the admin password needs to be reset, access the container via sudo docker exec -it mailu_nc_1 /bin/bash and reset the password with: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ user:resetpassword admin"

Next we can connect Nextcloud to the mailu IMAP server to use it for authentication. First we install the app “External user authentication” from the developers section. Next we add the following code to the above mentioned config.php:

  'user_backends' => array(
    array(
      'class' => 'OC_User_IMAP',
      'arguments' => array(
        'imap', 143, 'null', 'bayz.de', true, false
      ),
    ),
  ),

Restart the setup, and logging in as a user should be possible.

Sync existing files

In my case the instance was following a previous one. As part of the migration, a lot of “old” data had to be copied. The problem: copying the data, for example via WebDAV, is time consuming, performs poorly, and might be troublesome when the sync needs to be resumed after an interruption.

It is easier to sync directly from disk to disk with established tools like rsync. However, Nextcloud does not know that new files arrived that way and does not list them. The steps to make Nextcloud aware of those are:

  1. Log in as each user for which data should be synced so that target directories exist underneath the files/ directory
  2. Sync data with rsync or other tool of choice
  3. Correct permissions: chown -R ...:... files/
  4. Access container: sudo docker exec -it mailu_nc_1 /bin/bash
  5. Trigger file scan in Nextcloud: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ files:scan --all"

Recurrent updater

For add-ons like the newsreader, Nextcloud needs to perform tasks on a regular basis. Surprisingly enough, Nextcloud cannot easily do this on its own. The best way is to add a cron job to do that. And the best way to do that is a Systemd timer.

So first we add the service to be triggered regularly. On the host itself (not inside the container) create the file /etc/systemd/system/nextcloudcron.service:

[Unit]
Description=Nextcloud cron.php job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/cron.php"

Then, create the timer via the file /etc/systemd/system/nextcloudcron.timer:

[Unit]
Description=Run Nextcloud cron.php every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloudcron.service

[Install]
WantedBy=timers.target

Enable the timer: systemctl enable --now nextcloudcron.timer. And it is done. This is way more flexible, usable and maintainable than old cron jobs. If you are new to timers, check their execution with sudo systemctl list-timers.

DB performance

A lot of Nextcloud’s performance depends on the performance of the DB. And DBs are all about indices. There are a few commands which can help with that – and which are recommended on the self check inside Nextcloud anyway:

access container: sudo docker exec -it mailu_nc_1 /bin/bash
add missing indices: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-indices"
convert filecache: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:convert-filecache-bigint"
add missing columns: su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ db:add-missing-columns"

Preview generator – fast image loading

The major clients used to access Nextcloud will probably be the Android client and a web browser. However, scrolling through galleries full of images is a pain: it takes ages until all the previews are loaded. Sometimes even a slide show is not possible because it all just takes too long.

This is because the images are not downloaded in real size (that would take too long), instead previews of the size required in that moment are generated live (still takes long, but not that long).

To make this all faster, one idea is to pre-generate the previews! To do so, we install the app “Preview Generator” in our instance. However, this generates a bit too many preview files, many in sizes which are hardly ever used. So we need to alter the sizes to be generated:

$ sudo docker exec -it mailu_nc_1 /bin/bash
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator squareSizes --value='256 1024'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator widthSizes  --value='384 2048'"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set previewgenerator heightSizes --value='256 2048'"

Also we want to limit the preview sizes to not waste too much storage:

$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_x --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set preview_max_y --value 2048"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:system:set jpeg_quality --value 80"
$ su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ config:app:set preview jpeg_quality --value='80'"

Last but not least we run the preview generator:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:generate-all -vvv"

Note that this can easily take hours, and thus I recommend launching it inside a tmux session.

Of course new files will reach the system, so once in a while new previews should be generated. Use this command:

su - www-data -s /bin/bash -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

This can also be added to a cron job similar to the timer above. Create the file /etc/systemd/system/nextcloudpreview.service:

[Unit]
Description=Nextcloud image preview generator job

[Service]
ExecStart=/usr/bin/docker exec mailu_nc_1 su -s /bin/sh www-data -c "/usr/local/bin/php -f /var/www/html/occ preview:pre-generate"

And add a timer similar to the above one triggering the service every 15 minutes. Create the file /etc/systemd/system/nextcloudpreview.timer:

[Unit]
Description=Run Nextcloud preview generator every 15 minutes

[Timer]
OnBootSec=15min
OnUnitActiveSec=15min
Unit=nextcloudpreview.service

[Install]
WantedBy=timers.target

Launch the timer: sudo systemctl enable --now nextcloudpreview.timer

One final word of caution: previews can take up a lot of space. Like A LOT. Expect maybe an additional 20% of storage needed for your images.

What’s next?

With Nextcloud up and running and all old data synced I was feeling good: all basic infrastructure services were running again. People could access all their stuff with only slight adjustments to their URLs.

The missing piece now was webmail and general groupware functionality. This will be covered in another post of this series.

More about that in the next post.

Image by RÜŞTÜ BOZKUŞ from Pixabay


Fedora Linux 34 Cloud Test Day 2021-04-16 through 2021-04-19

Posted by Fedora Community Blog on April 14, 2021 08:11 PM
Fedora Linux 34 Cloud Test Day

Now that Fedora Linux 34 is coming close to the release date, the Fedora Cloud SIG would like to get the community together this week to find and squash some bugs. We are organizing a test day from Friday, April 16th through Monday, April 19th.

For this event we’ll test Fedora Cloud Base content. See the Downloads Page for links to the Beta Cloud Base Images. We have qcow, AMI, and ISO images ready for testing.

How do test days work?

A test day is an event where anyone can help make sure that changes in Fedora are working well in the upcoming release. Fedora community members often participate, but the public is welcome also. You only need to be able to download materials (including some large files), and read and follow technical directions step by step, to contribute.

The wiki page for the Cloud test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day app. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

The post Fedora Linux 34 Cloud Test Day 2021-04-16 through 2021-04-19 appeared first on Fedora Community Blog.

A critique of google chat

Posted by Kevin Fenzi on April 14, 2021 05:58 PM

I’m not usually one to say bad things about… well, anything, but more and more people I interact with lately have been using google chat as their preferred communication medium and I have been asked a few times why I dislike it, so I thought I would do a blog post on it and can then just point people here.

First a few pros before any cons:

  • It works fine in firefox as far as I have seen
  • It’s included in “google workspace”, so if you have that you have google chat too.
  • It looks ok. Links, emojis, reactions, and videos all look/work fine.
  • It’s easy to start a chat with multiple people or groups
  • All your chat history is there on restart (but see below)
  • It’s encrypted (but no idea if google can read all the history or not)

And now all the actual current cons:

  • It’s a browser-only application. If I don’t remember to reload that tab after rebooting, it’s just not there and I get no notices. Needless to say: no 3rd party applications have a chance to do better than the web interface.
  • Its thread model is utter junk. There are ‘threads’ of conversation, but they are all in the same linear timeline, so it’s really hard to tell or notice if someone added something to an old thread without knowing you need to scroll back up to it.
  • It’s horribly “unreliable”. By this I mean I could have the app loaded in my browser and _still_ not see notifications because: google randomly asks me to log in again, google chat decides to collapse older messages behind a “<— and 100 more lines here –>” section, google chat randomly puts an “and 10 more messages” button at the bottom, or someone started a new thread and the one you care about had any of the above happen to it without you ever knowing.
  • Someone you are messaging could just not use google chat or be busy or away. Google chat will helpfully mail them about your message… but 12 hours later. You can also say you are ‘busy’ but only for 8 hours at a time. If you are gone for a week, too bad.
  • Surprisingly, search is kind of bad. It finds things you search for, but there’s no way to go to the ‘next’ match, see how many matches there are, or navigate within the results at all.
  • Related to search, there’s no way to have logs locally to use unix tools on like grep, etc.
  • There’s an indicator of how many messages are unread in a room/chat, but no indication of whether that’s just general activity or whether someone mentioned your name.
  • No good keyboard navigation, you have to click around to change channels, threads, conversations, etc.
  • Unless you have been invited, it’s virtually impossible to find rooms. There’s a ‘browse rooms’ dialog, but it only lets you search the room name, not the description or who is in what room or anything.
  • Depending on how things are configured, you may not be able to chat with everyone, only people in your company or workspace.
  • You can adjust notifications per room somewhat, but it’s not as flexible as any IRC client. You can notify on everything, or on just mentions of your name, or mentions of your name and every new thread or nothing. No way to look for keywords or anything.

And of course there’s potential future cons:

  • google changes chat products more often than a bakery changes its day-old bread. Is this iteration of google chat going to stick around? Who knows? Could be a completely rebranded and different one next week. ;(

So, all in all, I will use google chat if I have to, but it’s going to be my least favorite chat method and I wish everyone would switch to something else. Matrix perhaps?

0.33U WiFi-Bridge Network-Rack

Posted by Jon Chiappetta on April 14, 2021 05:24 PM
So everyone has pictures and videos of their nice metal 1U network racks all wired up, but since there are no wires running through here, I had to come up with something a bit different. I have a small-sized, wood-based rack unit [approx. 0.33U :] that hosts the main wireless-bridge part of the network! It’s been running pretty well so far while WFH 🙂


Fedora Workstation 34 feature focus: Btrfs transparent compression

Posted by Fedora Magazine on April 14, 2021 08:00 AM

The release of Fedora 34 grows ever closer, and with that, some fun new features! A previous feature focus talked about some changes coming to GNOME version 40. This article is going to go a little further under the hood and talk about data compression and transparent compression in btrfs. A term like that may sound scary at first, but less technical users need not be wary. This change is simple to grasp, and will help many Workstation users in several key areas.

What is transparent compression exactly?

Transparent compression is complex, but at its core it is simple to understand: it makes files take up less space. It is somewhat like a compressed tar file or ZIP file. Transparent compression will dynamically optimize your file system’s bits and bytes into a smaller, reversible format. This has many benefits that will be discussed in more depth later on, however, at its core, it makes files smaller. This may leave most computer users with a question: “I can’t just read ZIP files. You need to decompress them. Am I going to need to constantly decompress things when I access them?”. That is where the “transparent” part of this whole concept comes in.

Transparent compression makes a file smaller, but the final version is indistinguishable from the original by the human viewer. If you have ever worked with Audio, Video, or Photography you have probably heard of the terms “lossless” and “lossy”. Think of transparent compression like a lossless compressed PNG file. You want the image to look exactly like the original. Small enough to be streamed over the web but still readable by a human. Transparent compression works similarly. Your file system will look and behave the same way as before (no ZIP files everywhere, no major speed reductions). Everything will look, feel, and behave the same. However, in the background it is taking up much less disk space. This is because BTRFS will dynamically compress and decompress your files for you. It’s “Transparent” because even with all this going on, you won’t notice the difference.

You can learn more about transparent compression at https://btrfs.wiki.kernel.org/index.php/Compression

Transparent compression sounds cool, but also too good to be true…

I would be lying if I said transparent compression doesn’t slow some things down. It adds extra CPU cycles to pretty much any I/O operation, and can affect performance in certain scenarios. However, Fedora is using the extremely efficient zstd:1 algorithm. Several tests show that relative to the other benefits, the downsides are negligible (as I mentioned in my explanation before). Better disk space usage is the greatest benefit. You may also see a reduction in write amplification (which can increase the lifespan of SSDs), and enhanced read/write performance.

Btrfs transparent compression is extremely performant, and chances are you won’t even notice a difference when it’s there.

I’m convinced! How do I get this working?

In fresh installations of Fedora 34 and its corresponding beta, it should be enabled by default. However, it is also straightforward to enable before and after an upgrade from Fedora 33. You can even enable it in Fedora 33, if you aren’t ready to upgrade just yet.

  1. (Optional) Backup any important data. The process itself is completely safe, but human error isn’t.
  2. To truly begin you will be editing your fstab. This file tells your computer what file systems exist where, and how they should be handled. You need to be cautious here, but only a few small changes will be made so don’t be intimidated. On an installation of Fedora 33 with the default Btrfs layout the /etc/fstab file will probably look something like this:
<strong>$ $EDITOR /etc/fstab</strong>
UUID=1234 /                       btrfs   subvol=root     0 0
UUID=1234 /boot                   ext4    defaults        1 2
UUID=1234         /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=1234 /home                   btrfs   subvol=home     0 0

NOTE: While this guide builds around the standard partition layout, you may be an advanced enough user to partition things yourself. If so, you are probably also advanced enough to extrapolate the info given here onto your existing system. However, comments on this article are always open for any questions.

Disregard the /boot and /boot/efi directories as they aren’t (currently) compressed. You will be adding the argument compress=zstd:1. This tells the computer that it should transparently compress any newly written files if they benefit from it. Add this option in the fourth column, which currently only contains the subvol option for both /home and /:

UUID=1234 /                       btrfs   subvol=root,compress=zstd:1     0 0
UUID=1234 /boot                   ext4    defaults        1 2
UUID=1234         /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=1234 /home                   btrfs   subvol=home,compress=zstd:1     0 0

Once complete, simply save and exit (on the default nano editor this is CTRL-X, SHIFT-Y, then ENTER).
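If you'd rather script the change, here is a hedged sketch (dummy UUIDs, working on a scratch copy; never run sed directly against /etc/fstab without a backup) showing the same edit:

```shell
# Write a demo fstab, then append compress=zstd:1 to the btrfs options column.
cat > /tmp/fstab.demo <<'EOF'
UUID=1234 /                       btrfs   subvol=root     0 0
UUID=1234 /home                   btrfs   subvol=home     0 0
EOF
# GNU sed (as on Fedora): edit in place, touching only btrfs lines.
sed -i '/btrfs/ s/subvol=\([a-z]*\)/subvol=\1,compress=zstd:1/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

After verifying the scratch copy looks right, the same substitution can be applied by hand (or carefully with sed) to the real file.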

3. Now that fstab has been edited, tell the computer to read it again. After this, it will make all the changes required:

$ sudo mount -o remount / /home/

Once you’ve done this, you officially have transparent compression enabled for all newly written files!

Recommended: Retroactively compress old files

Chances are you already have many files on your computer. While the previous configuration will compress all newly written files, those old files will not benefit. I recommend taking this next (but optional) step to receive the full benefits of transparent compression.

  1. (Optional) Clean out any data you don’t need (empty trash etc.). This will speed things up. However, it’s not required.
  2. Time to compress your data. One simple command can do this, but its form is dependent on your system. Fedora Workstation (and any other desktop spins using the DNF package manager) should use:
$ sudo btrfs filesystem defrag -czstd -rv / /home/

Fedora Silverblue users should use:

$ sudo btrfs filesystem defrag -czstd -rv / /var/home/ 

Silverblue users may take note of the immutability of some parts of the file system as described here as well as this Bugzilla entry.

NOTE: You may receive several warnings along the lines of “Cannot compress permission denied.” This is because some files, especially on Silverblue systems, cannot easily be modified by the user. This is a tiny subset of files, and they will most likely be compressed on their own, in time, as the system upgrades.

Compression can take anywhere from a few minutes to an hour depending on how much data you have. Luckily, since all new writes are compressed, you can continue working while this process completes. Just remember it may partially slow down your work at hand and/or the process itself depending on your hardware.

Once this command completes you are officially fully compressed!

How much file space is used, and how big are my files?

Due to the nature of transparent compression, utilities like du only report the exact, uncompressed file space usage, not the actual space the files take up on disk. The compsize utility is the best way to see how much space your files actually occupy on disk. An example of a compsize command is:

$ sudo compsize -x / /home/ 

This example provides exact information on how the two locations, / and /home/, are currently transparently compressed. If it is not installed, this utility is available in the Fedora Linux repository.


Transparent compression is a small but powerful change. It should benefit everyone from developers to sysadmins, from writers to artists, from hobbyists to gamers. It is one among many of the changes in Fedora 34. These changes will allow us to take further advantage of our hardware, and of the powerful Fedora Linux operating system. I have only scratched the surface here. I encourage those of you with interest to begin at the Fedora Project Wiki and Btrfs Wiki to learn more!

Locating D-Bus Resource Leaks

Posted by David Rheinsberg on April 14, 2021 12:00 AM

With dbus-broker we have introduced the resource-accounting of bus1 into the D-Bus world. We believe it greatly improves and strengthens the resource distribution of the D-Bus messages bus, and we have already found a handful of resource leaks that way. However, it can be a daunting task to solve resource exhaustion bugs, so I decided to describe the steps we took to resolve a recent resource-leak in the openQA package.

A few days ago, Adam Williamson approached me with a bug in the openQA package, where he saw the log stream filled with messages like:

dbus-broker[<pid>]: Peer :1.<id> is being disconnected as it does not have the resources to receive a reply or unicast signal it expects.
dbus-broker[<pid>]: UID <uid> exceeded its 'bytes' quota on UID <uid>.

This is the typical sign of a resource exhaustion in dbus-broker. When the message broker generates or forwards messages to an individual client, it queues them as outgoing messages and pushes them into the unix socket of the client. If this client does not dequeue messages, this queue might fill up. If a limit is reached, something needs to be done. Since D-Bus is not a lossy protocol, dropping messages is not an option. Instead, the message broker will either refuse new incoming operations or disconnect a client. All resources are accounted on UIDs, meaning multiple clients of the same user share the same resource limits.
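As a toy illustration of this per-UID accounting (illustrative Python only, not dbus-broker's actual implementation; the class name and quota value are invented for the example):

```python
# Toy model of per-UID resource accounting: every connection owned by a
# UID draws from one shared byte quota, as described above.
class Accounting:
    def __init__(self, quota):
        self.quota = quota
        self.used = {}  # uid -> bytes currently queued

    def charge(self, uid, nbytes):
        """Return False when the UID's shared quota would be exceeded,
        i.e. when the broker must refuse the operation or disconnect."""
        if self.used.get(uid, 0) + nbytes > self.quota:
            return False
        self.used[uid] = self.used.get(uid, 0) + nbytes
        return True

acct = Accounting(quota=100)
print(acct.charge(991, 60))   # True: first client of uid 991
print(acct.charge(991, 60))   # False: a second client shares the same quota
print(acct.charge(992, 60))   # True: a different uid has its own budget
```

The point of the model is the shared key: two connections of the same user exhaust one budget, which is exactly the situation the log message above reports.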

Depending on what message is sent, it is accounted either on the receiver or the sender. Furthermore, some messages can be refused by the broker, others cannot. The exact rules are described in the wiki.

In the case of openQA, the first step was to query the accounting information of the running message broker:

sudo dbus-send --system --dest=org.freedesktop.DBus --type=method_call --print-reply /org/freedesktop/DBus org.freedesktop.DBus.Debug.Stats.GetStats

(Replace --system with --session to query the session or user bus.)

While preferably this query is performed when the resource exhaustion happens, it will often yield useful information under normal operation as well. Resources are often consumed slowly, so the accumulation will still show up.

The output of this query shows a list of all D-Bus clients with their accounting information. Furthermore, it lists all UIDs that have clients connected to this message bus, again with all accounting information. The challenge is to find suspicious entries in this huge data dump. The most promising approach so far has been to search for "OutgoingBytes" and check for big numbers. This shows the number of bytes queued in the message broker for a particular client. It is usually 0, since the kernel queues are big enough to hold most normal messages. Even if it is not 0, it is usually just a couple of KiB.
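This search can be automated. The following sketch (hypothetical helper, not part of dbus-broker; the sample string is abbreviated) scans a saved GetStats dump in the text format that dbus-send prints and lists the "OutgoingBytes" values, largest first, so clients with big queues stand out:

```python
import re

# Abbreviated sample of `dbus-send ... GetStats` output, in the text
# format shown in this post.
sample = """
        dict entry(
            string "OutgoingBytes"
            uint32 62173024
        dict entry(
            string "OutgoingBytes"
            uint32 0
"""

def outgoing_bytes(dump):
    # Each entry is a 'string "OutgoingBytes"' line followed by a
    # 'uint32 <n>' line (sometimes prefixed with 'variant').
    pattern = re.compile(r'"OutgoingBytes"\s*\n\s*(?:variant\s+)?uint32 (\d+)')
    return sorted((int(n) for n in pattern.findall(dump)), reverse=True)

print(outgoing_bytes(sample))  # [62173024, 0]
```

In practice you would feed it the full dump, e.g. `dbus-send ... GetStats > stats.txt` and then read that file instead of the inline sample.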

In this case, we checked for "OutgoingBytes", and found:

dict entry(
    string "OutgoingBytes"
    uint32 62173024

62 MiB of messages are waiting to be delivered to that client. Expanding the dump to show the surrounding block, we see:

struct {
    string ":1.211366"
    array [
        dict entry(
            string "UnixUserID"
            variant                            uint32 991
        dict entry(
            string "ProcessID"
            variant                            uint32 674968

    array [
        dict entry(
            string "Matches"
            uint32 1
        dict entry(
            string "OutgoingBytes"
            uint32 62173024

This tells us the PID 674968 of user 991 has roughly 62 MiB of data queued, and it is likely not dequeuing the data. Furthermore, we see it has 1 message filter (D-Bus match rule) installed. D-Bus message filters will cause matching D-Bus signals to be delivered to a client. So a likely problem is that this client keeps receiving signals, but does not dispatch its client socket.

We dug further, and the data dump includes more such clients. Matching the PIDs back to processes via ps auxf, we found that every one of those suspicious entries was /usr/bin/isotovideo: backend. The code of this process is part of the os-autoinst repository, in this case qemu.pm. A quick look showed only a single use of D-Bus. At first glance, this looks alright. It creates a system-bus connection via the Net::DBus perl module, dispatches a method-call, and returns the result. However, we know this process has a match-rule installed (assuming the dbus-broker logs are correct), so we checked further and found that the Net::DBus module always installs a match-rule on NameOwnerChanged. Furthermore, it caches the system-bus connection in a global variable, sharing it across users in the same code-base.

Long story short, the os-autoinst qemu module created a D-Bus connection which was idle in the background and never dispatched by any code. However, the connection had a match-rule installed, and the message broker kept sending matching signals to that connection. This data accumulated and eventually exceeded the resource quota of that client. A workaround was quickly provided, and it will hopefully resolve this problem.

Hopefully, this short recap will be helpful for debugging other, similar situations. You are always welcome to message us at bus1-devel@googlegroups.com or on the dbus-broker GitHub issue tracker if you need help.

Cockpit 242

Posted by Cockpit Project on April 14, 2021 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit version 242.

Groundwork for Snowpack support

Snowpack is a web project build system and an alternative to webpack. Previously, when Snowpack was used to build Cockpit pages that use an npm module with an @ in its name, the page could not be loaded. This has been fixed. As Cockpit uses @patternfly in this style, this was an important first step.

Please be aware that there is not a fully working and supported example of a Snowpack-built Cockpit project at this time, just some initial experimentation.

Machines: Split out to separate project

The Machines page is easily the largest Cockpit component. It has been split out to the standalone cockpit-machines project, making it easier and faster to develop. This same split has also happened in Fedora and Debian packaging.

Cockpit website refresh

The Cockpit website has been restyled and refreshed. This update includes:

  • a fresh style
  • completely new content on the front page
  • updated screenshots
  • a brand new docs page

Cockpit ecosystem updates

Today, we’re also releasing new versions of cockpit-podman, cockpit-ostree, and the newly-independent cockpit-machines.

Try it out

Cockpit 242 is available now.

ManageIQ em Podman

Posted by Daniel Lara on April 13, 2021 11:58 PM

A quick tip for getting ManageIQ up and running with Podman.

Pull the image:

$ sudo podman pull docker.io/manageiq/manageiq:latest

Now start the container:

$ sudo podman run -d -p 8443:443 --name manageiq docker.io/manageiq/manageiq:latest

That's it — now just access it on port 8443.

Community Blog monthly update: March 2021

Posted by Fedora Community Blog on April 13, 2021 08:00 AM
Community Blog update

This is the latest in our monthly series summarizing the past month on the Community Blog. Please leave a comment below to let me know what you think.


In March, we published 21 posts. The site had 5,520 visits from 3,652 unique viewers. 888 visits came from search engines, 450 from the WordPress Android app, 386 from Twitter, and 208 from Reddit.

The most read post last month was Fedora is a community; Fedora Linux is our OS.


Last month, contributors earned the following badges:

Your content here!

The Community Blog is the place to publish community-facing updates on what you’re working on in Fedora. The process is easy, so submit early and submit often.

The post Community Blog monthly update: March 2021 appeared first on Fedora Community Blog.

2021 OSI Board of Directors statement of intent

Posted by Justin W. Flory on April 13, 2021 08:00 AM

The post 2021 OSI Board of Directors statement of intent appeared first on Justin W. Flory's blog.

Justin W. Flory's blog - Free Software, music, travel, and life reflections

This first appeared on the Open Source Initiative Wiki. In light of the election update this year, I am republishing my statement of intent on my personal blog.

No culture can live if it attempts to be exclusive.

Mahatma Gandhi

I believe in the value of upholding the Open Source Definition as a mature and dependable legal framework while recognizing the OSI needs to work better with works that are not Open Source. My ambition as a candidate is to support existing work to enable a more responsive, more agile Open Source Initiative.

Twitter: @jwf_foss

Why should you vote for me?

I bring a public sector perspective to a conversation where it seems missing, despite the dependent relationship of the public sector to Free and Open Source works. In my work, I provide Open Source mentorship and coaching to humanitarian-driven start-ups hailing from 57 countries. I am an excellent communicator, I understand a subset of challenges faced by Open Source communities, and I have a collaborative nature.

I am also a millennial. The GPL was first drafted before I was born. My lived experience with Free Software and Open Source gives me a vantage point not well-represented in Open Source legal and policy work. My personal experience with Free and Open Source software is impacted by years of untangling my own digital life from technology decisions made for me, not by me. With that in mind, I realize not everyone can afford to be a Free Software purist, but we can still uphold the values of Open Source even if we do not use it exclusively.

Who am I?

I work as an Open Source Technical Advisor at UNICEF in the Office of Innovation. I manage and support an Open Source Mentorship programme for start-up investments and teams building Open Source products and communities from more than 57 countries. I also provide Open Source support to other UNICEF colleagues and recently coordinated UNICEF Innovation’s participation in the [on-going, at publication time] Outreachy round.

Outside of work, I have contributed to the Fedora Project for almost six years. I am soon ending a year-long term as the Diversity & Inclusion Advisor to the Fedora Council. I am a founding member of the Fedora Community Operations and Diversity & Inclusion teams. 

What are my qualifications?

I first contributed to Open Source as a teenager. I was a community moderator and staff member of the open source SpigotMC project. There, I handled user reports for a community forum with over 400,000 registered members. It is one of the most distinctive communities I have worked in, as the Spigot community is a population of hundreds of thousands with an age demographic concentrated between 13 and 25.

Additionally, I am on the advisory board of Open @ RIT, the Open Source Programs Office for the Rochester Institute of Technology in Rochester, New York. This enables me to work more closely with academia, which has a growing interest in the growing ecosystem of academic Open Source Program Offices.

Finally, I regularly work with teams building Open Source solutions in support of children and UNICEF’s core work. I have lived experience of coaching teams on Open Source best practices across six continents. I have seen where Open Source worked well and where it didn’t. I bring this background and perspective into the work I would do as a member and representative elected by the Open Source Initiative constituency.

In summary, my lived experiences in Open Source, my connection to academic Open Source, and the humanitarian focus of my work make me a uniquely-qualified candidate for the OSI Board.

Interview responses

Luis Villa published four interview questions for OSI Board candidates on Opensource.com. I originally tweeted my response, but I copied it here for wider visibility too.

Q1: What should OSI do…

“…about the tens of millions of people who regularly collaborate to build software online (often calling that activity, colloquially, open source) but have literally no idea what OSI is or what it does?”

I am excited at the opportunity to contribute here. The UNICEF Office of Innovation (and my own Open Source Mentorship programme) rely on the Open Source Definition to guide our international Open Source work, even if we are still learning how to do it best. But without the OSD as a guiding light, our work is much harder. My team is well-positioned to be an advocate and voice of support for the Open Source Definition in policy environments where Open Source is not. This relates to on-going Giga connectivity work to connect schools worldwide to the Internet for equitable education opportunities for children.

So to directly answer the question, we have a conversation. Avoid anger when others choose software that is not Open Source. Avoid exasperated frustration when people pick licenses that are not Open Source. But the first step is always to teach & educate on the stories, values and history of the Free/Open Source community.

Q2: If an Ethical Software Initiative sprung up tomorrow, what should OSI’s relationship to it be?

The good folks behind the Ethical Source movement have done so. The OSI needs to be open to collaborate and engage with other orgs who steward legal works that do not adhere to the OSD.

I want to invite the Ethical Source folks into the conversation. How can we better partner together? If elected, I would commit myself to organizing a public town hall or community discussion with the Ethical Source folks. Coraline Ada Ehmke, Tobie Langel, and many other folks are doing great work in this space. So, let’s collaborate and work together.

Q3: When a license decision involves a topic…

“…on which the Open Source Definition is vague or otherwise unhelpful, what should the board do?”

The OSI needs to improve at saying what it is not. We are more clear on what the OSD is than we were even last year. As a candidate, I don’t have crazy ideas for the Definition. But there are things that are not Open Source. The world is changing.

We need to adapt. We must be nimble in changing with the world, or the values and motives of the original Free/Open Source movement are at risk of volatility. As a candidate, if presented with an unclear situation, I would take one of two options:

  1. If the proposed work stands against a principle of the OSD, it should not be approved as such, or the OSD becomes meaningless; OR
  2. Take an interpretive, “living document” view of the OSD for new copyleft innovations where the OSD is not clear or ambiguous.

For context, I am a copyleft believer. Promoting and advocating for the stability and integrity of Open Source licenses is a fundamental part of my interest as a candidate for the Board.

Q4: What role should the new staff play in license evaluation (or the OSD more generally)?

I don’t have an answer to this one. Foundations are mostly new to me. I would defer to expertise and listen to what others with more years have to say. I want to better understand the capacity and ambition of the OSI to take on new work with a steady staff.

I am a collaborator by nature and a team player. So, I want to enable the work for the OSI to be more agile and responsive in what I see as core, critical work.

That’s it. If you have specific questions, you are welcome to get in touch with me on Twitter or add a comment below.

Workshop on writing Python modules in Rust April 2021

Posted by Kushal Das on April 13, 2021 06:05 AM

I am conducting 2 repeat sessions for a workshop on "Writing Python modules in Rust".

The first session is on 16th April, 1500 UTC onwards, and the repeat session will be on 18th April 0900 UTC. More details can be found in this issue.

You don't have to have any prior Rust knowledge. I will be providing working code, and we will go very slowly to have a working Python module with useful functions in it.

If you are planning to attend or know anyone else who may want to join, then please point them to the issue link.

Ansible and Fedora/EPEL packaging status

Posted by Kevin Fenzi on April 12, 2021 10:35 PM

Just thought I would post a current status on ansible packaging in Fedora/EPEL.

ansible 2.9.x (aka, “ansible classic”) continues to be available in EPEL7/EPEL8 and all supported Fedora releases. Odds are most people are just still using this. It does still get security and some small bugfixes, but no big changes or fixes.

ansible 3.x (aka, “ansible-base 2.10.x + community collections”). I had packaged ansible-base in rawhide/f34, but due to the naming change and lack of time, I have dropped it. ansible-base is now retired in Fedora and will likely never land there.

ansible 4.x (aka, “ansible-core 2.11 + community collections”). I have renamed ansible-base to ansible-core in rawhide. Unfortunately, a dep was added on python-packaging, so there are 6 or so packages left to finish packaging up and getting reviewed. The collections are a bit all over the place as people have been submitting them and getting them in. You can find the ansible collections via ‘dnf list ansible-collection\*’. After I get ansible-core in shape, I am going to look at packaging up at least the rest of the collections for 4.x. At that point we could look at dropping ansible-classic (or moving it to ‘ansible-classic’ and shipping ansible-core + community collections as ‘ansible’ 4.x). Note that collections work with both ansible-classic and ansible 4.x.

ansible-core 2.12 (I don’t know if this will be in ansible 5.x, but I guess so?) will REQUIRE python 3.8 or later. So, EPEL7 will probably never move to this. EPEL8 may, but it might be tricky to use the python3.8 module for this.

So, progress is being made, albeit slowly. 🙂 I’ll post again soon, hopefully to announce that ansible-core is usable/testable in rawhide.

What is Freedom?

Posted by Justin W. Flory on April 12, 2021 07:57 PM

The post What is Freedom? appeared first on Justin W. Flory's blog.

Justin W. Flory's blog - Free Software, music, travel, and life reflections

When I first saw the letter calling for the resignations of Richard Stallman and the FSF Board of Directors, when it had merely five signatures, I knew I had to sign. Not because I knew it would be the popular thing to do, but because it was what was true in my heart. Only in a sense of deep empathy could I understand the reasons why it had finally come to this. I signed the letter because, as much as I have personally benefited indirectly from the legacy of Mr. Stallman in my life, I feel his continued presence at the forefront of the movement is harmful and damaging.

I don’t say that casually either. I have involuntarily found Open Source as my calling. Or my people. I contribute to Open Source because I love to collaborate and work together with other people. This challenges me. It humbles me in a way that I know I can always learn something new from someone else. For this, Open Source and Free Software have enriched my life. They have also given me, again involuntarily, an odd but productive way of coping with my own mental health issues, anxiety, and depression.

So how do I make sense of the emotions and feelings I have now? How do I untangle this complicated web of events and reactions by other people? To ignore it doesn’t seem possible. If I remove emotion, I am left with a purely rational motive to involve myself in this contemporary issue. My work, profession, and career goals are directly affected by however this discussion goes. There is no way out for me. It’s my job, so I have to care. But if you add emotions back in, to stand still and remain idle is heartbreaking. To do nothing is to commit to defeat. Resignation. The darkness.

Yet what is there to do? The only thing Stallman ever directly gave to me in life was an email explaining elegantly how there was nothing he could do for the Minecraft GPL community fiasco. At a time when I was so personally lost as I saw a community I love tear itself apart, he stood by idly as the so-called steward of these licenses that I was just too naïve to believe in. That experience to me now is amplified in the light of the much more egregious things he is accused of.

So, the Free Software Foundation welcomes Richard Matthew Stallman back to its board. Wonderful. Congratulations Mr. Stallman. I am going to pause for a moment of sadness and hurt as I contemplate the impact of this moment on our fragile movement, which has much bigger enemies today than it has in its 40 year legacy. But then…

I will move on. Because we have to. The only way is forward.

Policy proposal: New Code of Conduct

Posted by Fedora Community Blog on April 12, 2021 07:04 PM
Fedora community elections

The Fedora Council has been working with the Fedora Community Action and Impact Coordinator to update and improve Fedora’s Code of Conduct. This work began with Brian Exelbierd during his tenure as FCAIC and was then picked up by Marie Nordin at the start of 2020. The new draft of the Code of Conduct is more comprehensive than our current Code of Conduct and will be accompanied by a set of Clarifying Statements. The Clarifying Statements are a work in progress.

2020 was a tough year for Fedora (and the world) and, as noted in the recent Code of Conduct Report, there were more than twice as many incidents as the year prior. Based on this increase, the Council has agreed to expand management of incident reports to a Code of Conduct Committee, which is still in development.

The updated Code of Conduct will go through the policy change policy process and will stay open for two weeks for community comment before moving to a Council vote. Please discuss this on the Discussion thread linked below. Significant updates on the Clarifying Statements and Code of Conduct Committee will be communicated via the Community Blog. 

The post Policy proposal: New Code of Conduct appeared first on Fedora Community Blog.

Custom RPMS and meta-rpm

Posted by Adam Young on April 12, 2021 05:32 PM

We are trying to enable the graphics hardware subsystem on the Raspberry Pi 4. This driver lives in mesa. The current Centos mesa.spec file does not enable the V3D Driver we need. Here are the steps I am going through to build the driver and integrate it into the meta-rpm build.

mesa.spec changes

The mesa.spec file has conditionals that enable different drivers for different situations. The Raspberry Pi 4 has an aarch64 processor, and thus the V3D drivers should only be built for that chipset. That code looks like this:

%ifarch %{arm} aarch64
%define with_xa        1
%define with_v3d       1
%endif

Once that flag is enabled, we need to use it in the compile step. To do that, add it to the end of the gallium drivers:

%if 0%{?with_hardware}
  -Dgallium-drivers=swrast%{?with_iris:,iris},virgl,nouveau%{?with_vmware:,svga},radeonsi,r600%{?with_freedreno:,freedreno}%{?with_etnaviv:,etnaviv}%{?with_tegra:,tegra}%{?with_vc4:,vc4}%{?with_kmsro:,kmsro}%{?with_v3d:,v3d} \

If we only go this far, the rpmbuild process will error out, as we are now generating an artifact that is not accounted for in any of the packages.
Our current thinking is that this driver should be included in the dri-drivers RPM. To include it there, add it to the %files dri-drivers section

%if 0%{?with_vmware}
# (existing vmware/svga entries)
%endif
%if 0%{?with_v3d}
# new: v3d driver (exact file name may vary by mesa version)
%{_libdir}/dri/v3d_dri.so
%endif

To ensure I know the difference between the RPM I am building and the source, I modified the version string to have tyt in it, like this:

Release:        2%{?rctag:.%{rctag}}.tyt

With these changes, we can build the rpms using the rpmbuild tool. To get all of the dependencies, use the yum-builddep tool as root.

sudo yum-builddep /home/ayoung/rpmbuild/SPECS/mesa.spec

And, of course, build the rpms with:

 rpmbuild -ba $HOME/rpmbuild/SPECS/mesa.spec

This will produce a number of RPMs in the $HOME/rpmbuild/RPMS/aarch64 subdirectory.

Note that the build process, when done this way, makes use of the RPMs installed on the base system. Since the Mesa subsystem has some changes, I found it necessary to install certain RPMs from CentOS Stream directly, or rpmbuild complained of incompatible versions.
Here is what I installed last time:

yum install http://mirror.centos.org/centos/8-stream/AppStream/aarch64/os/Packages/libglvnd-1.3.2-1.el8.aarch64.rpm http://mirror.centos.org/centos/8-stream/AppStream/aarch64/os/Packages/libglvnd-devel-1.3.2-1.el8.aarch64.rpm http://mirror.centos.org/centos/8-stream/AppStream/aarch64/os/Packages/libglvnd-core-devel-1.3.2-1.el8.aarch64.rpm http://mirror.centos.org/centos/8-stream/AppStream/aarch64/os/Packages/libglvnd-egl-1.3.2-1.el8.aarch64.rpm http://mirror.centos.org/centos/8-stream/AppStream/aarch64/os/Packages/libglvnd-gles-1.3.2-1.el8.aarch64.rpm http://mirror.centos.org/centos/8-stream/AppStream/aarch64/os/Packages/libglvnd-glx-1.3.2-1.el8.aarch64.rpm http://mirror.centos.org/centos/8-stream/AppStream/aarch64/os/Packages/libglvnd-opengl-1.3.2-1.el8.aarch64.rpm

repository generation

Once the rpmbuild process is complete, I need to make the RPMs available for building. I could do this in place on the system, but then other people could not test them. So, instead, I am hosting a small repo on Fedora People. This is not intended to be a long-lived repo, even in the manner of a COPR, but rather an interim step to sort out the RPM changes required.

Since my build machine does not have my private key on it, I need to pull the RPMS onto my desktop. I could build the repo either before or after I do this, but I tend to do it after. This allows me to keep the repo artifacts out of the rpmbuild tree on my build host. If I have other RPMS in my rpmbuild/RPMS/aarch64 subdir, I can leave them there without having them pollute the repo.

scp* /home/ayoung/Documents/pi
scp /home/ayoung/Documents/pi
cd /home/ayoung/Documents/pi
createrepo .
rsync -a /home/ayoung/Documents/pi admiyo@admiyo.fedorapeople.org:public_html/

The one step this leaves out is where I manually go to the fedorapeople machine and remove the old content. It is not strictly necessary to be this manual, but I like to keep a hand in that process for now.

The repo is then exposed at https://admiyo.fedorapeople.org/pi/.

Here is a sample .repo file that consumes it;

[pi]
name=fedorapeople hosted repo of R Pi specific files
baseurl=https://admiyo.fedorapeople.org/pi/
enabled=1
gpgcheck=0

That file can be used to test the repo, and also will be used in the next step.

recipe update

The recipes in the meta-rpm project are automatically generated from dnf/yum repository metadata. In order to use the repository posted above, the recipes need to be regenerated using the .repo file from the previous section.

$ git status
On branch centos-stream
Your branch is up to date with 'origin/centos-stream'.

Untracked files:
  (use "git add <file>..." to include in what will be committed)

With this file in place, run make in the top level directory of your meta-rpm git repository.

To confirm that the generated recipes have the right repository, look in the file meta-rpm/meta-rpm/recipes-rpms/generated/mesa.bb and check that the URLs look like this:

https://admiyo.fedorapeople.org/pi/mesa-vulkan-drivers-debuginfo-20.3.3-2.tyt.aarch64.rpm 72fc207edfbcdbb5c32714342bd073500402082f41caf1b2546b229fa167a42e

As of now, running bitbake rpmbased-desktop-image on the centos-stream branch will fail, but I don’t think the failure is due to these changes. I’ll update once I know what is going on, but for now, I want to post this to share the work I’ve done.

Fedora CoreOS K3S AWX

Posted by Daniel Lara on April 12, 2021 10:37 AM

A quick tip for getting AWX up on Fedora CoreOS with K3s.

First, install the k3s-selinux package via rpm-ostree:

# rpm-ostree install https://github.com/k3s-io/k3s-selinux/releases/download/v0.3.stable.0/k3s-selinux-0.3-0.el8.noarch.rpm


Then reboot:

#  systemctl reboot

Now install k3s:

# export K3S_KUBECONFIG_MODE="644"
# export INSTALL_K3S_EXEC="--flannel-backend=host-gw"

# curl -sfL https://get.k3s.io | sh -

Create the directory for the Persistent Volume:

# mkdir -p /var/k8s-pv/awx-postgres

# cat <<EOF >> awx-postgres-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-awx-postgres-0
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/k8s-pv/awx-postgres"
EOF

Now create the AWX resource file:

# echo "
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  tower_ingress_type: Ingress
" > awx.yml

Now apply the manifests:

# kubectl apply -f awx-postgres-pv.yml
# kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/devel/deploy/awx-operator.yaml
# kubectl apply -f awx.yml

Now just follow along with the logs:

kubectl logs -f awx-operator-........

When it finishes, check which port AWX is running on (in this case, port 30922):

 # kubectl get svc 

And retrieve the admin password:

# kubectl get secret awx-admin-password -o jsonpath='{.data.password}' | base64 --decode

Now just access it in your browser on that port.


Scheduling tasks with cron

Posted by Fedora Magazine on April 12, 2021 08:48 AM

Cron is a scheduling daemon that executes tasks at specified intervals. These tasks are called cron jobs and are mostly used to automate system maintenance or administration tasks. For example, you could set a cron job to automate repetitive tasks such as backing up databases or other data, updating the system with the latest security patches, checking disk space usage, sending emails, and so on. Cron jobs can be scheduled to run by the minute, hour, day of the month, month, day of the week, or any combination of these.

Some advantages of cron

These are a few of the advantages of using cron jobs:

  • You have much more control over when your job runs: you can control the minute, the hour, the day, and so on, of its execution.
  • It eliminates the need to write looping and scheduling logic for the task, and you can shut the job off when it no longer needs to run.
  • Jobs do not occupy memory when they are not executing, so you save on memory allocation.
  • If a job fails to execute and exits for some reason, it will run again at the next scheduled time.

Installing the cron daemon

Luckily, Fedora Linux comes pre-configured to run important system tasks to keep the system updated. There are several utilities that can run scheduled tasks, such as cron, anacron, at, and batch. This article will focus on the cron utility only. Cron is installed with the cronie package, which also provides the cron services.

To determine if the package is already present or not, use the rpm command:

$ rpm -q cronie

If the cronie package is installed, this returns the full name of the cronie package. If the package is not present on your system, the command reports that it is not installed.
To install it, type this:

$ sudo dnf install cronie

Running the cron daemon

A cron job is executed by the crond service based on information from a configuration file. Before adding a job to the configuration file, however, it is necessary to start the crond service (or, in some cases, install it). What is crond? Crond is the cron daemon; the trailing d follows the common naming convention for daemons. To determine whether the crond service is running, type the following command:

$ systemctl status crond.service
● crond.service - Command Scheduler
      Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor pre>
      Active: active (running) since Sat 2021-03-20 14:12:35 PDT; 1 day 21h ago
    Main PID: 1110 (crond)

If you do not see something similar including the line “Active: active (running) since…”, you will have to start the crond daemon. To run the crond service in the current session, enter the following command:

$ systemctl start crond.service

To configure the service to start automatically at boot time, type the following:

$ systemctl enable crond.service

If, for some reason, you wish to stop the crond service from running, use the stop command as follows:

$ systemctl stop crond.service

To restart it, simply use the restart command:

$ systemctl restart crond.service

Defining a cron job

The cron configuration

Here is an example of the configuration details for a cron job. This defines a simple cron job to pull the latest changes of a git master branch into a cloned repository:

*/59 * * * * username cd /home/username/project/design && git pull origin master

There are two main parts:

  • The first part is “*/59 * * * *”. This is the timer. Note that */59 in the minute field matches minutes 0 and 59 of each hour, so the job runs roughly every hour rather than strictly every 59 minutes.
  • The rest of the line is the command as it would run from the command line.
    The command itself in this example has three parts:
    • The job will run as the user “username”
    • It will change to the directory
    • The git command runs to pull the latest changes in the master branch.

Timing syntax

The timing information is the first part of the cron job string, as mentioned above. This determines how often and when the cron job is going to run. It consists of 5 parts in this order:

  • minute
  • hour
  • day of the month
  • month
  • day of the week

Here is a more graphic way to explain the syntax:

 .---------------- minute (0 - 59)
 |  .------------- hour (0 - 23)
 |  |  .---------- day of month (1 - 31)
 |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr …
 |  |  |  |  .---- day of week (0-6) (Sunday=0 or 7)
 |  |  |  |  |            OR sun,mon,tue,wed,thu,fri,sat
 |  |  |  |  |               
 *  *  *  *  *  user-name  command-to-be-executed 

Use of the asterisk

An asterisk (*) may be used in place of a number to represent all possible values for that position. For example, an asterisk in the minute position makes the job run every minute. The following examples may help you better understand the syntax.

This cron job will run every minute, all the time:

* * * * * [command] 

A slash (/) introduces a step value. The following example will run 12 times per hour, i.e., every 5 minutes:

*/5 * * * * [command]
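If it helps to see exactly which minutes a step value matches, you can expand it with seq (purely illustrative; cron does the equivalent internally):

```shell
# */5 in the minute field matches every 5th minute of the hour.
seq 0 5 59 | tr '\n' ' '; echo
# prints: 0 5 10 15 20 25 30 35 40 45 50 55
```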

The next example will run once a month, on the second day of the month at midnight (e.g. January 2nd 12:00am, February 2nd 12:00am, etc.):

0 0 2 * * [command]
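A few more schedule patterns may be useful as reference. These are illustrative crontab entries using the same [command] placeholder as above:

```shell
# At 03:00 every day:
0 3 * * * [command]

# At 22:30 every Friday:
30 22 * * fri [command]

# Every 15 minutes from 09:00 through 17:45, Monday to Friday:
*/15 9-17 * * mon-fri [command]

# At midnight on the 1st and 15th of each month:
0 0 1,15 * * [command]
```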

Using crontab to create a cron job

The cron daemon runs in the background and constantly checks the /etc/crontab file and the /etc/cron.*/ and /var/spool/cron/ directories. Each user has a unique crontab file in /var/spool/cron/ .

These cron files are not supposed to be edited directly. The crontab command is the method you use to create, edit, install, uninstall, and list cron jobs.

The same crontab command is used for creating and editing cron jobs. And what’s even cooler is that you don’t need to restart cron after creating new files or editing existing ones.

$ crontab -e

This opens your existing crontab file, or creates one if necessary. The vi editor opens by default when calling crontab -e. Note: to edit the crontab file using the nano editor instead, set the EDITOR environment variable (for example, EDITOR=nano crontab -e).

List all your cron jobs using the -l option, and specify a user with the -u option, if desired.

$ crontab -l
$ crontab -u username -l

Remove or erase all your cron jobs using the following command:

$ crontab -r

To remove jobs for a specific user you must run the following command as the root user:

$ crontab -r -u username

Thank you for reading. Cron jobs may seem like a tool just for system admins, but they are actually relevant to many kinds of web applications and user tasks.


Fedora Linux documentation for Automated Tasks

Next Open NeuroFedora meeting: 12 April 1300 UTC

Posted by The NeuroFedora Blog on April 12, 2021 08:41 AM
Photo by William White on Unsplash

Photo by William White on Unsplash.

Please join us at the next regular Open NeuroFedora team meeting on Monday 12 April at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is public and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 today'
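If the conversion is hard to picture, you can also pin the target timezone explicitly; the zone below is just an example:

```shell
# Convert 13:00 UTC on the meeting date to US Eastern time (EDT, UTC-4).
TZ=America/New_York date --date='TZ="UTC" 2021-04-12 13:00' '+%H:%M'
# prints: 09:00
```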

The meeting will be chaired by @bt0dotninja. The agenda for the meeting is:

We hope to see you there!

Episode 266 – The future of security scanning with Debricked

Posted by Josh Bressers on April 12, 2021 12:01 AM

Josh and Kurt talk to Emil Wåreus from Debricked about the future of security scanners. Debricked is doing some incredibly cool things to avoid relying on humans for vulnerability identification and cataloging. Learn what the future of security scanning is going to look like.

Listen to the episode: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_266_The_future_of_security_scanning_with_Debricked.mp3

Show Notes

FreeIPA and Foreman Proxy development setup

Posted by Lukas "lzap" Zapletal on April 12, 2021 12:00 AM


I have been avoiding this for something like ten years, but today is the day I set up FreeIPA with Foreman Proxy for development and testing purposes. Here are my notes.

The goal is to deploy a libvirt VM with an IPA server and a Foreman Proxy integrated with it. The domain will be ipa.lan and the host will be named ipa.ipa.lan. This is NOT how you should deploy a production Foreman FreeIPA integration! For that, read our official documentation and use foreman-installer instead.

We need a VM, let’s go with CentOS 8.

virt-builder centos-8.2 --output /var/lib/libvirt/images/ipa.img --root-password password:redhat --hostname ipa.ipa.lan
virt-install --name ipa.ipa.lan --memory 2048 --vcpus 2 --disk /var/lib/libvirt/images/ipa.img --import --os-variant rhel8.3 --update
virsh console ipa.ipa.lan

We need a static IP for this VM:

nmcli con modify enp1s0 \
  ip4 \
  gw4 \
nmcli con down enp1s0
nmcli con up enp1s0

Make sure the hostname is correct:

hostnamectl set-hostname ipa.ipa.lan

Make sure to fix the hosts file; the FQDN must resolve to the IP address, not localhost:

grep ipa /etc/hosts
  ipa.ipa.lan ipa

The installation is very smooth; expect just a couple of questions, such as the administrator password and the actual domain:

dnf module enable idm:DL1
dnf module install idm:DL1/dns
ipa-server-install --setup-dns --auto-forwarder --auto-reverse

Ensure firewall ports are enabled:

firewall-cmd --add-service=http --add-service=https --add-service=ldap --add-service=ldaps \
    --add-service=ntp --add-service=kerberos --add-service=dns --add-port=8000/tcp --permanent

Next up, install Foreman Proxy:

dnf -y install https://yum.theforeman.org/releases/2.4/el8/x86_64/foreman-release.rpm
dnf -y install foreman-proxy

Create the foreman user with the minimum permissions required to manage Foreman hosts, then create and configure the keytab file. When asked for the admin password, use the one chosen when installing the IPA server:

foreman-prepare-realm admin realm-smart-proxy
mv freeipa.keytab /etc/foreman-proxy/freeipa.keytab
chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab

Configure and start the Foreman Proxy service. This is for development purposes, so let’s only use HTTP. You may also want to add some trusted_hosts entries to allow access from Foreman:

cat /etc/foreman-proxy/settings.yml
:settings_directory: /etc/foreman-proxy/settings.d
:http_port: 8000
:log_level: DEBUG

Enable Realm module:

cat /etc/foreman-proxy/settings.d/realm.yml
:enabled: true
:use_provider: realm_freeipa

And enable FreeIPA plugin:

cat /etc/foreman-proxy/settings.d/realm_freeipa.yml
:keytab_path: /etc/foreman-proxy/freeipa.keytab
:principal: realm-smart-proxy@IPA.LAN
:ipa_config: /etc/ipa/default.conf
:remove_dns: true
:verify_ca: true

And start it up:

systemctl enable --now foreman-proxy

The Realm feature should now be available:

curl http://ipa.ipa.lan:8000/features

To show a host entry in IPA via CLI:

kinit admin
ipa host-show rex-dzurnak.ipa.lan
  Host name: rex-dzurnak.ipa.lan
  Class: ipa-debian-10
  Password: True
  Keytab: False
  Managed by: rex-dzurnak.ipa.lan

Add the foreman proxy into Foreman and start developing or testing. Have fun!

Make your sentences poorer, get out of the three comma club

Posted by Joe Brockmeier on April 11, 2021 05:02 PM

There’s a running gag in the show Silicon Valley about a character obsessed with being in the “three comma” club. Being a billionaire, in other...

The post Make your sentences poorer, get out of the three comma club appeared first on Dissociated Press.

Friday’s Fedora Facts: 2021-14

Posted by Fedora Community Blog on April 09, 2021 08:32 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)! The Final freeze is underway. The F34 Final Go/No-Go meeting is Thursday.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.



Event | Format | Dates | CfP closes
elementary Developer Weekend | virtual | 26-27 June | ~20 Apr
All Things Open | virtual | 17-19 Oct | 30 Apr
Akademy | virtual | 18-25 June | 2 May
openSUSE Virtual Conference | virtual | 18-20 June | 4 May
DevConf.US | virtual | 2-3 Sep | 31 May

Help wanted

Upcoming test days

Prioritized Bugs

Bug ID | Component | Status

Upcoming meetings


Fedora Linux 34


Upcoming key schedule milestones:

  • 2021-04-20 — Final release early target
  • 2021-04-27 — Final release target #1


Change tracker bug status. See the ChangeSet page for details of approved changes.

Bug ID | Component | Bug Status | Blocker Status

Fedora Linux 35


Change proposal | Type | Status
Reduce dependencies on python3-setuptools | System-Wide | Approved
RPM 4.17 | System-Wide | FESCo #2593
Smaller Container Base Image (remove sssd-client, util-linux, shadow-utils) | Self-Contained | FESCo #2594
Erlang 24 | Self-Contained | Announced
Switching Cyrus Sasl from BerkeleyDB to GDBM | System-Wide | Announced
Debuginfod By Default | Self-Contained | Announced

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.


Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-14 appeared first on Fedora Community Blog.

Release of osbuild 28

Posted by OSBuild Project on April 09, 2021 12:00 AM

We are happy to announce version 28 of osbuild, this time with a large set of fixes and minor additions to the different stages bundled with osbuild. Furthermore, Fedora 35 is now supported as a host system.

Below you can find the official changelog from the osbuild-28 sources. All users are recommended to upgrade!

  • Add a new option to the org.osbuild.qemu assembler that controls the qcow2 format version (qcow2_compat).

  • Add history entries to the layers of OCI archives produced by the org.osbuild.oci-archive stage. This fixes push failures to quay.io.

  • Include only a specific, limited set of xattrs in OCI archives produced by the org.osbuild.oci-archive stage. This is specifically meant to exclude SELinux-related attributes (security.selinux) which are sometimes added even when invoking tar with the --no-selinux option.

  • The package metadata for the org.osbuild.rpm stage is now sorted by package name, to provide a stable sorting of the array independent of the rpm output.

  • Add a new runner for Fedora 35 (org.osbuild.fedora35) which is currently a symlink to the Fedora 30 runner.

  • The org.osbuild.ostree input now uses ostree-output as temporary directory and its description got fixed to reflect that it does indeed support pipeline and source inputs.

  • devcontainer: Include more packages needed for the Python extension and various tools to ease debugging and development in general. Preserve the fish history across container rebuilds.

  • Stage tests now write the produced metadata to /tmp so the actual data can be inspected in case there is a deviation.

  • CI: Start running image tests against 8.4 and execute image_tests directly from osbuild-composer-tests. Base CI images have been updated to Fedora 33.

Contributions from: Achilleas Koutsou, Alexander Todorov, Christian Kellner, David Rheinsberg

— Berlin, 2021-04-08

sevctl available soon in Fedora 34

Posted by Connor Kuehl on April 08, 2021 01:25 PM

I am pleased to announce that sevctl will be available in the Fedora repositories starting with Fedora 34. Fedora is the first distribution to include sevctl in its repositories 🎉.

sevctl is an administrative utility for managing the AMD Secure Encrypted Virtualization (SEV) platform, which is available on AMD’s EPYC processors. It makes many routine AMD SEV tasks quite easy, such as:

  • Generating, exporting, and verifying a certificate chain
  • Displaying information about the SEV platform
  • Resetting the platform’s persistent state

As of this writing, Fedora 34 is entering its final freeze, but sevctl is queued for inclusion once Fedora 34 thaws. sevctl is already available in Fedora Rawhide for immediate use.

Please submit all bug reports, patches, and feature requests to sevctl’s upstream repository on GitHub.

Binary RPMs can be found in the rust-sevctl packaging repo.

Help wanted: program management team

Posted by Fedora Community Blog on April 08, 2021 08:00 AM

I’d love to spend time in different Fedora teams helping them with program management work, but there’s only so much of me to go around. Instead, I’m putting together a program management team. At a high level, the role of the program management team will be two-fold. The main job is to embed in other teams and provide support to them. A secondary role will be to back up some of my duties (like wrangling Changes) when I am out of the office. If you’re interested, fill out the When Is Good survey by 15 April, or read on for more information.

About the team

You can read more about the team on Fedora Docs, but some of the duties I see team members providing include:

  • Coordination with other Fedora teams (e.g. Websites, design)
  • Consulting on team process development and improvement
  • Tracking development plans against the Fedora schedule
  • Issue triage and management
  • Shepherding Change proposals and similar requests

Since this is a new team, we still have a lot to figure out. As we go, we’ll figure out what works and adjust to match.

About you

You don’t need to be an expert to join the team. I’d like everyone to have some experience with either contributing to Fedora or project/program management. If you’re lacking in one, we can help fill in the gaps. You should be well-organized (or at least able to fake it) and have 3-5 hours a week available to work with one or more teams in Fedora.

How to join

Fill out the When Is Good survey by 15 April to indicate your availability for a kickoff meeting. This will be a video meeting so that we can have a high-bandwidth conversation. I’m looking for four or five people to start, but if I get more interest, we’ll figure out how to scale. If you’re not sure if this is something you want to do, come to the meeting anyway. You can always decide to not participate.

How to get help from this team

If you’re on another Fedora team and would like to get support from the program management team, great! We don’t have a mechanism for requesting help yet, but that will be coming soon.

The post Help wanted: program management team appeared first on Fedora Community Blog.

lavapipe reporting Vulkan 1.1 (not compliant)

Posted by Dave Airlie on April 07, 2021 08:22 PM

The lavapipe vulkan software rasterizer in Mesa is now reporting Vulkan 1.1 support.

It passes all CTS tests for the new features in 1.1, but it still fails the same 1.0 tests, so it isn't that close to conformant (line/point rendering are the main problem areas).

There are also a bunch of the 1.2 features implemented so that might not be too far away though 16-bit shader ops and depth resolve are looking a bit tricky.

If there are any specific features anyone wants to see, or any crazy places/ideas for using lavapipe out there, please either file a GitLab issue or hit me up on Twitter @DaveAirlie.

Using network bound disk encryption with Stratis

Posted by Fedora Magazine on April 07, 2021 08:00 AM

In an environment with many encrypted disks, unlocking them all is a difficult task. Network bound disk encryption (NBDE) helps automate the process of unlocking Stratis volumes. This is a critical requirement in large environments. Stratis version 2.1 added support for encryption, which was introduced in the article “Getting started with Stratis encryption.” Stratis version 2.3 recently introduced support for Network Bound Disk Encryption (NBDE) when using encrypted Stratis pools, which is the topic of this article.

The Stratis website describes Stratis as an “easy to use local storage management for Linux.” The short video “Managing Storage With Stratis” gives a quick demonstration of the basics. The video was recorded on a Red Hat Enterprise Linux 8 system; however, the concepts shown in the video also apply to Stratis in Fedora Linux.


This article assumes you are familiar with Stratis, and also Stratis pool encryption. If you aren’t familiar with these topics, refer to this article and the Stratis overview video previously mentioned.

NBDE requires Stratis 2.3 or later. The examples in this article use a pre-release version of Fedora Linux 34. The Fedora Linux 34 final release will include Stratis 2.3.

Overview of network bound disk encryption (NBDE)

One of the main challenges of encrypting storage is having a secure method to unlock the storage again after a system reboot. In large environments, typing in the encryption passphrase manually doesn’t scale well. NBDE addresses this and allows for encrypted storage to be unlocked in an automated manner.

At a high level, NBDE requires a Tang server in the environment. Client systems (using a Clevis pin) can automatically decrypt storage as long as they can establish a network connection to the Tang server. If there is no network connectivity to the Tang server, the storage must be decrypted manually.

The idea behind this is that the Tang server is only available on an internal network; if the encrypted device is lost or stolen, it no longer has access to the internal network to connect to the Tang server, and therefore is not automatically decrypted.

For more information on Tang and Clevis, see the man pages (man tang, man clevis) , the Tang GitHub page, and the Clevis GitHub page.

Setting up the Tang server

This example uses another Fedora Linux system as the Tang server with a hostname of tang-server. Start by installing the tang package:

dnf install tang

Then enable and start the tangd.socket with systemctl:

systemctl enable tangd.socket --now

Tang uses TCP port 80, so you also need to open that in the firewall:

firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --add-port=80/tcp

Finally, run tang-show-keys to display the output signing key thumbprint. You’ll need this later.

# tang-show-keys

Creating the encrypted Stratis Pool

The previous article on Stratis encryption covers how to set up an encrypted Stratis pool in detail, so this article won’t repeat that in depth.

The first step is capturing a key that will be used to decrypt the Stratis pool. Even when using NBDE, you need to set this, as it can be used to manually unlock the pool in the event that the NBDE server is unreachable. Capture the pool1 key with the following command:

# stratis key set --capture-key pool1key
Enter key data followed by the return key:

Then create an encrypted Stratis pool named pool1 (using the pool1key just created) on the /dev/vdb device:

# stratis pool create --key-desc pool1key pool1 /dev/vdb

Next, create a filesystem in this Stratis pool named filesystem1, create a mount point, mount the filesystem, and create a testfile in it:

# stratis filesystem create pool1 filesystem1
# mkdir /filesystem1
# mount /dev/stratis/pool1/filesystem1 /filesystem1
# cd /filesystem1
# echo "this is a test file" > testfile

Binding the Stratis pool to the Tang server

At this point, we have the encrypted Stratis pool created, and also have a filesystem created in the pool. The next step is to bind your Stratis pool to the Tang server that you just set up. Do this with the stratis pool bind nbde command.

When you make the Tang binding, you need to pass several parameters to the command:

  • the pool name (in this example, pool1)
  • the key descriptor name (in this example, pool1key)
  • the Tang server name (in this example, http://tang-server)

Recall that on the Tang server, you previously ran tang-show-keys, which showed that the Tang output signing key thumbprint is l3fZGUCmnvKQF_OA6VZF9jf8z2s. In addition to the previous parameters, you either need to pass this thumbprint with the --thumbprint l3fZGUCmnvKQF_OA6VZF9jf8z2s parameter, or skip verification of the thumbprint with the --trust-url parameter.

It is more secure to use the --thumbprint parameter. For example:

# stratis pool bind nbde pool1 pool1key http://tang-server --thumbprint l3fZGUCmnvKQF_OA6VZF9jf8z2s

Unlocking the Stratis Pool with NBDE

Next, reboot the host and validate that you can unlock the Stratis pool with NBDE, without requiring the use of the key passphrase. After rebooting the host, the pool is no longer available:

# stratis pool list
Name Total Physical Properties

To unlock the pool using NBDE, run the following command:

# stratis pool unlock clevis

Note that you did not need to use the key passphrase. This command could be automated to run during the system boot up.
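As a sketch of what that automation could look like, here is a hypothetical systemd unit that attempts the Clevis unlock at boot. The unit name, ordering dependencies, and file path are assumptions to adapt to your environment, not an official Stratis recommendation:

```ini
[Unit]
Description=Unlock Stratis pool via NBDE (Clevis/Tang)
# Wait until the network is up so the Tang server is reachable,
# and until the Stratis daemon is running.
After=network-online.target stratisd.service
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/stratis pool unlock clevis
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

If saved as, say, /etc/systemd/system/stratis-unlock.service (a hypothetical name), it would be enabled with systemctl enable stratis-unlock.service.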

At this point, the pool is now available:

# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 583.65 MiB / 4.41 GiB ~Ca, Cr

You can mount the filesystem and access the file that was previously created:

# mount /dev/stratis/pool1/filesystem1 /filesystem1/
# cat /filesystem1/testfile
this is a test file

Rotating Tang server keys

Best practices recommend that you periodically rotate the Tang server keys and update the Stratis client servers to use the new Tang keys.

To generate new Tang keys, start by logging in to your Tang server and looking at the current status of the /var/db/tang directory. Then, run the tang-show-keys command:

# ls -al /var/db/tang
total 8
drwx------. 1 tang tang 124 Mar 15 15:51 .
drwxr-xr-x. 1 root root 16 Mar 15 15:48 ..
-rw-r--r--. 1 tang tang 361 Mar 15 15:51 hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
-rw-r--r--. 1 tang tang 367 Mar 15 15:51 l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk
# tang-show-keys

To generate new keys, run tangd-keygen and point it to the /var/db/tang directory:

# /usr/libexec/tangd-keygen /var/db/tang

If you look at the /var/db/tang directory again, you will see two new files:

# ls -al /var/db/tang
total 16
drwx------. 1 tang tang 248 Mar 22 10:41 .
drwxr-xr-x. 1 root root 16 Mar 15 15:48 ..
-rw-r--r--. 1 tang tang 361 Mar 15 15:51 hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
-rw-r--r--. 1 root root 354 Mar 22 10:41 iyG5HcF01zaPjaGY6L_3WaslJ_E.jwk
-rw-r--r--. 1 root root 349 Mar 22 10:41 jHxerkqARY1Ww_H_8YjQVZ5OHao.jwk
-rw-r--r--. 1 tang tang 367 Mar 15 15:51 l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk

And if you run tang-show-keys, it will show the keys being advertised by Tang:

# tang-show-keys

You can prevent the old key (starting with l3fZ) from being advertised by renaming the two original files to be hidden files, starting with a period. With this method, the old key will no longer be advertised, however it will still be usable by any existing clients that haven’t been updated to use the new key. Once all clients have been updated to use the new key, these old key files can be deleted.

# cd /var/db/tang
# mv hbjJEDXy8G8wynMPqiq8F47nJwo.jwk   .hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
# mv l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk   .l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk

At this point, if you run tang-show-keys again, only the new key is being advertised by Tang:

# tang-show-keys

Next, switch over to your Stratis system and update it to use the new Tang key. Stratis supports doing this while the filesystem(s) are online.

First, unbind the pool:

# stratis pool unbind pool1

Next, set the key with the original passphrase used when the encrypted pool was created:

# stratis key set --capture-key pool1key
Enter key data followed by the return key:

Finally, bind the pool to the Tang server with the updated key thumbprint:

# stratis pool bind nbde pool1 pool1key http://tang-server --thumbprint iyG5HcF01zaPjaGY6L_3WaslJ_E

The Stratis system is now configured to use the updated Tang key. Once any other client systems using the old Tang key have been updated, the two original key files that were renamed to hidden files in the /var/db/tang directory on the Tang server can be backed up and deleted.

What if the Tang server is unavailable?

Next, shut down the Tang server to simulate it being unavailable, then reboot the Stratis system.

Again, after the reboot, the Stratis pool is not available:

# stratis pool list
Name Total Physical Properties

If you try to unlock it with NBDE, this fails because the Tang server is unavailable:

# stratis pool unlock clevis
Execution failed:
An iterative command generated one or more errors: The operation 'unlock' on a resource of type pool failed. The following errors occurred:
Partial action "unlock" failed for pool with UUID 4d62f840f2bb4ec9ab53a44b49da3f48: Cryptsetup error: Failed with error: Error: Command failed: cmd: "clevis" "luks" "unlock" "-d" "/dev/vdb" "-n" "stratis-1-private-42142fedcb4c47cea2e2b873c08fcf63-crypt", exit reason: 1 stdout: stderr: /dev/vdb could not be opened.

At this point, without the Tang server being reachable, the only option to unlock the pool is to use the original key passphrase:

# stratis key set --capture-key pool1key
Enter key data followed by the return key:

You can then unlock the pool using the key:

# stratis pool unlock keyring

Next, verify the pool was successfully unlocked:

# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 583.65 MiB / 4.41 GiB ~Ca, Cr

Changes in technologies supported by syslog-ng: Python 2, CentOS 6 & Co.

Posted by Peter Czanik on April 06, 2021 11:39 AM

Technology is continuously evolving. There are regular changes in platforms running syslog-ng: old technologies disappear, and new technologies are introduced. While we try to provide stability and continuity to our users, we also need to adapt. Python 2 reached its end of life a year ago, CentOS 6 in November 2020. Using Java-based drivers has been problematic for many, so they were mostly replaced with native implementations.

From this blog you can learn about recent changes affecting syslog-ng development and packaging.

Python 2

Python 2 officially reached its end of life on 1 January 2020, well over a year ago. Still, until recently, compatibility with Python 2 was tested continuously by developers. This testing was disabled when syslog-ng 3.31 was released, which means that if anything changes in Python-related code in syslog-ng, there is no guarantee that it will still work with Python 2.

Packaging changes started even earlier. Distribution packages switched from Python 2 to Python 3 years ago, as did the unofficial syslog-ng packages for openSUSE/SLES and Fedora/RHEL. While requests for Python 3 support were regular, nobody asked for Python 2 after the switch. The last place supporting Python 2 as an alternative was DBLD, syslog-ng’s own container-based build tool for developers. That support was also dropped for Fedora/RHEL, right before the 3.31 release.

CentOS 6

RHEL 6/CentOS 6 had been the most popular syslog-ng platform for many years. Many users liked it due to the lack of systemd. But all good things come to an end, and RHEL 6 (and thus CentOS 6) reached its end of life in November 2020.

Unofficial syslog-ng RPM packages for the platform were maintained on the Copr build service. Their policy is to remove packages 180 days after a platform reaches its end of life (EoL). I do not know the exact date, but around the end of April all RHEL 6/CentOS 6 repositories will be removed from Copr.

Note: if you still need those packages somewhere, create a local mirror for yourself. I do not have a local backup or a build and test environment anymore.
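For reference, one way to build such a local mirror is dnf's reposync plugin. This is a sketch; the repository ID below is a placeholder for whatever dnf repolist shows for your Copr repository, and the download path is arbitrary:

```shell
# reposync is provided by the dnf-plugins-core package.
sudo dnf install dnf-plugins-core

# Mirror the repository, metadata included, into a local directory.
sudo dnf reposync --repoid=<copr-repo-id> \
    --download-metadata \
    --download-path=/srv/mirror
```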

CentOS 7 ARM

RHEL 7 for ARM also reached its EoL in January. While CentOS 7 keeps supporting ARM, the Copr build service removed support for this platform and will remove the related repositories, just as it did for CentOS 6. If you need those packages, you have until the end of June to create a local mirror of them.

Java-based destination drivers

A long time ago, using Java to extend syslog-ng was the best direction to go. Client libraries for popular technologies were unavailable in C, but they were readily available in Java, for example for Elasticsearch and Kafka. Unfortunately, using Java also has several drawbacks, like increased resource usage and difficult configuration. Also, Java-based destination drivers could not be packaged as part of Linux distributions. So, as soon as native C-based clients became available, people switched to them. Only HDFS is not supported by any other means, but nobody seems to use it anymore – at least in the open source world.

What does it mean for you? Java-based drivers are still regularly tested by the syslog-ng development team. On the other hand, even the unofficial openSUSE/SLES and Fedora/RHEL packages dropped support for them. Java support is still there, as some people developed their own Java-based drivers. If you really need these drivers, use syslog-ng 3.27 from the unofficial RPM repositories or compile syslog-ng yourself. Unofficial Debian/Ubuntu packages still include Java-based drivers and on FreeBSD you can still build them in the sysutils/syslog-ng port.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @Pczanik.

Summary from the Diversity & Inclusion Team Meetup

Posted by Fedora Community Blog on April 06, 2021 08:00 AM

Fedora’s Diversity & Inclusion Team held a virtual meetup on Sunday March 21st, 2021. We had more than 20 attendees, with three main planning sessions and a Storytelling Workshop. The team had a successful event connecting, processing, and looking towards the future. The Storytelling Workshop was a fun way to unwind after a day of meetings and do something different as a team.

The team decided that Justin Flory will step down as D&I Advisor to the Council at the end of this release cycle and Vipul Siddharth will step into the position. We discussed the vision we have for the team moving forward. One of the main sentiments was that we would like to involve the “techy” or coding teams of the Fedora Project more in our efforts and events. To further that involvement, we are discussing moving from Fedora Women’s Day (FWD) to a Fedora Week of Diversity (also FWD), which would still include the former Fedora Women’s Day. Moving to a Fedora Week of Diversity would provide more opportunities to highlight all aspects of diversity, as well as give the team more creative freedom with the event.

The Diversity & Inclusion Team is always looking for new contributors. We are in a planning and formulation phase, so feel free to join us in #fedora-diversity on IRC if you are interested in becoming involved!

<figure class="wp-block-image size-large"></figure>

Meetup Readout

Session: D&I Team Debrief

General Notes

  • D&I: This is still really new work in the scope of history, especially in the software & tech setting
    • draining work
    • resources may not be there, or not known how to use
    • self care is important
  • It would be great to have check ins
    • Main goal would be to stay connected and provide support
    • Casual D&I meetups, maybe monthly?
  • People have personal ties to Fedora and why it is important to them. Why do you Fedora?
    • Brings confidence
    • It is always there, through good times and bad
    • Friendship, fellowship

D&I Team History

  • Starting with a D&I rep to the council and grew from there
  • Diversity panel
    • talked about experiences
    • some adversity at the panel
  • FWD was one of the first things to be worked on
    • FWD is exciting because it happens all over the world, fosters inclusion, and is a decentralized way to empower communities
    • FWD 2020 virtual was a success, but it was mostly on Marie to organize
  • This team was always volunteer driven (no full time Fedorans)
    • When curve balls hit, things fell off the track
    • Council D&I rep was more work than is visible
  • Looking for support from the Council/external sources
    • Help to prioritize things
    • Help with strategy/direction

What did we learn?

  • Burnout
    • Sustainability = onboarding new folks was lacking
  • Being connected is important. We are a community, not just busy bees!
    • It is hard to pick up doing things quickly if we are not feeling connected.
    • We would like to start monthly informal meet-ups where we get together and hang out, not just work on Fedora.
  • Newcomers/onboarding
    • Need easyfix tickets for newcomers to get involved easily, right now it is hard to discern what to do when interested in the D&I team.
    • Too many tasks to choose from, hard to decipher where to start
      • People experience choice paralysis because fedora is huge
      • Created a limited number of tasks for newcomers to choose from

What did we do well?

  • Mentoring interns (mostly as individuals, but there is a lot of overlap)
  • Fedora Women’s Day
    • Six years running
    • Participation across four continents
  • Outreachy
    • Sponsoring Outreachy internships from D&I budget allocation
    • Coordinating and supporting
  • Happiness Packets
  • Fedora Appreciation Week
  • Friendship & support to each other
  • Representation
    • We’ve seen an increase in participation of women at Flock
  • Unconscious bias & imposter syndrome sessions

Session: D&I team: Goals, focus, future

General notes

  • Marie is here to help run meetings and support with project management
  • Justin is stepping down as D&I rep, to be replaced by Vipul

What would we like to see

  • Exposure to a broader swath of Fedora
    • How can we involve the rest of the community?
    • We need to promote events better
    • Reach out more through various platforms, to let folks know that we are here
  • The mentorship role in our team could be better aligned with Join SIG/ambassador mentor role
    • Something could be included in the new Join tickets
  • Hybrid events moving forward
    • Team focuses mostly on virtual events, folks can also do local events
  • Matrix/Element will also be improved
    • How can we capitalize on the new chat system?
      • Events, socials, video/audio, integration into other platforms
  • Articles/events/content that address mental health/marginalized folks/empowering & encouraging folks & underrepresented people
  • More docs (D&I) on how to continue contributing to (D&I and Fedora in general)


  • It is important for us to stay focused as a team.
    • We need to prioritize. Tackle limited things at a time to achieve progress.
  • Use our current infrastructure as best we can
  • Categories of improvement/work
    • Docs (D&I doc) (1)
      • Think more strategically, what do we want to put up here?
      • What do we do?
      • How to start (“fedora-join links ?”)
      • How to help?
        • How to continue to help? (Pointing to “fedora-join ?”)
    • Promotion/Marketing/Content (2)
    • Events (3)
    • Outreach/Support/Resource Library (4)

Session: Future of FWD

General Notes

  • “Fedora Women’s Day” -> “Fedora Week of Diversity”
  • Should include FWD local/virtually
    • Two time-zone events (EU/US) if it becomes too big, so we can accommodate everyone
  • In-person events could happen that week, and at the end there could be a virtual event
    • do local events, and then come back and connect with the community
  • Think about how to get techy people involved in the event
  • Let’s think about intersectionality, how can we feature that, how can we engage in a technical/creative community
    • Can this be a longer term process? Involving folks in activities before and after
  • We can include Women’s Days and other “days”, if we have the proper representation

Outcomes (vision)

  • Networking
  • Involving the broader Fedora community in the event
  • Inspiration

Brainstorm Session

  • Week of creation
  • Diversity hackathon
  • Week with a session a day with a guest facilitating conversations related to D&I
  • Fedora Stories: Building on contributor stories? We never found a permanent home for that.
  • FWD/D&I in podcast and make talks (non-tech stuff)
  • Fedora Zine takeover or make a bunch of pages for the zine
  • Mixing the idea of building diversity themed tech/craft projects
    • Featured during the virtual component. This could be a great way to get people to network and engage with one another.
    • Can help with education & incorporating & storytelling.

The post Summary from the Diversity & Inclusion Team Meetup appeared first on Fedora Community Blog.

Querying hostnames from beaker

Posted by Adam Young on April 05, 2021 09:11 PM

If you have requested a single host from beaker, the following one liner will tell the hostname for it.

bkr job-results   $( bkr job-list  -o $USER  --unfinished | jq -r  ".[]" )   | xpath -q -e string\(/job/recipeSet/recipe/roles/role/system/@value\)

This requires jq and xpath, as well as the beaker command line packages.

For me on Fedora 33 the packages are:

  • perl-XML-XPath-1.44-7.fc33.noarch
  • jq-1.6-5.fc33.x86_64
  • python3-beaker-1.10.0-9.fc33.noarch
  • beaker-redhat-0.2.1-2.fc33eng.noarch
  • beaker-common-28.2-1.fc33.noarch
  • beaker-client-28.2-1.fc33.noarch
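If you prefer to avoid the xpath tool, the same extraction can be sketched in Python with just the standard library. The sample below is a hypothetical, heavily trimmed job-results document that keeps only the elements the XPath expression above traverses; the real output of bkr job-results contains much more.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily trimmed sample of a `bkr job-results` document;
# the real XML has many more elements, but the path to the hostname
# (/job/recipeSet/recipe/roles/role/system/@value) is the same one the
# one-liner above queries.
SAMPLE = """
<job>
  <recipeSet>
    <recipe>
      <roles>
        <role value="STANDALONE">
          <system value="host01.example.com"/>
        </role>
      </roles>
    </recipe>
  </recipeSet>
</job>
"""

def hostnames(results_xml):
    """Return every system hostname recorded in a job-results document."""
    root = ET.fromstring(results_xml)  # root element is <job>
    return [system.get("value")
            for system in root.findall("./recipeSet/recipe/roles/role/system")]

print(hostnames(SAMPLE))  # ['host01.example.com']
```

In practice you would feed hostnames() the stdout of a subprocess call to bkr job-results instead of the inline sample.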

Contribute at Fedora Linux 34 Upgrade, Audio, Virtualization, IoT, and Bootloader test days

Posted by Fedora Magazine on April 05, 2021 08:50 PM

Fedora test days are events where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are five upcoming test events in the coming weeks.

  • Wednesday, April 07, is to test the upgrade from Fedora 32 and 33 to Fedora 34.
  • Friday, April 09, is to test the audio changes made by the PipeWire implementation.
  • Tuesday, April 13, is to test virtualization in Fedora 34.
  • Monday, April 12 to Friday, April 16, is to test Fedora IoT 34.
  • Monday, April 12 and Tuesday, April 13, is to test the bootloader.

Come and test with us to make the upcoming Fedora Linux 34 even better. Read more below on how to do it.

Upgrade test day

As we come closer to the Fedora Linux 34 release date, it’s time to test upgrades. This release has a lot of changes, so it is essential that we test the graphical upgrade methods as well as the command line. As part of this test day, we will test upgrading from a fully updated F32 and F33 to F34 for all architectures (x86_64, ARM, aarch64) and variants (Workstation, Cloud, Server, Silverblue, IoT).
This test day will happen on Wednesday, April 07.

Audio test day

There is a recent proposal to replace the PulseAudio daemon with a functionally compatible implementation based on PipeWire. This means that all existing clients using the PulseAudio client library will continue to work as before, as well as applications shipped as Flatpak. The test day is to test that everything works as expected.
This will occur on Friday, April 09

Virtualization test day

We are going to test all forms of virtualization possible in Fedora. The test day will focus on testing Fedora or your favorite distro inside a bare-metal installation of Fedora running Boxes, KVM, VirtualBox, or whatever you have. The general features of installing the OS and working with it are outlined in the test cases, which you will find on the results page.
This will be happening on Tuesday, April 13.

IoT test week

For this test week, the focus is all-around; test all the bits that come in a Fedora IoT release as well as validate different hardware. This includes:

  • Basic installation to different media
  • Installing in a VM
  • rpm-ostree upgrades, layering, rebasing
  • Basic container manipulation with Podman.

We welcome all different types of hardware, but have a specific list of target hardware for convenience. This will be happening the week of Monday, April 12 to Friday, April 16.

Bootloader Test Day

This test day will focus on ensuring that the new shim and GRUB work with BIOS and EFI/UEFI with Secure Boot enabled. We would like the community to test it on as many types of off-the-shelf hardware as possible. The test day will happen Monday, April 12 and Tuesday, April 13. More information is available on the wiki page.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not done so in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days are on the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.

[Ed. note: Updated at 1920 UTC on 6 April to add IoT test day information; Updated at 1905 UTC on 7 April to add bootloader test day information]

Running the UniFi Network Controller in a Docker Container

Posted by Jon Chiappetta on April 05, 2021 07:44 PM

If you are needing a more generalized and containerized method to run the UniFi Network Controller and you don’t want it running on your main system, you can use a trusted app like Docker to achieve this task!

I made a new repo that has some Dockerfile supported scripts which will pull in the latest Debian container and customize a new image from scratch to run MongoDB + Java8. This is useful if you don’t particularly trust the pre-made, public Docker containers that are already out there!

git clone && cd dockerfi/ — The build and run commands are listed in the main script file (once the container has been started, just browse to https:// and restore from backup). The UI version is statically set to the previous stable release, 6.0.45!
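I have not reproduced the repo’s actual scripts here, but a Dockerfile following the same idea might look roughly like the sketch below. Everything in it is an assumption on my part: the Debian release (stretch, because it still ships both MongoDB and Java 8 in its own repositories), the package names, and the download URL for the 6.0.45 .deb mentioned above.

```dockerfile
# Sketch only -- not the repo's actual Dockerfile.
FROM debian:stretch

# MongoDB and Java 8 are the controller's two main runtime dependencies.
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates curl mongodb-server openjdk-8-jre-headless \
        jsvc logrotate

# Assumed download location for the 6.0.45 package mentioned above.
RUN curl -fLo /tmp/unifi.deb \
        https://dl.ui.com/unifi/6.0.45/unifi_sysvinit_all.deb \
    && dpkg -i /tmp/unifi.deb \
    && rm /tmp/unifi.deb

EXPOSE 8080 8443
WORKDIR /usr/lib/unifi
CMD ["java", "-jar", "lib/ace.jar", "start"]
```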

Note: if your devices are on a separate layer-3 network, point them at the controller with: set-inform http://192.168.X.Y:8080/inform


Edit: I made a small YouTube video running the script:

<figure class="wp-block-embed is-type-rich is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<iframe allowfullscreen="true" class="youtube-player" height="372" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation" src="https://www.youtube.com/embed/hNBjb2b1gNQ?version=3&amp;rel=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;fs=1&amp;hl=en&amp;autohide=2&amp;wmode=transparent" style="border:0;" width="660"></iframe>

Fedora Linux 34 Upgrade Test Day 2021-04-07

Posted by Fedora Community Blog on April 05, 2021 07:21 PM
Fedora 34 Upgrade test day

Wednesday 7 April is the Fedora Linux 34 Upgrade Test Day! As part of the preparation for Fedora Linux 34, we need your help to test if everything runs smoothly!

Why Upgrade Test Day?

We’re approaching the release date for Fedora Linux 34. Most users will be upgrading to Fedora Linux 34, and this test day will help us understand if everything is working perfectly. This test day will cover both GNOME graphical upgrades and upgrades done using DNF.

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles!

The post Fedora Linux 34 Upgrade Test Day 2021-04-07 appeared first on Fedora Community Blog.

crocus: gallium for the gen4-7 generation

Posted by Dave Airlie on April 05, 2021 02:38 AM

The crocus project was recently mentioned in a phoronix article. The article covered most of the background for the project.

Crocus is a gallium driver to cover the gen4-gen7 families of Intel GPUs. The basic GPU list is 965, GM45, Ironlake, Sandybridge, Ivybridge and Haswell, with some variants thrown in. This hardware currently uses the Intel classic 965 driver. This hardware is all gallium-capable, and since we'd like to put the classic drivers out to pasture and remove support for the old infrastructure, it would be nice to have these generations supported by a modern gallium driver.

The project was initiated by Ilia Mirkin last year, and I've expended some time in small bursts to moving it forward. There have been some other small contributions from the community. The basis of the project is a fork of the iris driver with the old relocation based batchbuffer and state management added back in. I started my focus mostly on the older gen4/5 hardware since it was simpler and only supported GL 2.1 in the current drivers. I've tried to cleanup support for Ivybridge along the way.

The current status of the driver is in my crocus branch.

Ironlake is the best supported: it runs openarena and supertuxkart, piglit shows only around a 100-test delta vs i965 (mostly edgeflag related), and there is only one missing feature (vertex shader push constants).

Ivybridge has just stopped hanging on second batch submission, and glxgears runs on it. Openarena starts to the menu but misrenders, and a piglit run completes with some GPU hangs and a quite large delta. I expect IVB to move faster now that I've solved the worst hang.

Haswell runs glxgears as well.

I think once I take a closer look at Ivybridge/Haswell and can get Ilia (or anyone else) to do some rudimentary testing on Sandybridge, I will start taking a closer look at upstreaming it into Mesa proper.

Episode 265 – The lies closed source can tell, open source can’t

Posted by Josh Bressers on April 05, 2021 12:01 AM

Josh and Kurt talk about the PHP backdoor and the Ubiquiti whistleblower. The key takeaway is to note how an open source project cannot cover up an incident, while closed source can and will cover up damaging information.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2407-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_265_The_lies_closed_source_can_tell_open_source_cant.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_265_The_lies_closed_source_can_tell_open_source_cant.mp3</audio>

Show Notes

Elixir and Phoenix after two years

Posted by Josef Strzibny on April 05, 2021 12:00 AM

Thoughts on the Elixir language and its Phoenix framework after two years of professional work.

I am a seasoned web developer (working primarily with Ruby on Rails before) and someone who got an opportunity to work on a commercial Elixir project. During the past 2 years, I wrote a lot of Elixir, which I had to learn from scratch. I always found this kind of personal post interesting, so I figured I would write one for you.


If you haven’t heard about Elixir yet, I recommend watching the Elixir Documentary featuring the creator José Valim for a start. I don’t remember exactly how I found out about Elixir, but most likely from some Ruby community news. I lurked around Elixir space, read many exciting blog posts, and was generally impressed. What drew me to Elixir? While many good things can be said for Elixir, I liked the idea of the preemptive scheduler and the per-process garbage collector. Why?

The preemptiveness of Beam (the Erlang virtual machine) means that the system behaves reasonably under high load. That’s not a small thing. Being able to connect with a remote shell to your running system and still operate it despite the fact it’s under 100% load is quite something. A per-process GC then means that you don’t run GC most of the time while processing web requests. Both give you very low latency. If you want to know what I am talking about, go and watch the excellent video by Saša Jurić, The Soul of Erlang and Elixir. It’s the best video out there to realize what Beam is about.

Despite my interest, though, I never actually went and wrote any Elixir. I even told myself that I would most likely pass on Elixir. The problems at hand seemed solvable by Ruby/Rails, and forcing oneself to learn a language without commercial adoption is difficult. To my surprise, one Elixir project appeared out of nowhere in Prague, where I stayed at the time.

I was working on my book full-time, and since I had no job per se, I accepted :). The project itself is not public yet, so while I would love to tell you more about it, you will have to wait for its public launch.


On the surface, Elixir feels Ruby-like, but soon you’ll realize it’s very different. It’s a strongly-typed, dynamic, but compiled language. Overall it’s very well designed and features a modern standard library.

Here are Elixir basic types:

iex> 1              # integer
iex> 0x1F           # integer
iex> 1.0            # float
iex> true           # boolean
iex> :atom          # atom / symbol
iex> "elixir"       # string
iex> [1, 2, 3]      # list
iex> [{:atom, "value"}, {:atom, "value2"}] # keyword list
iex> {1, 2, 3}      # tuple
iex> ~D[2021-03-30] # date sigil
iex> ~r/^regex$/    # regex sigil

As you can see, there are no arrays. Just linked lists and quite special keyword lists. We have symbols like in Ruby (with the same problems of mixing them with strings for keys access) and tuples that get used a lot to return errors (:ok vs {:error, :name}). I love how tuples make the flow of returning errors standardized (even though it’s not enforced in any way).

Then there are maps (kind of Ruby’s Hash):

iex> map = %{a: 1, b: 2}
%{a: 1, b: 2}
iex> map[:a]
1
iex> %{map | a: 3}
%{a: 3, b: 2}

And named structs:

iex> defmodule User do
...>   defstruct name: "John", age: 27
...> end

Structs work similarly to maps because a struct is basically a wrapper on top of a map.

We can use typespecs to add typing annotation for structs and function definitions, but they are limited. Elixir compiler won’t use them. Still, they help with documentation, and their syntax is actually nice:

defmodule StringHelpers do
  @type word() :: String.t()

  @spec long_word?(word()) :: boolean()
  def long_word?(word) when is_binary(word) do
    String.length(word) > 8
  end
end

Arguably, we get one of the best pattern-matching implementations out there. You can pattern match everything all the time. Thanks to pattern matching, you also almost don’t need static typing. Ruby is getting there as well but could never really match the wholesome pattern-matching experience of Elixir, which was designed around pattern matching from the beginning. You pattern match in function definitions on what arguments you accept, in case statements, and in your regular code.

I really like pattern matching combined with function overloading:

def add(nil, nil), do: {:error, :cannot_be_nil}
def add(_x, nil), do: {:error, :cannot_be_nil}
def add(nil, _y), do: {:error, :cannot_be_nil}
def add(x, y), do: x + y

We could also pattern match on structs or use guard clauses:

def add(nil, nil), do: {:error, :cannot_be_nil}
def add(x, y) when is_integer(x) and is_integer(y) do
  x + y
end
def add(_x, _y), do: {:error, :has_to_be_integer}

You can also make your own guards with defguard/1 so guards can be pretty flexible.

Elixir is not an object-oriented language. We practically only write modules and functions. This helps tremendously in understanding code. No self. Just data in and out of functions, and composition with pipes. Unfortunately, there is no early return, which could sometimes be useful.

Standard library

The standard library is excellent and well documented. It feels modern because it is modern. If you tried Elixir before, you might remember having to use external libraries for basic calendar work, but that’s the past. It does not try to implement everything, as the philosophy is that you can also rely on the Erlang standard library. Examples of that are the functions to work with ETS (Erlang Term Storage) and the rand and zip modules.

Calling Erlang comes without a performance penalty, and when I encounter an Erlang call, it does not even feel weird. All in all, it feels clean and well designed, especially compared to Ruby, which keeps a lot of legacy cruft in its standard library.

ExDoc might be the first impressive thing you get to see in the Elixir world. Just go on and browse the Elixir docs. Beautifully designed and featuring nice search, version switching, day and night modes. I love it. And as for the code documentation itself, Elixir is amazing. So are the docs for the main libraries and modules (Phoenix, Absinthe). Some not-so-common ones could use help, though.


Elixir’s tooling is one of the best out there. Outside static type checking or editor support, that is. You get Mix which you use as a single interface for all the tasks you might want to do around a given project. From starting and compiling a project, managing dependencies, running custom tasks (like Rake from Ruby) to making releases for deployment. There is a standardized mix format to format your code as well:

$ mix new mix_project && cd mix_project
$ mix deps.get
$ mix deps.compile
$ mix test
$ mix format
$ mix release

A little annoying is Erlang’s build tool rebar3, which you will use indirectly and which causes weird compilation errors:

==> myapp
** (Mix) Could not compile dependency :telemetry, "/home/strzibny/.mix/rebar3 bare compile --paths="/home/strzibny/Projects/digi/backend/_build/dev/lib/*/ebin"" command failed. You can recompile this dependency with "mix deps.compile telemetry", update it with "mix deps.update telemetry" or clean it with "mix deps.clean telemetry"

Luckily the helpful messages will guide you to fix it:

$ mix deps.get telemetry
Resolving Hex dependencies...
Dependency resolution completed:
$ mix deps.compile telemetry
===> Compiling telemetry

The question is, why did it have to fail the first time?

Moving on from Mix, you’ll get to use the very nice IEx shell that I wrote about in detail already. My favorite things about IEx are the easy recompilation of the project:

iex(1)> recompile

And the easy and native way to set breakpoints:

iex(1)> break!(MyModule.my_func/1)

The only annoyance comes from Elixir data types and how they work. Inspecting lists requires this:

iex(3)> inspect [27, 35, 51], charlists: :as_lists

Also, the Ruby IRB’s recent multiline support would be highly appreciated.

And there is more! Beam also gives you a debugger and an observer. To start Debugger:

iex(1)> :debugger.start()


Image borrowed from the official page on debugging.

And Observer:

iex(1)> :observer.start()

They are both graphical.

Debugger’s function is clear; Observer helps to oversee the processes and supervision trees, as the Erlang VM is based on the actor model with lightweight supervised processes. Coming from Ruby, I also like how the compiler catches a bunch of errors before you get to run your program. Then we have Dialyzer, which can catch a ton of stuff that’s wrong, including the optional types from typespecs. But it’s far from perfect (both in function and speed), and so many people don’t run it.

Most developers seek a great IDE or editor integration. I am using Sublime Text together with Elixir language server, and I documented the setup before. There is also a good plugin for IntelliJ IDEA that might be the best you can get right now. Elixir is not Java, but many nice things work.

The only real trouble for me is that my setup is quite resource-hungry. So while super helpful, I do tend to disable it at times. In general, I would say the editor support is roughly on par with Ruby, but I also believe Elixir’s design allows for great tools; we just don’t have them yet.


Testing Elixir code is pretty nice. I like that everyone uses ExUnit. One cool thing is doctests:

# Test
defmodule CurrencyConversionTest do
  @moduledoc false

  use ExUnit.Case, async: true

  doctest CurrencyConversion
end

# Module
defmodule CurrencyConversion do
  @doc """
  Convert from currency A to B.

  ### Example

      iex> convert(Money.new(7_00, :CHF), :USD)
      %Money{amount: 7_03, currency: :USD}
  """
  def convert(money, currency) do
    # ...
  end
end

The above documentation’s example will be run as part of the test suite. Sweet!

One thing that takes getting used to, coming from Rails, is mocking. While you might like the end result, it certainly is more tedious to write. This is because you cannot just override anything like in the Ruby world. When I use the Mox library, I usually have to:

  • Write a behavior for my module (something like an interface)
  • Use this behavior for my real module and my new stub (that will return a happy path)
  • Register the stub with Mox
  • Use configuration to set the right module for dev/production and testing

That way, you can easily test a default response and also use Mox to return something else for each test (such as an error response). I have a post explaining that.

The language’s nature of modules and functions ensures that your testing is straightforward, and multicore support ensures your tests run really, really fast. The downside to a fast test suite is that you have to compile it first. So do not necessarily expect fast tests for your projects in CI. You will, however, see a considerable improvement over Rails test suites once they get big.


Phoenix is the go-to web framework for Elixir. It’s neither Rails in scope nor a microframework like Sinatra. It has some conventions, but you can change them without any big problem. Part of the reason is that it’s essentially a library, and that you pair it with Ecto, your “ORM”. You write your Elixir application “with” Phoenix, rather than writing a Phoenix application (as with Rails).

Apart from being fast (Elixir is not fast per se, but templates are super-efficient, for example), it has two unique features that make it stand out even more.

One of those is LiveView, which lets you build interactive applications without writing JavaScript. And the second is LiveDashboard, a beautiful dashboard built on top of LiveView that you can include in your application in 2 minutes. It gives you many tabs of useful information about the currently running system (and you can switch nodes easily too). Some of those are:

  • CPU, memory, IO breakdown
  • Metrics (think telemetry)
  • Request Logger (web version of console logs on steroids)
  • web version of :observer

I wish Phoenix had a maintenance policy like Rails so it could be taken more seriously. On the other hand, I think it doesn’t change as much anymore. The Phoenix name and logo are also a nice touch as a reference to Beam’s fault tolerance (your Elixir processes will come back from the ashes).


What’s important to me in a web framework is productivity. I don’t care that I can craft the best-performing applications in C, or have everything compiler-checked. I care about getting stuff done. I prefer frameworks that are designed for small teams because I want to be productive on my own. Phoenix is not as batteries-included as Rails, although having features like LiveDashboard is probably better than having Action Text baked in. There are file uploads in LiveView, but it’s not a complete framework like Active Storage. So it’s a little behind Rails in productivity, but it’s still a very productive framework.

I am also convinced Phoenix scales better not only for hardware but also in terms of the codebase. I like the idea of splitting lib/app and lib/app_web from the beginning and the introduction of contexts. Context tells you to split your lib/app in a kind of service-oriented way where you would have app/accounting.ex or app/accounts.ex as your entry points to the functionality of your app.

Another interesting aspect is that since Phoenix is compiled, browsing your development version of the app is not slow like in Rails. It flies. Errors are also pretty good (and both error reporting and compiler warnings are improving every day):

constraint error when attempting to insert struct:

    * unique_validation_api_request_on_login_id_year_week (unique_constraint)

If you would like to stop this constraint violation from raising an
exception and instead add it as an error to your changeset, please
call `unique_constraint/3` on your changeset with the constraint
`:name` as an option.

The changeset defined the following constraints:

    * unique_address_api_request_on_login_id_year_week (unique_constraint)

But what I really really like? The development of Phoenix full-stack applications. No split between an Asset Pipeline and Webpacker (two competing solutions) and everything works without separately running your development Webpack server. You change a React component, switch to a Firefox window, and the change is there! And the only thing you were running is your mix phx.server.

But productivity cannot happen without good libraries. While the Elixir and Phoenix ecosystem has some outstanding options for things like GraphQL (Absinthe) and Stripe (Stripity Stripe), there are not many good options for cloud libraries and other integrations. I feel like Stripe is the only exception here, but it’s not an official SDK.

Sometimes this is problematic as making your own SOAP library is not as much fun if you need to be shipping features involving SOAP at the same time. Sometimes, though, this can lead to building minimal solutions that are easy to maintain. We have practically two little modules for using object storage in Azure. I blogged before about how I implemented Azure pre-signing if you are interested.


The deployment of Phoenix can be as easy as copying the Mix release I already mentioned to the remote server. You can then start it as a systemd service, for instance. While it wasn’t always straightforward to deploy Elixir web applications, it got ridiculously easy recently. Imagine running something like this:

$ mix deps.get --only prod
$ MIX_ENV=prod mix compile
$ npm install --prefix ./assets
$ npm run deploy --prefix ./assets
$ MIX_ENV=prod mix phx.digest
$ MIX_ENV=prod mix release new_phoenix
$ PORT=4000 _build/prod/rel/new_phoenix/bin/new_phoenix start

Of course, you can make a lightweight Docker container too, but maybe you don't even need to. Mix releases are entirely self-contained (even more so than a Java JAR)! Here is how to make them with a little bit of context. The only thing to pay attention to is that they are platform-dependent, so you cannot easily cross-compile them right now.

Although people are drawn to Elixir for its distributed nature, its performance makes it a great platform for running a powerful single server too (which is how the devs at the X-Plane flight simulator run their Elixir backend). Elixir also supports hot deployments, which is kind of cool, although Mix releases do not support this option.


The Elixir (and Phoenix) community is amazing. I always got quick and very helpful answers on Elixir Forum and other places. Elixir is niche, but it's not Crystal or Nim niche. Still, it's not an exception to get answers directly from José Valim. How he can even reply so fast is still beyond me :). Thanks, José!

Podium, Sketch, Bleacher Report, Brex, Heroku, and PepsiCo are famous brands using Elixir. Elixir Companies is a site tracking public Elixir adoption. I am myself working on a not-yet-public project, so I am sure there is more Elixir out there!

If you are blogging about Elixir, join us at BeamBloggers.com. There is also ElixirStatus for community news.

Worth it?

And that’s pretty much it. If you are surprised I didn’t get into OTP, it’s because I didn’t get to do much OTP. It’s certainly great (you reap its benefits just by using Phoenix), but you can use Elixir without writing a lot of OTP yourself.

The certain pros of Elixir are a small language you’ll learn fast, a modern and beautifully documented standard library, robust pattern matching, and functions you can understand without headaches. What I don’t like is the split between string and atom keys in maps (with no equivalent of Rails’ HashWithIndifferentAccess), and I have to admit there are times I miss my instance variables.

Learning Elixir and Phoenix is undoubtedly worth it. I think it’s technologically the best option we have today for building ambitious web applications and soft-realtime systems. It still lacks in a few areas, but nothing that cannot be fixed in the future. And if not for Elixir, then for the BEAM platform (see Caramel).

I also like that Elixir is not just a language for the web. There is Nerves for IoT, and recently we got Nx with a LibTorch backend.

InvoicePrinter 2.1 with Ruby 3 support

Posted by Josef Strzibny on April 05, 2021 12:00 AM

Ruby 3 was released three months ago, so it was time to support it in InvoicePrinter, a pure Ruby library for generating PDF invoices.

InvoicePrinter 2.1 dependencies were upgraded to Prawn 2.4 and Ruby 3, following Ruby 3’s separation of positional and keyword arguments. If you pass a hash to InvoicePrinter::Document or InvoicePrinter::Document::Item, you now need to use a double splat in front of it:
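A sketch of the change (the attributes hash below is made up; the field names mirror InvoicePrinter's documented attributes):

```ruby
# Hypothetical attributes hash; the field names follow InvoicePrinter's
# documented API, but the values are made up.
params = {
  number: 'NO. 198900000001',
  provider_name: 'John White'
}

# Ruby 3 no longer converts a trailing hash into keyword arguments,
# so the hash must be expanded explicitly with a double splat:
#
#   invoice = InvoicePrinter::Document.new(**params)
#
# The same mechanic, shown with a plain method taking keyword arguments:
def build_document(number:, provider_name:)
  { number: number, provider_name: provider_name }
end

document = build_document(**params)
```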


This tells Ruby you are indeed passing a hash for keyword arguments.

Apart from Ruby 3 support, I improved the single-line note. The note field was cut off if it was longer than one line. The issue is now fixed by supporting a multi-line note. Here is how it looks.
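A minimal sketch of such a note (assuming the note field from InvoicePrinter's documented attributes; the wording is made up):

```ruby
# A note with an embedded newline; with multi-line support the text is
# rendered line by line instead of being cut off after the first line.
note = "Thank you for your business.\nPayable within 14 days."

# With the gem installed, it would be passed like any other field:
#
#   invoice = InvoicePrinter::Document.new(note: note, ...)

note_lines = note.split("\n")
```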

Finally, this release removes address fields that got deprecated with a warning in 2.0 release.

Instead of providing addresses in granular fields as:

provider_street: '5th Avenue',
provider_street_number: '1',
provider_postcode: '747 05',
provider_city: 'NYC',

You now have to do it as follows:

provider_address = <<~ADDRESS
  Rolnická 1
  747 05  Opava
ADDRESS

invoice = InvoicePrinter::Document.new(
  number: 'NO. 198900000001',
  provider_name: 'John White',
  provider_lines: provider_address,
  # ...
)

Since the library doesn’t want to be concerned with formatting the address fields, it’s better to support addresses in a more flexible way by having a multiline field.

I released 2.1.0.rc1 for you to try and the final 2.1.0 will follow shortly. If you are missing something in InvoicePrinter, it’s a good time to open a feature request for 2.2 too.