Fedora People

Friday’s Fedora Facts: 2021-14

Posted by Fedora Community Blog on April 09, 2021 08:32 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)! The Final freeze is underway. The F34 Final Go/No-Go meeting is Thursday.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.



Upcoming conference CfP deadlines:

  • elementary Developer Weekend (virtual), 26-27 June, CfP closes ~20 Apr
  • All Things Open (virtual), 17-19 Oct, CfP closes 30 Apr
  • Akademy (virtual), 18-25 June, CfP closes 2 May
  • openSUSE Virtual Conference (virtual), 18-20 June, CfP closes 4 May
  • DevConf.US (virtual), 2-3 Sep, CfP closes 31 May

Help wanted

Upcoming test days

Prioritized Bugs

Bug ID | Component | Status

Upcoming meetings


Fedora Linux 34


Upcoming key schedule milestones:

  • 2021-04-20 — Final release early target
  • 2021-04-27 — Final release target #1


Change tracker bug status. See the ChangeSet page for details of approved changes.


Bug ID | Component | Bug Status | Blocker Status

Fedora Linux 35


Proposal | Type | Status
Reduce dependencies on python3-setuptools | System-Wide | Approved
RPM 4.17 | System-Wide | FESCo #2593
Smaller Container Base Image (remove sssd-client, util-linux, shadow-utils) | Self-Contained | FESCo #2594
Erlang 24 | Self-Contained | Announced
Switching Cyrus Sasl from BerkeleyDB to GDBM | System-Wide | Announced
Debuginfod By Default | Self-Contained | Announced

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.


Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-14 appeared first on Fedora Community Blog.

Release of osbuild 28

Posted by OSBuild Project on April 09, 2021 12:00 AM

We are happy to announce version 28 of osbuild, this time with a large set of fixes and minor additions to the different stages bundled with osbuild. Furthermore, Fedora 35 is now supported as a host system.

Below you can find the official changelog from the osbuild-28 sources. All users are recommended to upgrade!

  • Add a new option to the org.osbuild.qemu assembler that controls the qcow2 format version (qcow2_compat).

  • Add history entries to the layers of OCI archives produced by the org.osbuild.oci-archive stage. This fixes push failures to quay.io.

  • Include only a specific, limited set of xattrs in OCI archives produced by the org.osbuild.oci-archive stage. This is specifically meant to exclude SELinux-related attributes (security.selinux) which are sometimes added even when invoking tar with the --no-selinux option.

  • The package metadata for the org.osbuild.rpm stage is now sorted by package name, to provide a stable sorting of the array independent of the rpm output.

  • Add a new runner for Fedora 35 (org.osbuild.fedora35) which is currently a symlink to the Fedora 30 runner.

  • The org.osbuild.ostree input now uses ostree-output as temporary directory and its description got fixed to reflect that it does indeed support pipeline and source inputs.

  • devcontainer: Include more packages needed for the Python extension and various tools to ease debugging and development in general. Preserve the fish history across container rebuilds.

  • Stage tests now write the produced metadata to /tmp so the actual data can be inspected in case there is a deviation.

  • CI: Start running image tests against 8.4 and execute image_tests directly from osbuild-composer-tests. Base CI images have been updated to Fedora 33.
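For illustration, the new qcow2_compat option from the first item above would be set in the qemu assembler's options. This is only a sketch: everything except the option name itself (the surrounding manifest structure, filename, and values) is an assumption of mine, not taken from the osbuild-28 sources.

```json
{
  "assembler": {
    "name": "org.osbuild.qemu",
    "options": {
      "format": "qcow2",
      "filename": "disk.qcow2",
      "qcow2_compat": "1.1"
    }
  }
}
```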

Contributions from: Achilleas Koutsou, Alexander Todorov, Christian Kellner, David Rheinsberg

— Berlin, 2021-04-08

sevctl available soon in Fedora 34

Posted by Connor Kuehl on April 08, 2021 01:25 PM

I am pleased to announce that sevctl will be available in the Fedora repositories starting with Fedora 34. Fedora is the first distribution to include sevctl in its repositories 🎉.

sevctl is an administrative utility for managing the AMD Secure Encrypted Virtualization (SEV) platform, which is available on AMD’s EPYC processors. It makes many routine AMD SEV tasks quite easy, such as:

  • Generating, exporting, and verifying a certificate chain
  • Displaying information about the SEV platform
  • Resetting the platform’s persistent state

As of this writing, Fedora 34 is entering its final freeze, but sevctl is queued for inclusion once Fedora 34 thaws. sevctl is already available in Fedora Rawhide for immediate use.

Please submit all bug reports, patches, and feature requests to sevctl’s upstream repository on GitHub.

Binary RPMs can be found in the rust-sevctl packaging repo.

Help wanted: program management team

Posted by Fedora Community Blog on April 08, 2021 08:00 AM

I’d love to spend time in different Fedora teams helping them with program management work, but there’s only so much of me to go around. Instead, I’m putting together a program management team. At a high level, the role of the program management team will be two-fold. The main job is to embed in other teams and provide support to them. A secondary role will be to back up some of my duties (like wrangling Changes) when I am out of the office. If you’re interested, fill out the When Is Good survey by 15 April, or read on for more information.

About the team

You can read more about the team on Fedora Docs, but some of the duties I see team members providing include:

  • Coordination with other Fedora teams (e.g. Websites, design)
  • Consulting on team process development and improvement
  • Tracking development plans against the Fedora schedule
  • Issue triage and management
  • Shepherding Change proposals and similar requests

Since this is a new team, we still have a lot to figure out. As we go, we’ll figure out what works and adjust to match.

About you

You don’t need to be an expert to join the team. I’d like everyone to have some experience with either contributing to Fedora or project/program management. If you’re lacking in one, we can help fill in the gaps. You should be well-organized (or at least able to fake it) and have 3-5 hours a week available to work with one or more teams in Fedora.

How to join

Fill out the When Is Good survey by 15 April to indicate your availability for a kickoff meeting. This will be a video meeting so that we can have a high-bandwidth conversation. I’m looking for four or five people to start, but if I get more interest, we’ll figure out how to scale. If you’re not sure if this is something you want to do, come to the meeting anyway. You can always decide to not participate.

How to get help from this team

If you’re on another Fedora team and would like to get support from the program management team, great! We don’t have a mechanism for requesting help yet, but that will be coming soon.


lavapipe reporting Vulkan 1.1 (not compliant)

Posted by Dave Airlie on April 07, 2021 08:22 PM

The lavapipe vulkan software rasterizer in Mesa is now reporting Vulkan 1.1 support.

It passes all the CTS tests for the new 1.1 features, but it still fails the same 1.0 tests as before, so it isn't that close to conformant (line and point rendering are the main problem areas).

A bunch of the 1.2 features are also implemented, so that might not be too far away, though 16-bit shader ops and depth resolve are looking a bit tricky.

If there are any specific features you want to see, or any crazy places or ideas for using lavapipe out there, please either file a GitLab issue or hit me up on Twitter @DaveAirlie.

Using network bound disk encryption with Stratis

Posted by Fedora Magazine on April 07, 2021 08:00 AM

In an environment with many encrypted disks, unlocking them all is a difficult task. Network bound disk encryption (NBDE) helps automate the process of unlocking Stratis volumes. This is a critical requirement in large environments. Stratis version 2.1 added support for encryption, which was introduced in the article “Getting started with Stratis encryption.” Stratis version 2.3 recently introduced support for Network Bound Disk Encryption (NBDE) when using encrypted Stratis pools, which is the topic of this article.

The Stratis website describes Stratis as an “easy to use local storage management for Linux.” The short video “Managing Storage With Stratis” gives a quick demonstration of the basics. The video was recorded on a Red Hat Enterprise Linux 8 system; however, the concepts shown in the video also apply to Stratis in Fedora Linux.


This article assumes you are familiar with Stratis, and also Stratis pool encryption. If you aren’t familiar with these topics, refer to this article and the Stratis overview video previously mentioned.

NBDE requires Stratis 2.3 or later. The examples in this article use a pre-release version of Fedora Linux 34. The Fedora Linux 34 final release will include Stratis 2.3.

Overview of network bound disk encryption (NBDE)

One of the main challenges of encrypting storage is having a secure method to unlock the storage again after a system reboot. In large environments, typing in the encryption passphrase manually doesn’t scale well. NBDE addresses this and allows for encrypted storage to be unlocked in an automated manner.

At a high level, NBDE requires a Tang server in the environment. Client systems (using Clevis Pin) can automatically decrypt storage as long as they can establish a network connection to the Tang server. If there is no network connectivity to the Tang server, the storage would have to be decrypted manually.

The idea behind this is that the Tang server would only be available on an internal network, thus if the encrypted device is lost or stolen, it would no longer have access to the internal network to connect to the Tang server, therefore would not be automatically decrypted.

For more information on Tang and Clevis, see the man pages (man tang, man clevis) , the Tang GitHub page, and the Clevis GitHub page.

Setting up the Tang server

This example uses another Fedora Linux system as the Tang server with a hostname of tang-server. Start by installing the tang package:

dnf install tang

Then enable and start the tangd.socket with systemctl:

systemctl enable tangd.socket --now

Tang uses TCP port 80, so you also need to open that in the firewall:

firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --add-port=80/tcp

Finally, run tang-show-keys to display the output signing key thumbprint. You’ll need this later.

# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s

Creating the encrypted Stratis Pool

The previous article on Stratis encryption goes over how to set up an encrypted Stratis pool in detail, so this article won’t cover that in depth.

The first step is capturing a key that will be used to decrypt the Stratis pool. Even when using NBDE, you need to set this, as it can be used to manually unlock the pool in the event that the NBDE server is unreachable. Capture the pool1 key with the following command:

# stratis key set --capture-key pool1key
Enter key data followed by the return key:

Then create an encrypted Stratis pool (using the pool1key just created) named pool1 on the /dev/vdb device:

# stratis pool create --key-desc pool1key pool1 /dev/vdb

Next, create a filesystem in this Stratis pool named filesystem1, create a mount point, mount the filesystem, and create a testfile in it:

# stratis filesystem create pool1 filesystem1
# mkdir /filesystem1
# mount /dev/stratis/pool1/filesystem1 /filesystem1
# cd /filesystem1
# echo "this is a test file" > testfile

Binding the Stratis pool to the Tang server

At this point, we have the encrypted Stratis pool created, and also have a filesystem created in the pool. The next step is to bind your Stratis pool to the Tang server that you just set up. Do this with the stratis pool bind nbde command.

When you make the Tang binding, you need to pass several parameters to the command:

  • the pool name (in this example, pool1)
  • the key descriptor name (in this example, pool1key)
  • the Tang server name (in this example, http://tang-server)

Recall that on the Tang server, you previously ran tang-show-keys, which showed that the Tang output signing key thumbprint is l3fZGUCmnvKQF_OA6VZF9jf8z2s. In addition to the previous parameters, you either need to pass this thumbprint with the --thumbprint l3fZGUCmnvKQF_OA6VZF9jf8z2s parameter, or skip verification of the thumbprint with the --trust-url parameter.

It is more secure to use the --thumbprint parameter. For example:

# stratis pool bind nbde pool1 pool1key http://tang-server --thumbprint l3fZGUCmnvKQF_OA6VZF9jf8z2s

Unlocking the Stratis Pool with NBDE

Next reboot the host, and validate that you can unlock the Stratis pool with NBDE, without requiring the use of the key passphrase. After rebooting the host, the pool is no longer available:

# stratis pool list
Name Total Physical Properties

To unlock the pool using NBDE, run the following command:

# stratis pool unlock clevis

Note that you did not need to use the key passphrase. This command could be automated to run during the system boot up.
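One way to automate it is a small systemd oneshot unit that runs the unlock once the network is up. The following is a minimal sketch under assumptions of mine: the unit name and the exact ordering are illustrative, not something Stratis ships.

```ini
# /etc/systemd/system/stratis-nbde-unlock.service  (hypothetical unit name)
[Unit]
Description=Unlock Stratis pools via NBDE
Wants=network-online.target
After=network-online.target stratisd.service

[Service]
Type=oneshot
ExecStart=/usr/bin/stratis pool unlock clevis

[Install]
WantedBy=multi-user.target
```

After enabling it with systemctl enable stratis-nbde-unlock.service, any mounts of the pool’s filesystems would need to be ordered after this unit.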

At this point, the pool is now available:

# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 583.65 MiB / 4.41 GiB ~Ca, Cr

You can mount the filesystem and access the file that was previously created:

# mount /dev/stratis/pool1/filesystem1 /filesystem1/
# cat /filesystem1/testfile
this is a test file

Rotating Tang server keys

Best practices recommend that you periodically rotate the Tang server keys and update the Stratis client servers to use the new Tang keys.

To generate new Tang keys, start by logging in to your Tang server and looking at the current status of the /var/db/tang directory. Then, run the tang-show-keys command:

# ls -al /var/db/tang
total 8
drwx------. 1 tang tang 124 Mar 15 15:51 .
drwxr-xr-x. 1 root root 16 Mar 15 15:48 ..
-rw-r--r--. 1 tang tang 361 Mar 15 15:51 hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
-rw-r--r--. 1 tang tang 367 Mar 15 15:51 l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk
# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s

To generate new keys, run tangd-keygen and point it to the /var/db/tang directory:

# /usr/libexec/tangd-keygen /var/db/tang

If you look at the /var/db/tang directory again, you will see two new files:

# ls -al /var/db/tang
total 16
drwx------. 1 tang tang 248 Mar 22 10:41 .
drwxr-xr-x. 1 root root 16 Mar 15 15:48 ..
-rw-r--r--. 1 tang tang 361 Mar 15 15:51 hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
-rw-r--r--. 1 root root 354 Mar 22 10:41 iyG5HcF01zaPjaGY6L_3WaslJ_E.jwk
-rw-r--r--. 1 root root 349 Mar 22 10:41 jHxerkqARY1Ww_H_8YjQVZ5OHao.jwk
-rw-r--r--. 1 tang tang 367 Mar 15 15:51 l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk

And if you run tang-show-keys, it will show the keys being advertised by Tang:

# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s
iyG5HcF01zaPjaGY6L_3WaslJ_E

You can prevent the old key (starting with l3fZ) from being advertised by renaming the two original files to be hidden files, starting with a period. With this method, the old key will no longer be advertised, however it will still be usable by any existing clients that haven’t been updated to use the new key. Once all clients have been updated to use the new key, these old key files can be deleted.

# cd /var/db/tang
# mv hbjJEDXy8G8wynMPqiq8F47nJwo.jwk   .hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
# mv l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk   .l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk

At this point, if you run tang-show-keys again, only the new key is being advertised by Tang:

# tang-show-keys
iyG5HcF01zaPjaGY6L_3WaslJ_E

Next, switch over to your Stratis system and update it to use the new Tang key. Stratis supports doing this while the filesystem(s) are online.

First, unbind the pool:

# stratis pool unbind pool1

Next, set the key with the original passphrase used when the encrypted pool was created:

# stratis key set --capture-key pool1key
Enter key data followed by the return key:

Finally, bind the pool to the Tang server with the updated key thumbprint:

# stratis pool bind nbde pool1 pool1key http://tang-server --thumbprint iyG5HcF01zaPjaGY6L_3WaslJ_E

The Stratis system is now configured to use the updated Tang key. Once any other client systems using the old Tang key have been updated, the two original key files that were renamed to hidden files in the /var/db/tang directory on the Tang server can be backed up and deleted.

What if the Tang server is unavailable?

Next, shut down the Tang server to simulate it being unavailable, then reboot the Stratis system.

Again, after the reboot, the Stratis pool is not available:

# stratis pool list
Name Total Physical Properties

If you try to unlock it with NBDE, this fails because the Tang server is unavailable:

# stratis pool unlock clevis
Execution failed:
An iterative command generated one or more errors: The operation 'unlock' on a resource of type pool failed. The following errors occurred:
Partial action "unlock" failed for pool with UUID 4d62f840f2bb4ec9ab53a44b49da3f48: Cryptsetup error: Failed with error: Error: Command failed: cmd: "clevis" "luks" "unlock" "-d" "/dev/vdb" "-n" "stratis-1-private-42142fedcb4c47cea2e2b873c08fcf63-crypt", exit reason: 1 stdout: stderr: /dev/vdb could not be opened.

At this point, without the Tang server being reachable, the only option to unlock the pool is to use the original key passphrase:

# stratis key set --capture-key pool1key
Enter key data followed by the return key:

You can then unlock the pool using the key:

# stratis pool unlock keyring

Next, verify the pool was successfully unlocked:

# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 583.65 MiB / 4.41 GiB ~Ca, Cr

Changes in technologies supported by syslog-ng: Python 2, CentOS 6 & Co.

Posted by Peter Czanik on April 06, 2021 11:39 AM

Technology is continuously evolving. There are regular changes in platforms running syslog-ng: old technologies disappear, and new technologies are introduced. While we try to provide stability and continuity to our users, we also need to adapt. Python 2 reached its end of life a year ago, CentOS 6 in November 2020. Using Java-based drivers has been problematic for many, so they were mostly replaced with native implementations.

From this blog you can learn about recent changes affecting syslog-ng development and packaging.

Python 2

Python 2 officially reached its end of life on 1 January 2020, well over a year ago. Still, until recently, compatibility with Python 2 was tested continuously by developers. This testing was disabled when syslog-ng 3.31 was released. This means that if anything changes in Python-related code in syslog-ng, there is no guarantee that it will still work with Python 2.

Packaging changes started even earlier. Distribution packages switched from Python 2 to Python 3 years ago, as did the unofficial syslog-ng packages for openSUSE/SLES and Fedora/RHEL. While requests for Python 3 support were regular, nobody asked for Python 2 after the switch. The last place supporting Python 2 as an alternative was DBLD, syslog-ng’s own container-based build tool for developers. Support there was also dropped for Fedora/RHEL, right before the 3.31 release.

CentOS 6

RHEL 6/CentOS 6 had been the most popular syslog-ng platform for many years. Many users liked it due to the lack of systemd. But all good things come to an end, and RHEL 6 (and thus CentOS 6) reached its end of life in November 2020.

Unofficial syslog-ng RPM packages for the platform were maintained on the Copr build service. Copr’s policy is to remove packages 180 days after a platform reaches its end of life (EoL). I do not know the exact date, but around the end of April all RHEL 6/CentOS 6 repositories will be removed from Copr.

Note: if you still need those packages somewhere, create a local mirror for yourself. I do not have a local backup or a build and test environment anymore.

CentOS 7 ARM

RHEL 7 for ARM also reached its EoL in January. While CentOS 7 keeps supporting ARM, the Copr build service removed support for this platform and will remove related repositories, just as it did for CentOS 6. If you need those packages, you have time till the end of June to create a local mirror of them.

Java-based destination drivers

A long time ago, using Java to extend syslog-ng was the best direction to go. Client libraries for popular technologies were unavailable in C, but they were readily available in Java, for example for Elasticsearch and Kafka. Unfortunately, using Java also has several drawbacks, like increased resource usage and difficult configuration. Also, Java-based destination drivers could not be packaged as part of Linux distributions. So, as soon as native C-based clients became available, people switched to them. Only HDFS is not supported by any other means, but nobody seems to use it anymore – at least in the open source world.

What does it mean for you? Java-based drivers are still regularly tested by the syslog-ng development team. On the other hand, even the unofficial openSUSE/SLES and Fedora/RHEL packages dropped support for them. Java support is still there, as some people developed their own Java-based drivers. If you really need these drivers, use syslog-ng 3.27 from the unofficial RPM repositories or compile syslog-ng yourself. Unofficial Debian/Ubuntu packages still include Java-based drivers and on FreeBSD you can still build them in the sysutils/syslog-ng port.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @Pczanik.

Summary from the Diversity & Inclusion Team Meetup

Posted by Fedora Community Blog on April 06, 2021 08:00 AM

Fedora’s Diversity & Inclusion Team held a virtual meetup on Sunday March 21st, 2021. We had more than 20 attendees, with three main planning sessions and a Storytelling Workshop. The team had a successful event connecting, processing, and looking towards the future. The Storytelling Workshop was a fun way to unwind after a day of meetings and do something different as a team.

The team decided that Justin Flory will step down as D&I Advisor to the Council at the end of this release cycle and Vipul Siddharth will step into the position. We discussed the vision we have for the team moving forward. One of the main sentiments was that we would like to involve the “techy” or coding teams of the Fedora Project more in our efforts and events. To further that involvement, we are discussing moving from Fedora Women’s Day (FWD) to Fedora Week of Diversity (also FWD), which would still include the former Fedora Women’s Day. Moving to a Fedora Week of Diversity would provide more opportunities to highlight all aspects of diversity, as well as give the team more creative freedom with the event.

The Diversity & Inclusion Team is always looking for new contributors. We are in a planning and formulations phase, so feel free to join us on #fedora-diversity on IRC if you are interested in becoming involved!


Meetup Readout

Session: D&I Team Debrief

General Notes

  • D&I: This is still really new work in the scope of history, especially in the software & tech setting
    • draining work
    • resources may not be there, or not known how to use
    • self care is important
  • It would be great to have check ins
    • Main goal would be to stay connected and provide support
    • Casual D&I meetups, maybe monthly?
  • People have personal ties to Fedora and why it is important to them. Why do you Fedora?
    • Brings confidence
    • It is always there, through good times and bad
    • Friendship, fellowship

D&I Team History

  • Starting with a D&I rep to the council and grew from there
  • Diversity panel
    • talked about experiences
    • some adversity at the panel
  • FWD was one of the first things to be worked on
    • FWD is exciting because it happens all over the world, fosters inclusion, and is a decentralized way to empower communities
    • FWD 2020 virtual was a success, but it was mostly on Marie to organize
  • This team was always volunteer driven (no full time Fedorans)
    • When curve balls hit, things fell off the track
    • Council D&I rep was more work than is visible
  • Looking for support from the Council/external sources
    • Help to prioritize things
    • Help with strategy/direction

What did we learn?

  • Burnout
    • Sustainability = onboarding new folks was lacking
  • Being connected is important. We are a community, not just busy bees!
    • It is hard to pick up doing things quickly if we are not feeling connected.
    • We would like to start monthly informal meet-ups where we get together and hang out, not just work on Fedora.
  • Newcomers/onboarding
    • Need easyfix tickets for newcomers to get involved easily, right now it is hard to discern what to do when interested in the D&I team.
    • Too many tasks to choose from, hard to decipher where to start
      • People experience choice paralysis because fedora is huge
      • Created a limited number of tasks for newcomers to choose from

What did we do well?

  • Mentoring interns (mostly as individuals, but there is a lot of overlap)
  • Fedora Women’s Day
    • Six years running
    • Participation across four continents
  • Outreachy
    • Sponsoring Outreachy internships from D&I budget allocation
    • Coordinating and supporting
  • Happiness Packets
  • Fedora Appreciation Week
  • Friendship & support to each other
  • Representation
    • We’ve seen an increase in participation of women at Flock
  • Unconscious bias & imposter syndrome sessions

Session: D&I team: Goals, focus, future

General notes

  • Marie is here to help run meetings and support with project management
  • Justin is stepping down as D&I rep, to be replaced by Vipul

What would we like to see

  • Exposure to a broader swath of Fedora
    • How can we involve the rest of the community?
    • We need to promote events better
    • Reach out more through various platforms, to let folks know that we are here
  • The mentorship role in our team could be better aligned with Join SIG/ambassador mentor role
    • Something could be included in the new Join tickets
  • Hybrid events moving forward
    • Team focuses mostly on virtual events, folks can also do local events
  • Matrix/Element will also be improved
    • How can we capitalize on the new chat system?
      • Events, socials, video/audio, integration into other platforms
  • Articles/events/content that address mental health/marginalized folks/empowering & encouraging folks & underrepresented people
  • More docs (D&I) on how to continue contributing to (D&I and Fedora in general)


  • It is important for us to stay focused as a team.
    • We need to prioritize. Tackle limited things at a time to achieve progress.
  • Use our current infrastructure as best we can
  • Categories of improvement/work
    • Docs (D&I doc) (1)
      • Think more strategically, what do we want to put up here?
      • What do we do?
      • How to start (“fedora-join links ?”)
      • How to help?
        • How to continue to help? (Pointing to “fedora-join ?”)
    • Promotion/Marketing/Content (2)
    • Events (3)
    • Outreach/Support/Resource Library (4)

Session: Future of FWD

General Notes

  • “Fedora Women’s Day” -> “Fedora Week of Diversity”
  • Should include FWD local/virtually
    • Two time-zone events (EU/US) if it becomes too big, so we can accommodate everyone
  • In-person events could happen that week, and at the end there could be a virtual event
    • do local events, and then come back and connect with the community
  • Think about how to get techy people involved in the event
  • Let’s think about intersectionality, how can we feature that, how can we engage in a technical/creative community
    • Can this be a longer term process? Involving folks in activities before and after
  • We can include Women’s Days and other “days”, if we have the proper representation

Outcomes (vision)

  • Networking
  • Involving the broader Fedora community in the event
  • Inspiration

Brainstorm Session

  • Week of creation
  • Diversity hackathon
  • Week with a session a day with a guest facilitating conversations related to D&I
  • Fedora Stories: Building on contributor stories? We never found a permanent home for that.
  • FWD/D&I in podcast and make talks (non-tech stuff)
  • Fedora Zine takeover or make a bunch of pages for the zine
  • Mixing the idea of building diversity themed tech/craft projects
    • Featured during the virtual component. This could be a great way to get people to network and engage with one another.
    • Can help with education & incorporating & storytelling.


Querying hostnames from beaker

Posted by Adam Young on April 05, 2021 09:11 PM

If you have requested a single host from beaker, the following one-liner will tell you its hostname.

bkr job-results   $( bkr job-list  -o $USER  --unfinished | jq -r  ".[]" )   | xpath -q -e string\(/job/recipeSet/recipe/roles/role/system/@value\)

This requires jq and xpath, as well as the beaker command line packages.
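If you would rather do the XML extraction without the perl xpath tool, the same query can be written with Python's standard library. A small sketch follows; the sample document is a trimmed, hypothetical job-results file shaped to match the XPath above, not real beaker output.

```python
import xml.etree.ElementTree as ET

def hostname_from_results(xml_text):
    """Return the system hostname from a beaker job-results XML document."""
    root = ET.fromstring(xml_text)
    # Same path as the XPath expression: /job/recipeSet/recipe/roles/role/system/@value
    system = root.find("./recipeSet/recipe/roles/role/system")
    return system.get("value") if system is not None else ""

# Trimmed, hypothetical results document for illustration:
sample = """<job>
  <recipeSet>
    <recipe>
      <roles><role><system value="host01.example.com"/></role></roles>
    </recipe>
  </recipeSet>
</job>"""

print(hostname_from_results(sample))  # host01.example.com
```

You could pipe the output of bkr job-results into a script like this instead of the xpath invocation.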

For me on Fedora 33 the packages are:

  • perl-XML-XPath-1.44-7.fc33.noarch
  • jq-1.6-5.fc33.x86_64
  • python3-beaker-1.10.0-9.fc33.noarch
  • beaker-redhat-0.2.1-2.fc33eng.noarch
  • beaker-common-28.2-1.fc33.noarch
  • beaker-client-28.2-1.fc33.noarch

Contribute at Fedora Linux 34 Upgrade, Audio, Virtualization, IoT, and Bootloader test days

Posted by Fedora Magazine on April 05, 2021 08:50 PM

Fedora test days are events where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are five upcoming test events in the next week.

  • Wednesday April 07, is to test the Upgrade from Fedora 32 and 33 to Fedora 34.
  • Friday April 09, is to test Audio changes made by Pipewire implementation.
  • Tuesday April 13, is to test the Virtualization in Fedora 34.
  • Monday, April 12 to Friday, April 16 is to test Fedora IoT 34.
  • Monday, April 12 and Tuesday, April 13 is to test the bootloader.

Come and test with us to make the upcoming Fedora Linux 34 even better. Read more below on how to do it.

Upgrade test day

As we come closer to the Fedora Linux 34 release date, it’s time to test upgrades. This release has a lot of changes, so it is essential that we test the graphical upgrade methods as well as the command line. As a part of this test day, we will test upgrading from fully updated F32 and F33 installations to F34 for all architectures (x86_64, ARM, aarch64) and variants (Workstation, Cloud, Server, Silverblue, IoT).
This test day will happen on Wednesday, April 07

Audio test day

There is a recent proposal to replace the PulseAudio daemon with a functionally compatible implementation based on PipeWire. This means that all existing clients using the PulseAudio client library will continue to work as before, as well as applications shipped as Flatpak. The test day is to test that everything works as expected.
This will occur on Friday, April 09

Virtualization test day

We are going to test all forms of virtualization possible in Fedora. The test day will focus on testing Fedora or your favorite distro inside a bare metal implementation of Fedora running Boxes, KVM, VirtualBox and whatever you have. The general features of installing the OS and working with it are outlined in the test cases which you will find on the results page.
This will be happening on Tuesday, April 13.

IoT test week

For this test week, the focus is all-around; test all the bits that come in a Fedora IoT release as well as validate different hardware. This includes:

  • Basic installation to different media
  • Installing in a VM
  • rpm-ostree upgrades, layering, rebasing
  • Basic container manipulation with Podman.

We welcome all different types of hardware, but have a specific list of target hardware for convenience. This will be happening the week of Monday, April 12 to Friday, April 16.

Bootloader Test Day

This test day will focus on ensuring that the new shim and GRUB work with BIOS and EFI/UEFI with Secure Boot enabled. We would like the community to test it on as many types of off-the-shelf hardware as possible. The test day will happen Monday, April 12 and Tuesday, April 13. More information is available on the wiki page.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is in the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.

[Ed. note: Updated at 1920 UTC on 6 April to add IoT test day information; Updated at 1905 UTC on 7 April to add bootloader test day information]

Running the UniFi Network Controller in a Docker Container

Posted by Jon Chiappetta on April 05, 2021 07:44 PM

If you need a more generalized and containerized way to run the UniFi Network Controller and you don’t want it running on your main system, you can use a trusted tool like Docker to achieve this task!

I made a new repo with some Dockerfile-supported scripts which will pull in the latest Debian container and customize a new image from scratch to run MongoDB + Java 8. This is useful if you don’t particularly trust the pre-made, public Docker containers that are already out there!

git clone && cd dockerfi/ — The build and run commands are listed in the main script file (once the container has been started, just browse to https:// and restore from backup). The UI version is statically set to the previous stable release, 6.0.45!

Note: If you need to help layer 3 out: set-inform http://192.168.X.Y:8080/inform


Edit: I made a small YouTube video running the script:

<figure class="wp-block-embed is-type-rich is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<iframe allowfullscreen="true" class="youtube-player" height="372" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation" src="https://www.youtube.com/embed/hNBjb2b1gNQ?version=3&amp;rel=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;fs=1&amp;hl=en&amp;autohide=2&amp;wmode=transparent" style="border:0;" width="660"></iframe>

Fedora Linux 34 Upgrade Test Day 2021-04-07

Posted by Fedora Community Blog on April 05, 2021 07:21 PM
Fedora 34 Upgrade test day

Wednesday 7 April is the Fedora Linux 34 Upgrade Test Day! As part of the preparation for Fedora Linux 34, we need your help to test if everything runs smoothly!

Why Upgrade Test Day?

We’re approaching the release date for Fedora Linux 34. Most users will be upgrading to Fedora Linux 34, and this test day will help us understand if everything is working perfectly. This test day will cover both GNOME graphical upgrades and upgrades done using DNF.

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles!

The post Fedora Linux 34 Upgrade Test Day 2021-04-07 appeared first on Fedora Community Blog.

crocus: gallium for the gen4-7 generation

Posted by Dave Airlie on April 05, 2021 02:38 AM

The crocus project was recently mentioned in a phoronix article. The article covered most of the background for the project.

Crocus is a gallium driver to cover the gen4-gen7 families of Intel GPUs. The basic GPU list is 965, GM45, Ironlake, Sandybridge, Ivybridge and Haswell, with some variants thrown in. This hardware currently uses the classic Intel 965 driver. This hardware is all gallium capable, and since we'd like to put the classic drivers out to pasture and remove support for the old infrastructure, it would be nice to have these generations supported by a modern gallium driver.

The project was initiated by Ilia Mirkin last year, and I've expended some time in small bursts to moving it forward. There have been some other small contributions from the community. The basis of the project is a fork of the iris driver with the old relocation based batchbuffer and state management added back in. I started my focus mostly on the older gen4/5 hardware since it was simpler and only supported GL 2.1 in the current drivers. I've tried to cleanup support for Ivybridge along the way.

The current status of the driver is in my crocus branch.

Ironlake is the best supported: it runs openarena and supertuxkart, piglit shows only around a 100-test delta vs i965 (mostly edgeflag related), and there is only one missing feature (vertex shader push constants).

Ivybridge just stopped hanging on the second batch submission, and glxgears runs on it. Openarena starts to the menu but misrenders, and a piglit run completes with some GPU hangs and a quite large delta. I expect IVB to move faster now that I've solved the worst hang.

Haswell runs glxgears as well.

I think once I take a closer look at Ivybridge/Haswell and can get Ilia (or anyone else) to do some rudimentary testing on Sandybridge, I will start taking a closer look at upstreaming it into Mesa proper.

Episode 265 – The lies closed source can tell, open source can’t

Posted by Josh Bressers on April 05, 2021 12:01 AM

Josh and Kurt talk about the PHP backdoor and the Ubiquity whistleblower. The key takeaway is to note how an open source project cannot cover up an incident, but closed source can and will cover up damaging information.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2407-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_265_The_lies_closed_source_can_tell_open_source_cant.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_265_The_lies_closed_source_can_tell_open_source_cant.mp3</audio>

Show Notes

Elixir and Phoenix after two years

Posted by Josef Strzibny on April 05, 2021 12:00 AM

Thoughts on the Elixir language and its Phoenix framework after two years of professional work.

I am a seasoned web developer (working primarily with Ruby on Rails before) and someone who got an opportunity to work on a commercial Elixir project. During the past 2 years, I wrote a lot of Elixir, which I had to learn from scratch. I always found this kind of personal post interesting, so I figured I would write one for you.


If you haven’t heard about Elixir yet, I recommend watching the Elixir Documentary featuring the creator José Valim for a start. I don’t remember exactly how I found out about Elixir, but most likely from some Ruby community news. I lurked around Elixir space, read many exciting blog posts, and was generally impressed. What drew me to Elixir? While many good things can be said for Elixir, I liked the idea of the preemptive scheduler and the per-process garbage collector. Why?

The preemptiveness of Beam (the Erlang Virtual Machine) means that the system behaves reasonably under high load. That’s not a small thing. Being able to connect with a remote shell to your running system and still operate it despite the fact that it’s under 100% load is quite something. A per-process GC then means that you don’t run GC most of the time while processing web requests. Both give you very low latency. If you want to know what I am talking about, go and watch the excellent video by Saša Jurić, The Soul of Erlang and Elixir. It’s the best video out there to realize what Beam is about.

Despite my interest, though, I never actually went and wrote any Elixir. I even told myself that I would most likely pass on Elixir. The problems at hand seemed solvable by Ruby/Rails, and forcing oneself to learn a language without commercial adoption is difficult. To my surprise, one Elixir project appeared out of nowhere in Prague, where I stayed at the time.

I was working on my book full-time, and without having any job per se I accepted :). The project itself is not public yet, so while I would love to tell you more about it, you will still have to wait for its public launch.


On the surface, Elixir feels Ruby-like, but soon you’ll realize it’s very different. It’s a strongly-typed, dynamic, but compiled language. Overall it’s very well designed and features a modern standard library.

Here are Elixir basic types:

iex> 1              # integer
iex> 0x1F           # integer
iex> 1.0            # float
iex> true           # boolean
iex> :atom          # atom / symbol
iex> "elixir"       # string
iex> [1, 2, 3]      # list
iex> [{:atom, "value"}, {:atom, "value2"}] # keyword list
iex> {1, 2, 3}      # tuple
iex> ~D[2021-03-30] # sigil
iex> ~r/^regex$/    # sigil

As you can see, there are no arrays. Just linked lists and the quite special keyword lists. We have symbols like in Ruby (with the same problems of mixing them with strings for key access) and tuples that get used a lot to return errors (:ok vs {:error, :name}). I love how tuples make the flow of returning errors standardized (even though it’s not enforced in any way).

Then there are maps (kind of Ruby’s Hash):

iex> map = %{a: 1, b: 2}
%{a: 1, b: 2}
iex> map[:a]
1
iex> %{map | a: 3}
%{a: 3, b: 2}

And named structs:

iex> defmodule User do
...>   defstruct name: "John", age: 27
...> end

Structs work similarly to maps because they are basically a thin wrapper on top of them.

We can use typespecs to add typing annotations for structs and function definitions, but they are limited: the Elixir compiler won’t enforce them. Still, they help with documentation, and their syntax is actually nice:

defmodule StringHelpers do
  @type word() :: String.t()

  @spec long_word?(word()) :: boolean()
  def long_word?(word) when is_binary(word) do
    String.length(word) > 8
  end
end

Arguably, we get some of the best pattern matching out there. You can pattern match everything all the time. Thanks to pattern matching, you also almost don’t need static typing. Ruby is getting there as well, but could never really match the wholesome pattern matching experience of Elixir, which was designed around it from the beginning. You pattern match in function definitions on which arguments you accept, in case statements, and in your regular code.

I really like pattern matching combined with function overloading:

def add(nil, nil), do: {:error, :cannot_be_nil}
def add(_x, nil), do: {:error, :cannot_be_nil}
def add(nil, _y), do: {:error, :cannot_be_nil}
def add(x, y), do: x + y

We could also pattern match on structs or use guard clauses:

def add(nil, nil), do: {:error, :cannot_be_nil}
def add(x, y) when is_integer(x) and is_integer(y) do
  x + y
end
def add(_x, _y), do: {:error, :has_to_be_integer}

You can also make your own guards with defguard/1 so guards can be pretty flexible.

Elixir is not an object-oriented language. We practically only write modules and functions. This helps tremendously in understanding code. No self. Just data in and out of functions, and composition with pipes. Unfortunately, there is no early return, which could sometimes be useful.

Standard library

The standard library is excellent and well documented. It feels modern because it’s modern. If you tried Elixir before, you might remember having to use external libraries for basic calendaring, but that’s the past. It does not try to implement everything, as the philosophy is that you can also rely on the Erlang standard library. Examples of that are the functions to work with ETS (Erlang Term Storage) and the rand and zip modules.

Calling Erlang comes without a performance penalty, and when I encounter an Erlang call, it does not even feel weird. All in all, it feels clean and well designed, especially compared to Ruby, which keeps a lot of legacy cruft in the standard library.

ExDoc might be the first impressive thing you get to see in the Elixir world. Just go on and browse the Elixir docs. Beautifully designed and featuring nice search, version switching, and day and night modes. I love it. And as for the code documentation itself, Elixir is amazing. So are the docs for the main libraries (Phoenix, Absinthe). Some not-so-common ones could use help, though.


Elixir’s tooling is one of the best out there. Outside static type checking or editor support, that is. You get Mix which you use as a single interface for all the tasks you might want to do around a given project. From starting and compiling a project, managing dependencies, running custom tasks (like Rake from Ruby) to making releases for deployment. There is a standardized mix format to format your code as well:

$ mix new mix_project && cd mix_project
$ mix deps.get
$ mix deps.compile
$ mix test
$ mix format
$ mix release

A little annoying is Erlang’s build tool rebar3, which you will use indirectly and which can cause weird compilation errors:

==> myapp
** (Mix) Could not compile dependency :telemetry, "/home/strzibny/.mix/rebar3 bare compile --paths="/home/strzibny/Projects/digi/backend/_build/dev/lib/*/ebin"" command failed. You can recompile this dependency with "mix deps.compile telemetry", update it with "mix deps.update telemetry" or clean it with "mix deps.clean telemetry"

Luckily the helpful messages will guide you to fix it:

$ mix deps.get telemetry
Resolving Hex dependencies...
Dependency resolution completed:
$ mix deps.compile telemetry
===> Compiling telemetry

The question is, why did it have to fail the first time?

Moving on from Mix, you’ll get to use the very nice IEx shell that I wrote about in detail already. My favorite things about IEx are the easy recompilation of the project:

iex(1)> recompile

And the easy and native way to set breakpoints:

iex(1)> break!(MyModule.my_func/1)

The only annoyance comes from Elixir data types and how they work. Inspecting lists requires this:

iex(3)> inspect [27, 35, 51], charlists: :as_lists

Also, the Ruby IRB’s recent multiline support would be highly appreciated.

And there is more! Beam also gives you a debugger and an observer. To start Debugger:

iex(1)> :debugger.start()


Image borrowed from the official page on debugging.

And Observer:

iex(1)> :observer.start()

They are both graphical.

Debugger’s function is clear; Observer helps to oversee the processes and supervision trees, as the Erlang VM is based on the actor model with lightweight supervised processes. Coming from Ruby, I also like how the compiler catches a bunch of errors before you get to run your program. Then we have Dialyzer, which can catch a ton of stuff that’s wrong, including the optional types from typespecs. But it’s far from perfect (both in function and speed), and so many people don’t run it.

Most developers seek a great IDE or editor integration. I am using Sublime Text together with Elixir language server, and I documented the setup before. There is also a good plugin for IntelliJ IDEA that might be the best you can get right now. Elixir is not Java, but many nice things work.

The only real trouble for me is that my setup is quite resource-hungry. So while it is super helpful, I do tend to disable it at times. In general, I would say the editor support is somewhat on par with Ruby, but I also believe Elixir’s design allows for great tools; we just don’t have them yet.


Testing Elixir code is pretty nice. I like that everyone uses ExUnit. One cool thing is doctests:

# Test
defmodule CurrencyConversionTest do
  @moduledoc false

  use ExUnit.Case, async: true

  doctest CurrencyConversion
end

# Module
defmodule CurrencyConversion do
  @doc """
  Convert from currency A to B.

  ### Example

      iex> convert(Money.new(7_00, :CHF), :USD)
      %Money{amount: 7_03, currency: :USD}
  """

The above documentation’s example will be run as part of the test suite. Sweet!

The thing that takes getting used to, coming from Rails, is mocking. While you might like the end result, it certainly is more tedious to write. This is because you cannot just override anything like in the Ruby world. When I use the Mox library, I usually have to:

  • Write a behavior for my module (something like an interface)
  • Use this behavior for my real module and my new stub (that will return a happy path)
  • Register the stub with Mox
  • Use configuration to set the right module for dev/production and testing

That way, you can easily test a default response and also use Mox to return something else for each test (such as an error response). I have a post explaining that.

The language nature of modules and functions ensures that your testing is straightforward, and multicore support ensures your test runs really really fast. The downside to a fast test suite is that you have to compile it first. So do not necessarily expect fast tests for your projects in CI. You will, however, see a considerable improvement over the Rails test suites once they get big.


Phoenix is the go-to web framework for Elixir. It’s neither Rails in scope nor a microframework like Sinatra. It has some conventions, but you can change them without any big problem. Part of the reason is that it’s essentially a library, and also that you pair it with Ecto, your “ORM”. You write your Elixir application “with” Phoenix, not a Phoenix application (as with Rails).

Apart from being fast (Elixir is not fast per se, but templates are super-efficient, for example), it has two unique features that make it stand out even more.

One of those is LiveView, which lets you build interactive applications without writing JavaScript. And the second is LiveDashboard, a beautiful dashboard built on top of LiveView that you can include in your application in 2 minutes. It gives you many tabs of useful information about the currently running system (and you can switch nodes easily too). Some of those are:

  • CPU, memory, IO breakdown
  • Metrics (think telemetry)
  • Request Logger (web version of console logs on steroids)
  • web version of :observer

I wish Phoenix had a maintenance policy like Rails so it could be taken more seriously. On the other hand, I think it doesn’t change as much anymore. Phoenix name and logo are also a nice touch as a reference to Beam’s fault tolerance (your Elixir processes will come back from the ashes).


What’s important to me in a web framework is productivity. I don’t care that I could craft the best performing application in C, or have everything compiler-checked. I care about getting stuff done. I prefer frameworks that are designed for small teams because I want to be productive on my own. Phoenix is not as batteries-included as Rails, although having features like LiveDashboard is probably better than having Action Text baked in. There are file uploads in LiveView, but it’s not a complete framework like Active Storage. So it’s a little behind Rails in productivity, but it’s still a very productive framework.

I am also convinced Phoenix scales better, not only on hardware but also in terms of the codebase. I like the idea of splitting lib/app and lib/app_web from the beginning, and the introduction of contexts. Contexts tell you to split your lib/app in a kind of service-oriented way, where you would have app/accounting.ex or app/accounts.ex as the entry points to the functionality of your app.

Another interesting aspect is that since Phoenix is compiled, browsing your development version of the app is not slow like in Rails. It flies. Errors are also pretty good (and both error reporting and compiler warnings are improving every day):

constraint error when attempting to insert struct:

    * unique_validation_api_request_on_login_id_year_week (unique_constraint)

If you would like to stop this constraint violation from raising an
exception and instead add it as an error to your changeset, please
call `unique_constraint/3` on your changeset with the constraint
`:name` as an option.

The changeset defined the following constraints:

    * unique_address_api_request_on_login_id_year_week (unique_constraint)

But what do I really, really like? The development of Phoenix full-stack applications. No split between an Asset Pipeline and Webpacker (two competing solutions), and everything works without separately running a development Webpack server. You change a React component, switch to a Firefox window, and the change is there! And the only thing you were running is your mix phx.server.

But productivity cannot happen without good libraries. While the Elixir and Phoenix ecosystem has some outstanding options for things like GraphQL (Absinthe) and Stripe (Stripity Stripe), there are not many good options for cloud libraries and other integrations. I feel like Stripe is the only exception here, but it’s not an official SDK.

Sometimes this is problematic as making your own SOAP library is not as much fun if you need to be shipping features involving SOAP at the same time. Sometimes, though, this can lead to building minimal solutions that are easy to maintain. We have practically two little modules for using object storage in Azure. I blogged before about how I implemented Azure pre-signing if you are interested.


The deployment of Phoenix can be as easy as copying the Mix release I already mentioned to the remote server. You can then start it as a systemd service, for instance. While it wasn’t always straightforward to deploy Elixir web applications, it got ridiculously easy recently. Imagine running something like this:

$ mix deps.get --only prod
$ MIX_ENV=prod mix compile
$ npm install --prefix ./assets
$ npm run deploy --prefix ./assets
$ MIX_ENV=prod mix phx.digest
$ MIX_ENV=prod mix release new_phoenix
$ PORT=4000 _build/prod/rel/new_phoenix/bin/new_phoenix start
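As for running the release under systemd, a minimal unit could look like the sketch below. The service name, user, and paths are hypothetical and must match wherever you copied the release:

```ini
# /etc/systemd/system/new_phoenix.service -- hypothetical unit file
[Unit]
Description=new_phoenix Phoenix application
After=network.target

[Service]
User=deploy
Environment=PORT=4000
WorkingDirectory=/srv/new_phoenix
ExecStart=/srv/new_phoenix/bin/new_phoenix start
ExecStop=/srv/new_phoenix/bin/new_phoenix stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now new_phoenix and the release survives reboots like any other service.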

Of course, you can make a lightweight Docker container too, but maybe you don’t even need to. Mix releases are entirely self-contained (even better than a Java JAR)! Here is how to make them with a little bit of context. The only thing to pay attention to is that they are platform-dependent, so you cannot easily cross-compile them right now.

Although people are drawn to Elixir for its distributed nature, its performance makes it a great platform for running a powerful single server too (which is how devs at X-Plane flight simulator run their Elixir backend). Especially since Elixir also supports hot deployments, which is kind of cool. Mix releases do not support this option, though.


The Elixir (and Phoenix) community is amazing. I always got quick and very helpful answers on Elixir Forum and other places. Elixir is niche. But it’s not Crystal or Nim niche. Still, it’s not an exception to get answers directly from José Valim. How he can reply so fast is still beyond me :). Thanks, José!

Podium, Sketch, Bleacher Report, Brex, Heroku, and PepsiCo are famous brands using Elixir. Elixir Companies is a site tracking public Elixir adoption. I am myself on a not-yet-public project, so I am sure there is more Elixir out there!

If you are blogging about Elixir, join us at BeamBloggers.com. There is also ElixirStatus for community news.

Worth it?

And that’s pretty much it. If you are surprised I didn’t get into OTP, it’s because I didn’t get to do much OTP. It’s sure great (you reap the benefits just by using Phoenix), but you can use Elixir without doing a lot of OTP yourself.

The clear pros of Elixir are a small language you’ll learn fast, a modern, beautifully documented standard library, robust pattern matching, and understanding functions without headaches. What I don’t like is the same split for string vs atom keys in maps (without Rails’ HashWithIndifferentAccess), and I have to admit there are times I miss my instance variables.

Learning Elixir and Phoenix is undoubtedly worth it. I think it’s technologically the best option we have today to build ambitious web applications and soft-realtime systems. It still lacks in a few areas, but nothing that cannot be fixed in the future. And if not for Elixir, then for the Beam platform (see Caramel).

I also like that Elixir is not just a language for the web. There is Nerves for IoT, and recently we got Nx with LibTorch backend.

InvoicePrinter 2.1 with Ruby 3 support

Posted by Josef Strzibny on April 05, 2021 12:00 AM

Ruby 3 was released three months ago, so it was time to support it in InvoicePrinter, a pure Ruby library for generating PDF invoices.

InvoicePrinter 2.1 dependencies were upgraded to Prawn 2.4 and Ruby 3 by following the separation of positional and keyword arguments. If you pass a hash to InvoicePrinter::Document or InvoicePrinter::Document::Item, you now need to use double splat in front of it:


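As a sketch of what that means in practice (build_document here is a hypothetical stand-in for InvoicePrinter::Document.new, which takes keyword arguments; it is not the library’s API):

```ruby
# A stand-in method with keyword arguments, mimicking the shape of
# InvoicePrinter::Document.new (hypothetical, for illustration only).
def build_document(number:, provider_name:)
  { number: number, provider_name: provider_name }
end

params = { number: 'NO. 198900000001', provider_name: 'John White' }

# Ruby 3 no longer converts a trailing hash into keyword arguments,
# so the hash must be passed with a double splat:
document = build_document(**params)
```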
This tells Ruby that you are indeed passing the hash as keyword arguments.

Apart from Ruby 3 support, I improved the single-line note. The note field was cut off if it was longer than one line. The issue is now fixed by supporting a multi-line note. Here is how it looks.

Finally, this release removes address fields that got deprecated with a warning in 2.0 release.

Instead of providing addresses in granular fields as:

provider_street: '5th Avenue',
provider_street_number: '1',
provider_postcode: '747 05',
provider_city: 'NYC',

You now have to do it as follows:

provider_address = <<ADDRESS
  Rolnická 1
  747 05  Opava
ADDRESS

invoice = InvoicePrinter::Document.new(
  number: 'NO. 198900000001',
  provider_name: 'John White',
  provider_lines: provider_address,

Since the library doesn’t want to be concerned with formatting the address fields, it’s better to support addresses in a more flexible way by having a multiline field.

I released 2.1.0.rc1 for you to try, and the final 2.1.0 will follow shortly. If you are missing something in InvoicePrinter, it’s a good time to [open](https://github.com/strzibny/invoice_printer) a feature request for 2.2 too.

Unified upstream and downstream testing with tmt and Packit

Posted by Cockpit Project on April 05, 2021 12:00 AM

Automated package update gating can tremendously increase the quality of a Linux distribution. (Gated packages are only accepted into a distribution when tests pass.)

Two and a half years ago, we started to gate the Fedora cockpit package on our browser integration tests. We have continued to increase the number of tests ever since.

I’m especially happy gating is now in Fedora, as I had worked on testing in Ubuntu and Debian many years ago. (Adoption is a bit slower in Fedora, as it does not do reverse dependency gating yet.)

Fedora gating woes

But there’s a problem of scale: The more tests we added to gating, the more likely it became that any one of them would fail. Fedora’s distribution gating tests also failed at the worst possible time: After an upstream release. It felt like every single Bodhi update in the last year had failing tests. I couldn’t remember a single time when tests were green.

Fedora’s test VMs use different settings from Cockpit’s, such as the number of CPUs and amount of RAM, or the list of preinstalled packages. The time it takes to perform each test varies as well. For example: Fedora’s testing VMs (running on EC2) are notably slow during evenings in Europe.

Running Fedora’s tests locally requires know-how and tricks:

  • How is the test environment defined and configured?
  • Where can someone download the gating VM images?
  • How do I start them to get a similar environment as the CI system?

Fedora’s Standard Test Interface was flexible and precise in covering the API, but stopped short of pinning down the test environment. The documentation more or less says “just run ansible-playbook in a VM”, but there is no tool to provide such a VM.

It was time to fix this once and for all.

Fix: Run distribution tests upstream

The concept to fix the tests is simple:

  1. Pin down the environments where these tests run, and provide a tool to create and use them.
  2. Make it trivial to locally run and debug a package’s gating tests.
  3. Run gating tests for every upstream change (i.e. pull request), using the exact same environment, test metadata, and configuration.

I’m happy to say that, after a lot of work from several different teams, all these now exist!

Flexible Metadata Format

FMF (Flexible Metadata Format) is the successor of the Ansible-based Standard Test Interface. FMF is declarative YAML and distribution/project agnostic. The “flexible” in FMF is rich, so that (by design) it does not limit what tests can do or where they run. Despite its complexity, most settings have good defaults, so you don’t need to know about every detail.

We first added FMF to Cockpit’s starter kit. As a reference, the central file is test/browser/main.fmf. This lists the test dependencies, the entry script, and a timeout:

summary:
    Run browser integration tests on the host
require:
  - cockpit-starter-kit
  - npm
  - python3
test: ./browser.sh
duration: 60m

Translating from the STI Ansible tests.yml is straightforward. The STI configuration looked like this:

- hosts: localhost
  roles:
  - role: standard-test-source
    tags:
    - always

  - role: standard-test-basic
    tags:
    - classic
    required_packages:
    - cockpit
    - npm
    - python3
    tests:
    - verify:
        dir: .
        run: ./verify.sh
        save-files: ["logs/*"]

Aside from the above, there’s a little bit of boilerplate needed:

  • .fmf/version (just “1”)
  • At least one top-level plans/*.fmf. This can be the same for every project. Hopefully, it may be the implied default some day.
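Such a plan can be very small. A minimal sketch (the file name is hypothetical; `discover: how: fmf` and `execute: how: tmt` are the standard step settings, but check the tmt documentation for your version):

```yaml
# plans/all.fmf -- hypothetical file name
# Discover tests from the FMF metadata and run them with tmt's executor.
discover:
    how: fmf
execute:
    how: tmt
```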

This test metadata format provides underpinnings for the following new tools.

Test Management Tool

Test Management Tool (tmt) addresses the first two points (pinning the environment and running locally). If a project has FMF metadata for its tests, running tmt is as simple as:

tmt run

The tool then:

  1. downloads a standard Fedora development series VM (34 at the moment)
  2. starts it in libvirt/QEMU
  3. runs your tests inside the VM
  4. produces a live log while the test is running
  5. copies out all the test logs and artifacts
  6. cleans up everything in the previous steps

tmt customization

The run command uses a lot of defaults, but supports customization.

Example 1: Run on a different Fedora release:

tmt run --all provision --how virtual --image fedora-33

Example 2: Run the steps until the report stage (thus skipping finish). This allows you to ssh into the test VM and investigate failures.

tmt run --until report
tmt run -l login

See --help and the documentation for details.

Until recently, this only worked with qemu:///system libvirt. (That is: not in containers or toolbox.)

The latest testcloud and tmt versions have switched to qemu:///session by default. (Thanks to Petr Šplíchal for responding to my nagging so quickly!) Using session enables tmt to run without root privileges, bridges, or services.


Packit is a tool and a service to automatically package upstream releases into Fedora or Copr.

It recently learned a cool new trick: The Packit-as-a-Service GitHub app runs a project’s FMF test plans in pull requests. Packit-as-a-Service is open source, simple to set up, and free to use. For projects that use it, this addresses point 3 above (running gating tests for every upstream change).

Tests run on the testing-farm, which provides reasonable (1 CPU, 2 GiB RAM) AWS EC2 instances. Critically, this is the exact same infrastructure that the Fedora gating tests use. This is by design. It’s easier to maintain one testing farm than two sets of infrastructure. Using the same infrastructure provides the necessary reproducibility for project maintainers.

Like Travis or GitHub workflows, your project only needs to add a packit.yaml file. For example, here’s Cockpit starter-kit’s:

specfile_path: cockpit-starter-kit.spec
actions:
  post-upstream-clone: make cockpit-starter-kit.spec
  # reduce memory consumption of webpack in sandcastle container
  # https://github.com/packit/sandcastle/pull/92
  # https://medium.com/the-node-js-collection/node-js-memory-management-in-container-environments-7eb8409a74e8
  create-archive: make NODE_OPTIONS=--max-old-space-size=500 dist-gzip
  # starter-kit.git has no release tags; your project can drop this once you have a release
  get-current-version: make print-version
jobs:
  - job: tests
    trigger: pull_request
    metadata:
      targets:
        - fedora-all

The YAML above binds together:

  • the knowledge how to produce an upstream release tarball from your branch
  • where the spec file is
  • which Fedora releases to run tests in a PR

Packit will then use this information to:

  1. build the tarball (create-archive)
  2. build an SRPM with the spec file
  3. build the SRPM in a temporary Copr
  4. use tmt to run your tests against these built RPMs

For an upstream project relying on tests, it can’t get much simpler!

An in-practice example with starter-kit

As an example: Look at a recent starter-kit PR. Click on “View Details” to expand the tests. It shows four Packit runs.

It’s great, but not yet perfect. It is still not obvious how to get from such a result link to all artifacts.

Minor quality-of-life improvements that are likely forthcoming:

  • Finding test artifacts (for now, look at the log to find out the path to the /work-allXXXXXX directory and append that to the URL)
  • Seeing live logs while a test is running

Recent Fedora CI changes

As mentioned above, Fedora’s gating tests now use the exact same testing farm as Packit. This recent switch allows the tests to run in the same environment. The farm supports both the new FMF+tmt test metadata and the legacy STI format.

These changes get us close to the goal of sharing tests upstream and downstream.

Missing: embedded test support

While it’s almost complete, one part is missing: there is currently no clean way to run tests contained in the upstream tarball. Right now, the packaging dist-git must have a top-level FMF test plan like this:

discover:
    how: fmf
    repository: https://github.com/cockpit-project/cockpit
    # FIXME: get rid of the hardcoding: https://github.com/psss/tmt/issues/585
    ref: "241"
execute:
    how: tmt

The workaround, seen in the snippet above, uses tests from a specific tag in the upstream project’s git. The git tag must match the release in the spec file, to keep the tests in sync with the tested packages. This is awkward, as it requires accessing a remote git repository (at a specific tag), even though the tests exist in the source tarball.

Changing this requires some tmt design discussion. For now, we hacked our release scripts to bump up the test plan’s ref: when committing a new release to dist-git. If you use this in your project, you need similar “magic” or always update the test plan’s ref: along with your spec file.

Even with this hack, Cockpit’s commit to move from STI to upstream FMF was still a major net gain. Cockpit’s tests run straight from upstream now.

Putting it all together

Cockpit’s starter-kit, the basis for creating your own Cockpit UIs, implements this all now: FMF metadata, setup scripts, packit.yaml, and documentation.

Doing the same for Cockpit itself was more involved, because packit’s create-archive step has limits: it needs to work in a 768 MiB VM and finish within 30 minutes, but for larger projects this is not enough for webpack. Instead, a GitHub workflow builds the tarballs and Packit downloads the pre-built artifacts. (We want to do that anyway, as pre-building is useful for speeding up reviews and local development as well.)

The VM constraints are not an issue for smaller projects like cockpit-podman. The entire webpack build does fit within packit’s limits.

It should also not be an issue for most C/Python/etc. projects where make dist (or meson dist, ./setup.py sdist, etc.) will usually be quick and lean.

Finally, we were able to collect the prize… Thanks to the new testing frameworks, Cockpit release 241 passed Fedora gating tests for the first time in roughly a year! 🎉


There are finally tools for cloud-first, proper, consistent, and free upstream/downstream CI… and all without having to maintain your own infrastructure! This is a major milestone and motivator. There’s now no excuse to ship any more broken stuff! 😀

Many thanks in particular to Petr Šplíchal (testcloud/tmt), Tomas Tomecek (packit), and Miroslav Vadkerti (Testing Farm) for tirelessly fixing stuff, responding to my nagging, and helping me with figuring out how it all hangs together!

New badge: Signal Specialist (Upstream Release Monitoring II) !

Posted by Fedora Badges on April 02, 2021 03:16 PM
Signal Specialist (Upstream Release Monitoring II): You mapped 50 upstream projects to Fedora packages on release-monitoring.org

Friday’s Fedora Facts: 2021-13

Posted by Fedora Community Blog on April 02, 2021 03:00 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)! The Final freeze begins Tuesday.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Remember that many locations are changing to/from summer time in the next few weeks.



<figure class="wp-block-table">
elementary Developer Weekend | virtual | 26-27 June | closes ~20 Apr
Akademy | virtual | 18-25 June | closes 2 May
openSUSE Virtual Conference | virtual | 18-20 June | closes 4 May
DevConf.US | virtual | 2-3 Sep | closes 31 May

Help wanted

Prioritized Bugs

<figure class="wp-block-table">
Bug IDComponentStatus

Upcoming meetings


Fedora Linux 34


Upcoming key schedule milestones:

  • 2021-04-20 — Final release early target
  • 2021-04-27 — Final release target #1


Change tracker bug status. See the ChangeSet page for details of approved changes.

<figure class="wp-block-table">


<figure class="wp-block-table">
Bug IDComponentBug StatusBlocker Status

Fedora Linux 35


<figure class="wp-block-table">
rpmautospec – removing release and changelog fields from spec files | System-Wide | Approved
“Fedora Linux” in /etc/os-release | System-Wide | Approved
Reduce dependencies on python3-setuptools | System-Wide | FESCo #2590
RPM 4.17 | System-Wide | Announced
Smaller Container Base Image (remove sssd-client, util-linux, shadow-utils) | Self-Contained | Announced

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.


Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-13 appeared first on Fedora Community Blog.

Fedora Council statement on Richard Stallman rejoining FSF Board

Posted by Fedora Magazine on April 02, 2021 12:00 PM

The Fedora Project envisions a world where everyone benefits from free and open source software built by inclusive, welcoming, and open-minded communities.

We care about free software, but free software is not just bits and bytes. Fedora is our people, and our vision includes healthy community. A healthy community requires that we be welcoming and inclusive. For those in our community who have experienced harassment of any kind, this past week has been painful and exhausting.

There is no room for harassment, bullying, or other forms of abuse in Fedora. We take our Code of Conduct seriously in order to ensure a welcoming community.

Along with many in the free and open source software world, the Fedora Council was taken aback that the Free Software Foundation (FSF) has allowed Richard Stallman to rejoin their Board of Directors given his history of abuse and harassment. The Fedora Council does not normally involve itself with the governance of other projects. However, this is an exceptional case due to the FSF’s stewardship of the GPL family of licenses, which are critical for the work we do.

In keeping with our values, we will stop providing funding or attendance to any FSF-sponsored events and any events at which Richard Stallman is a featured speaker or exhibitor. This also applies to any organization where he has a leadership role.

Excellent technical contribution is not enough. We expect all members of our community to uphold the Friends value.

– The Fedora Council

Save the date: Fedora Linux 34 Release Party!

Posted by Fedora Community Blog on April 02, 2021 08:00 AM

On the tail of the release of Fedora Linux 34 Beta, I am excited to announce that we will be celebrating the final release of Fedora Linux 34 with a virtual Release Party! Join us April 30th & May 1st for a series of sessions on the new features in F34 as well as some of the latest news and developments in Fedora. Make sure to save the dates and register on Hopin to party with Fedora!

There will be more details coming shortly, but you can expect to enjoy sessions on topics such as Fedora KDE, i3, Fedora Zine, and the new Fedora logo. We will also have a series of our favorite socials, including a pub quiz and a couple game sessions. Lastly, look forward to testing out a new Hallway Track solution that should be a lot of fun and bring some spontaneity to the event!

The post Save the date: Fedora Linux 34 Release Party! appeared first on Fedora Community Blog.

All systems go

Posted by Fedora Infrastructure Status on April 01, 2021 10:26 PM
New status good: Everything seems to be working. for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Fedora websites, Fedora Wiki, Zodbot IRC bot

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on April 01, 2021 07:22 PM
New status scheduled: Update/reboot of all services for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Fedora websites, Fedora Wiki, Zodbot IRC bot

Major service disruption

Posted by Fedora Infrastructure Status on April 01, 2021 05:59 PM
Service 'The Koji Buildsystem' now has status: major: Koji failed on update

Happy Backup Day

Posted by Jens Kuehnel on April 01, 2021 11:38 AM

Finally I can write my happy “backup day” blog post, after several years of problems with my own backup, even if it is a couple of days too late 😉 Over the last weeks I created a new backup setup and rolled it out to all my “production” systems. Yeah!

Here is my new / current setup:


During my search for a backup solution I stumbled upon the tool backupninja and really liked the idea: don’t write your own script that calls the DB backup and the filesystem backup; instead, drop a file into /etc/backup.d/ and backupninja automatically does it for you and sends an email if there is a problem. Even though this is a very Debian-centric project, there is a Fedora/EPEL package for it. The EPEL package is only at version 1.1.0 instead of the current 1.2.1, but a mock rebuild of the Fedora package for RHEL7 fixed that without any major issues. 1.2.1 is important because it adds restic support. There is still some love needed, because there are now at least two trees that are drifting apart ;-(

restic vs borg

The part that took the longest was the decision between restic and borg. I used duplicity before, but the necessity to regularly make a full backup was a big NO for me, even though duplicity is the only backup tool – that I know of – that supports repairing errors within the archive with the help of par2 files. In the end the decision was borg vs. restic. After some consideration I decided to simply start with both and decide later 😉 Better two backups than no backup.

Android Backup = seedvault vs. TitaniumBackup

I have been using Titanium Backup for Android for a very long time. It works well, it can be bought without a Google account, and I have moved between multiple devices by restoring a backup on the new device. But the need for a rooting solution makes it hard to use on devices other than my personal one (for my wife, friends, etc.). I was surprised to learn today that my LineageOS already has a backup solution built in: seedvault, which was mentioned in the release notes of LineageOS 18.1. I was shocked to find it already in the current LineageOS 17.1. My highlight is the direct support for Nextcloud as a backup target. I have not tested the restore yet, but that will be next 😉 So at the moment I also have two backup solutions for my Google-free Android phone.

Password and Bookmark sync and Backup

My last pain point when it comes to backup was backing up my browser bookmarks and passwords, including sync. Of course I could use Chromium’s or Firefox’s built-in sync mechanism. (I use both browsers at the moment, but primarily Firefox, in part because I use its built-in password store.)

An article about Bitwarden and podman was posted in Fedora Magazine years ago, and I have been playing around with it for about a year. Because Chromium will soon be (or already is) forbidden from using Google Chrome’s sync, the idea started growing in my head again.

The nice thing about Bitwarden is that it is open source and can be self-hosted, but it is quite a heavy setup with more than 10 containers running, and it has to be real docker, not podman ;-(. But bitwarden_rs to the rescue: it is a rewrite in Rust that uses a SQLite database and can run without docker. Because it is a rewrite it does not support all options of the full setup, but for a small setup it is ideal. The migration is not done yet, but that will hopefully happen in the next couple of weeks.

Ask Fedora retrospective – 2020

Posted by Fedora Community Blog on April 01, 2021 08:00 AM

In the first quarter of 2019, we officially moved the Ask Fedora user support web site to Discourse. You can read more about the migration on the Ask Fedora Retrospective – 2019 published last year.

How it is going

The forum is still running very well, and in 2020 we saw growth in new topics (almost doubled), in replies, and in users and interactions, as well as an increase in replies marked as solved.

While we can see growth in topics and replies in the English category, discussions in the other-language categories (we have Spanish, Italian, Persian, Simplified Chinese, and Traditional Chinese) are still in line with the 2019 numbers (very low compared to English, just a few dozen). As noted in 2019, even non-native English speakers probably prefer to ask questions in the English category, possibly because it is more active so there are greater chances of getting answers. Or they prefer other places where their native language is the main and sole one used.

Spam and bad behavior

Spam is nearly absent thanks to the fact that authentication is tied to FAS and thanks to Discourse’s antispam system.

As in 2019, in 2020 there were no notable flames or bad behavior that needed heavy intervention by moderators (i.e., nobody was suspended for a Code of Conduct infringement).


Thanks to the impulse of the Fedora Project Leader, we made several adjustments to the platform to make it more effective, friendly, and usable. For example, posting in the main categories is now forbidden; people have to use the subcategories. This leads to less work for moderators. We rearranged the badges to be more Ask-Fedora-specific. Display names are now visible in posts. The default site language is based on the headers provided by the browser. We also made some other tweaks to minimum post length, notifications, and other small improvements.


Topics and replies

As you may know, for each language category there are two subcategories covering the entire lifecycle of a release: “on using Fedora” and “on installing or upgrading Fedora”. We still strongly believe that this is a good categorization: the forum looks cleaner and is easier to navigate if we avoid a high number of categories, e.g. one for each Spin, Edition, or Desktop Environment.

The number of new posts (topics) in the English “on using Fedora” subcategory was 1072 in 2019, with 5048 replies, and 1907 with 9492 replies in 2020. The number of new posts in the English “on installing or upgrading Fedora” subcategory was 227 with 1566 replies in 2019 and 323 with 2059 replies in 2020.
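As a quick sanity check of the “almost doubled” claim, the year-over-year growth can be computed from the numbers above:

```python
# Topic counts quoted in the paragraph above (English subcategories)
using_2019, using_2020 = 1072, 1907
installing_2019, installing_2020 = 227, 323

growth_using = (using_2020 - using_2019) / using_2019 * 100
growth_installing = (installing_2020 - installing_2019) / installing_2019 * 100

print(f"'on using Fedora' topics grew {growth_using:.0f}%")              # 78%
print(f"'on installing or upgrading' topics grew {growth_installing:.0f}%")  # 42%
```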

<figure class="wp-block-image size-large"></figure> <figure class="wp-block-image size-large"></figure>

The total number of topics in all the forum categories was 1602 with 7898 replies in 2019 and 2519 with 12546 replies in 2020.

<figure class="wp-block-image size-large"></figure>

The number of topics marked as solved was 550 in 2019 and 846 in 2020.

The average number of daily replies was 40 in 2019 and 50 in 2020.


Discourse works with “trust levels”. When a user logs in to the forum, they start at Trust Level 0. With every interaction with the forum (new topics, replies, likes given and received, and also time spent reading other people’s posts), they gain points and can reach new Trust Levels.

In 2019, there was a total of 658 users with a Trust Level equal to or greater than 1. At the end of 2020, a total of 1569 users had reached or gone beyond that Trust Level.

Counting also the Trust Level 0 users, who are, as said before, the less active ones (maybe just curious people, or people who joined the forum just to ask a single question), at the end of 2019 there were 2103 users, whereas at the end of 2020 we had a total of 4540 users, that is, 2437 users registered in the past year.

The future

New changes are on their way. The design team is working on a redesign of the web site theme. We are also hoping to connect Discourse’s badge system with Fedora Badges.

Heroes of Ask Fedora

The Ask Fedora community wants to thank these users for their contribution, time, and great work helping other people:

  • vgaetera , with 98 answers marked as solution
  • augenauf, 32
  • twohot, 31
  • lcts, 26
  • vits95, 19
  • ersen, 16
  • alexpl, 13
  • computersavvy, 11
  • grumpey, 11
  • tjdoyle, 10

And like the previous year: thanks to everyone using Ask Fedora… and keep it growing.

The post Ask Fedora retrospective – 2020 appeared first on Fedora Community Blog.

The final release of AlmaLinux OS 8.3 is out

Posted by Fedora fans on April 01, 2021 06:30 AM

AlmaLinux: After the release of the AlmaLinux beta, the distribution’s development team has now announced the final release of AlmaLinux OS 8.3.

AlmaLinux OS is based on the Red Hat Enterprise Linux source code, and its goal is to be a replacement for CentOS. AlmaLinux OS is free and open source forever, and its stable release is now available. Users can use it for any purpose, such as general use, on bare metal, in virtual machines, with cloud providers, and in containers.

It is worth mentioning that you can also use the migration script to convert CentOS to AlmaLinux.

For more information about AlmaLinux OS 8.3, you can read its release announcement:


To download the final release of AlmaLinux OS 8.3, you can visit the link below:



The post The final release of AlmaLinux OS 8.3 is out first appeared on Fedora fans.

More 0’s For Easier Self-Signed SSL-Certificate Fingerprint ID’ing

Posted by Jon Chiappetta on April 01, 2021 01:32 AM

So if you’re using a self-signed SSL cert which is for personal use but public facing (similar to an SSH key upon first connect), you will of course get a scary warning about it! It is recommended to verify the cryptographic hash of that certificate to help ensure that there is no Person-In-The-Middle attack taking place. You can have some fun with self-signed certs, at least, because you can put almost anything in them, so I wrote a little script to generate some leading 0’s in the fingerprint field. This not only slows down an attacker trying to trick me (they need to generate something similar, which takes a little more time), but a basic pattern is also easier to remember (my laptop is a bit slow, so I could only get 5 of them, which is about 20 bits worth of nothing – the more 0s, the more secure! :)
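To put the “20 bits” in perspective: each hex digit of the fingerprint carries 4 bits, so five leading zeros cost about a million signing attempts on average. A quick back-of-the-envelope check (the helper function is just for illustration):

```python
# Each hex digit of a SHA-1 fingerprint encodes 4 bits, so n leading
# zero digits cost about 16**n signing attempts on average.
def expected_attempts(zero_digits: int) -> int:
    return 16 ** zero_digits

print(expected_attempts(5))             # 1048576 attempts for five zeros
print(expected_attempts(5) == 2 ** 20)  # True: five hex zeros = 20 bits
```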

$ openssl x509 -in crt.pem -noout -fingerprint
SHA1 Fingerprint=00:00:0F:D1:86:3F:A0:39:10:67:78:0A:13:DD:3B:55:BC:68:A4:3B



import string, random, subprocess
from OpenSSL import crypto

# openssl genrsa -out key.pem 2048
b = subprocess.check_output('cat key.pem', shell=True)
k = crypto.load_privatekey(crypto.FILETYPE_PEM, b)
r = string.digits + string.ascii_uppercase
l = range(16)
t = (10 * 365 * 24 * 60 * 60)  # ten years of validity
while True:
  c = crypto.X509()
  d = c.get_subject()
  d.C = "ZZ" ; d.L = "ZZ" ; d.O = "ZZ" ; d.ST = "ZZ"
  # randomize the OU field so each candidate cert gets a new fingerprint
  d.CN = "" ; d.OU = ''.join(random.choice(r) for _ in l)
  c.set_issuer(d)
  c.set_pubkey(k)
  c.gmtime_adj_notBefore(0)
  c.gmtime_adj_notAfter(t)
  c.sign(k, 'sha1')
  f = c.digest('sha1').decode()  # digest() returns bytes on Python 3
  if f.startswith('00:00:'):
    print(crypto.dump_certificate(crypto.FILETYPE_PEM, c).decode())
    if f.startswith('00:00:00:'):
      break

Fedora 34 Beta Upgrade

Posted by Alejandro Pérez on March 31, 2021 09:19 PM

 Fedora 34 Beta

Fedora 34 Beta was released, so I needed to test it. My first move was to upgrade my HP ProBook following the dnf system-upgrade steps; it worked without any issues and my machine came up on Fedora 34. To my surprise, since I have not been in the loop of the release, the machine seems faster. It is an old notebook, but it works. I love the new GNOME 40; it seems faster than before and more comfortable.

After that upgrade I repeated the same treatment on my MacBook Pro with the same results: faster boot and a nice, smooth interface from GNOME 40. I have to give it to the GNOME team, excellent work.

The only issue was that my VPN connection did not work; I had to set the proper cipher. So no big deal.

Most of my work consists of Ruby scripts which run on versions 2.6 and 2.7, and Fedora 34 comes with Ruby 3.0, so some of my scripts did not work. One solution was to remove Ruby and use rvm to install the different versions that I needed. But I did not want to do that, so it was time to check Fedora Modularity. A quick glance at the documentation and a couple of dnf module commands later, I was set with version 2.7 and could keep working as usual.

If you have not checked out Modularity, I recommend reading the documentation at https://docs.fedoraproject.org/en-US/modularity/installing-modules/

In my case these were the steps needed:

   sudo dnf remove ruby

   sudo dnf module list 

   sudo dnf module enable ruby:2.7

   sudo dnf module install ruby:default/2.7

From that point on Ruby 2.7 works and all gems installed are related to 2.7, so my work continues as normal.

As always, the Fedora team does a great job with upgrades.

All systems go

Posted by Fedora Infrastructure Status on March 31, 2021 06:58 PM
Service 'Package maintainers git repositories' now has status: good: Everything seems to be working.

NGINX HTTPS Reverse Proxy With Basic Auth

Posted by Jon Chiappetta on March 31, 2021 06:26 PM

Let’s say you wanted to run a local area network controller web service made by a company that you don’t completely trust. What would be your options? If you want properly authenticated and encrypted access to it, you could set up a trustworthy VPN service like OpenVPN and remote into the LAN, or you can set up a reverse HTTPS proxy service that handles the TLS channel plus basic authentication first, before forwarding the traffic to the internal web service. For example, Nginx is a pretty powerful and amazingly simple service to achieve this setup (just make sure to note the SSL certificate fingerprint :):

# /etc/nginx/sites-available/default
# htpasswd -bc ssl.pwd user pass
# openssl req -x509 -newkey rsa:2048 -nodes -keyout ssl.key -days 3650 -out ssl.crt
# chown root:www-data ssl.* ; chmod 640 ssl.*
# openssl x509 -in ssl.crt -noout -fingerprint
server {
	listen 443 ssl;
	ssl_certificate /etc/nginx/sites-available/ssl.crt;
	ssl_certificate_key /etc/nginx/sites-available/ssl.key;
	ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
	ssl_ciphers HIGH:!aNULL:!MD5;
	location / {
		auth_basic "Admin Area";
		auth_basic_user_file /etc/nginx/sites-available/ssl.pwd;
		# forward authenticated traffic to the internal web service
		# (the address below is an example placeholder)
		proxy_pass http://127.0.0.1:8080;
	}
}
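Under the hood, HTTP Basic auth just base64-encodes user:pass into an Authorization header, which is why it must only ever travel inside the TLS channel. A small sketch (the credentials match the htpasswd example above and are placeholders):

```python
import base64

# Placeholder credentials from the htpasswd example above
user, password = "user", "pass"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
print(f"Authorization: Basic {token}")  # Authorization: Basic dXNlcjpwYXNz
```

Base64 is an encoding, not encryption, so anyone who can read the header can recover the password.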

Policy proposal: Update default content license to CC BY-SA 4.0

Posted by Fedora Community Blog on March 31, 2021 04:52 PM
Fedora community elections

Earlier this month, Matthew Miller suggested the Fedora Council update the default content license from the Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) license to the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) license. This license applies to content (not code) submitted to Fedora that does not have an explicit license attached. It does not override the explicit license choices of contributors or upstream projects.

While CC BY-SA 3.0 and CC BY-SA 4.0 share the same philosophical underpinnings, version 4 includes some meaningful improvements that benefit our community:

  • CC BY-SA 4.0 is global in scope, so does not require “porting” to individual jurisdictions the way previous versions did. This is an obvious benefit to a project with users and contributors worldwide.
  • CC BY-SA 4.0 also has official translations available. Translating legal documents requires translating not just the text, but the legal meaning. This makes unofficial translations questionable in legal contexts.
  • The attribution requirements in CC BY-SA 4.0 have been brought in line with the generally-accepted practices used by creators.
  • License violations under CC BY-SA 4.0 are automatically cured if they are fixed within 30 days of discovery.

The Fedora Project Contributor Agreement (FPCA), which all Fedora contributors sign, allows the Council to modify the default license:

The Fedora Council may, by public announcement, subsequently designate an additional or alternative default license for a given category of Contribution (a “Later Default License”). A Later Default License shall be chosen from the appropriate categorical sublist of Acceptable Licenses For Fedora.

FPCA (section 3)

In accordance with the Policy Change Policy, the Fedora Council will begin voting on this on 15 April.

The post Policy proposal: Update default content license to CC BY-SA 4.0 appeared first on Fedora Community Blog.

Major service disruption

Posted by Fedora Infrastructure Status on March 31, 2021 01:09 PM
Service 'Package maintainers git repositories' now has status: major: Git seems unhappy, being investigated

Playing with modular synthesizers and VCV Rack

Posted by Fedora Magazine on March 31, 2021 08:00 AM

You know about using Fedora Linux to write code, write books, play games, and listen to music. You can also do system simulation, work on electronic circuits, and work with embedded systems via Fedora Labs. But you can also make music, with the VCV Rack software. For that, you can use Fedora Jam or work from a standard Fedora Workstation installation with the LinuxMAO Copr repository enabled. This article describes how to use modular synthesizers controlled by Fedora Linux.

Some history

The origin of the modular synthesizer dates back to the 1950s, soon followed in the 60s by the Moog modular synthesizer. Wikipedia has a lot more on the history.

<figure class="wp-block-image size-large">Moog Modular Synth<figcaption>Moog synthesizer circa 1975</figcaption></figure>

But, by the way, what is a modular synthesizer?

These synthesizers are made of hardware “blocks” or modules with specific functions like oscillators, amplifiers, sequencers, and various other functions. The blocks are connected together with wires. You make music with these connected blocks by manipulating knobs. Most of these modular synthesizers came without a keyboard.

<figure class="wp-block-image size-large">A modular patch</figure>

Modular synthesizers were very common in the early days of progressive rock (with Emerson Lake and Palmer) and electronic music (Klaus Schulze, for example). 

After a while, people forgot about modular synthesizers because they were cumbersome, hard to tune, hard to fix, and setting up a patch (all the wires connecting the modules) was a time-consuming task not easy to perform live. Price was also a problem, because systems were mostly sold as a small series of modules, and you needed at least 10 of them to have a decent set-up.

In the last few years, there has been a rebirth of these synthesizers. Doepfer produces some affordable models, and a lot of modules are also available with open-source schematics and code (check Mutable Instruments, for example).

But, a few years ago, came… VCV Rack. VCV Rack stands for Voltage Controlled Virtual Rack: a software-based modular synthesizer led by Andrew Belt. His first commit on GitHub was Monday Nov 14 18:34:40 2016.

Getting started with VCV Rack


To be able to use VCV Rack, you can either go to the VCV Rack web site and install a binary for Linux or, you can activate a Copr repository dedicated to music: the LinuxMAO Copr repository (disclaimer: I am the man behind this Copr repository). As a reminder, Copr is not officially supported by Fedora infrastructure. Use packages at your own risk.

Enable the repository with:

sudo dnf copr enable ycollet/linuxmao

Then install VCV Rack:

sudo dnf install Rack-v1

You can now start VCV Rack from the console or via the Multimedia entry in the start menu:

$ Rack &
<figure class="wp-block-image size-large">VCV Rack after startup</figure>

Add some modules

The first step is to clean up everything and leave just the AUDIO-8 module. You can remove modules in various ways:

  • Click on a module and hit the backspace key
  • Right click on a module and click “delete”

The AUDIO-8 module allows you to connect from and to audio devices. Here are the features for this module.

<figure class="wp-block-image size-large">AUDIO 8 input module</figure>

Now it’s time to produce some noise (for the music, we’ll see that later).

Right click inside VCV Rack (but outside of a module) and a module search window will appear. 

<figure class="wp-block-image size-large">VCV Rack search window</figure>

Enter “VCO-2” in the search bar and click on the image of the module. This module is now on VCV Rack.

To move a module: click and drag the module.

To move a group of modules, hit shift + click + drag a module and all the modules on the right of the dragged modules will move with the selected module.

<figure class="wp-block-image size-large">Some first VCV Rack modules</figure>

Now you need to connect the modules by drawing a wire between the “OUT” connector of VCO-2 module and the “1” “TO DEVICE” of AUDIO-8 module.

Left-click on the “OUT” connector of the VCO-2 module and while keeping the left-click, drag your mouse to the “1” “TO DEVICE” of the AUDIO-8 module. Once on this connector, release your left-click. 

<figure class="wp-block-image size-large">A first VCV Rack connection</figure>

To remove a wire, do a right-click on the connector where the wire is connected.

To draw a wire from an already connected connector, hold Ctrl, left-click, and draw the wire. For example, you can draw a wire from the “OUT” connector of the VCO-2 module to the “2” “TO DEVICE” connector of the AUDIO-8 module.

What are these wires?

Wires allow you to control various parts of a module. The signals carried by these wires are Control Voltages, Gate signals, and Trigger signals.

CV (Control Voltages): These typically control pitch and range between a minimum value of around -1 to -5 volts and a maximum between 1 and 5 volts.
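As a concrete example of pitch CV, VCV Rack follows the common 1 V/octave convention (the “V/OCT” input you will meet below): adding one volt doubles the frequency. A quick sketch of the arithmetic, using middle C (261.63 Hz) as an arbitrary base:

```shell
# Each additional volt on a V/OCT input doubles the pitch.
# 261.63 Hz (middle C) is just an example base frequency.
awk 'BEGIN { for (v = 0; v <= 2; v++) printf "%d V -> %.2f Hz\n", v, 261.63 * 2^v }'
```

So a CV spanning -5 to +5 volts covers a ten-octave pitch range.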

What is the GATE signal you find on some modules? Imagine a keyboard sending out on/off data to an amplifier module: its voltage is at zero when no key is pressed and jumps up to the maximum level (5 V, for example) when a key is pressed; release the key, and the voltage goes back to zero. A GATE signal can be emitted by things other than a keyboard. A clock module, for example, can emit gate signals.

Finally, what is the TRIGGER signal you find on some modules? It’s a short square pulse which starts when you press a key and stops shortly after.

In the modular world, gate and trigger signals are used to trigger drum machines, restart clocks, reset sequencers and so on. 

Connecting everybody

Let’s control an oscillator with a CV signal. But before that, remove your VCO-2 module (click on the module and hit backspace).

Right-click on VCV Rack and search for these modules:

  • VCO-1 (a controllable oscillator)
  • LFO-1 (a low frequency oscillator which will control the frequency of the VCO-1)

Now draw wires:

  • between the “SAW” connector of the LFO-1 module and the “V/OCT” (Voltage per Octave) connector of the VCO-1 module
  • between the “SIN” connector of the VCO-1 module and the “1” “TO DEVICE” of the AUDIO-8 module
<figure class="wp-block-image size-large">VCV Rack module controlling another module via CV</figure>

You can adjust the speed of the frequency sweep by turning the FREQ knob of the LFO-1 module.

You can adjust the base pitch by turning the FREQ knob of the VCO-1 module.

The Fundamental modules for VCV Rack

When you install Rack-v1, the Rack-v1-Fundamental package is automatically installed. Rack-v1 itself only installs the rack system, with input/output modules, but without other basic modules.

In the Fundamental VCV Rack packages, there are various modules available.

<figure class="wp-block-image size-large">VCV Rack fundamental modules</figure>

Some important modules to have in mind:

  • VCO: Voltage Controlled Oscillator
  • LFO: Low Frequency Oscillator
  • VCA: Voltage Controlled Amplifier
  • SEQ: Sequencers (to define a sequence of voltage / notes)
  • SCOPE: an oscilloscope, very useful for debugging your connections
  • ADSR: a module to generate an envelope for a note. ADSR stands for Attack / Decay / Sustain / Release

And there are a lot more functions available. I recommend you watch tutorials related to VCV Rack on YouTube to discover all these functionalities, in particular the Video Channel of Omri Cohen.

What to do next

Are you limited to the Fundamental modules? No, certainly not! VCV Rack provides some closed-source modules (for which you’ll need to pay) and a lot of other modules which are open source. All the open source modules are packaged for Fedora 32 and 33. How many VCV Rack packages are available?

sudo dnf search rack-v1 | grep src | wc -l 

And counting: new packages appear each month. If you want to install everything at once, run:

sudo dnf install `dnf search rack-v1 | grep src | sed -e "s/\(^.*\)\.src.*/\1/"`
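If you are curious what that sed expression extracts, here is the same pipeline run on two made-up lines of dnf search output (the package names are real examples from the list below, but the lines themselves are illustrative, not actual dnf output):

```shell
# Feed sample "dnf search" lines through the same sed extraction:
# everything up to the last ".src" is kept, giving the package name.
printf '%s\n' \
  'rack-v1-BogAudio.src : BogAudio modules for VCV Rack' \
  'rack-v1-Valley.src : Valley modules for VCV Rack' \
| sed -e 's/\(^.*\)\.src.*/\1/'
```

The resulting names are what gets passed to dnf install in the one-liner above.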

Here are some recommended modules to start with.

  • BogAudio (dnf install rack-v1-BogAudio)
  • AudibleInstruments (dnf install rack-v1-AudibleInstruments)
  • Valley (dnf install rack-v1-Valley)
  • Befaco (dnf install rack-v1-Befaco)
  • Bidoo (dnf install rack-v1-Bidoo)
  • VCV-Recorder (dnf install rack-v1-VCV-Recorder)

A more complex case

<figure class="wp-block-image size-large">VCV Rack second example</figure>

From Fundamental, use MIXER, AUDIO-8, MUTERS, SEQ-3, VCO-1, ADSR, VCA.


  • Plateau module from Valley package (it’s an enhanced reverb).
  • BassDrum9 from DrumKit package.
  • HolonicSystems-Gaps from HolonicSystems-Free package.

How does it sound? Check out this video on my YouTube channel.

Managing MIDI

VCV Rack has a bunch of modules dedicated to MIDI management.

<figure class="wp-block-image size-large">VCV Rack managing MIDI</figure>

With these modules and with a tool like the Akai LPD-8:

<figure class="wp-block-image size-large">AKAI LPD 8</figure>

You can easily control knobs in VCV Rack modules from a real-life device.

Before buying a device, check its Linux compatibility. Normally, every “USB Class Compliant” device works out of the box in every Linux distribution.

The MIDI → Knob mapping is done via the “MIDI-MAP” module. Once you have selected the MIDI driver (first line) and MIDI device (second line), click on “unmapped”. Then, touch a knob you want to control on a module (for example the “FREQ” knob of the VCO-1 Fundamental module). Now, turn the knob of the MIDI device and there you are; the mapping is done.

Artistic scopes

Last topic of this introductory article: the scopes.

VCV Rack has several standard (and useful) scopes. The SCOPE module from Fundamental for example.

But it also has some interesting scopes.

<figure class="wp-block-image size-large">VCV Rack third example</figure>

This patch uses three VCO-1 modules from Fundamental and the fullscope module from wiqid-anomalies.

The first connector at the top of the scope corresponds to the X input. The one below it is the Y input, and the third controls the color of the graph.

For the complete documentation of this module, check:

For more information

If you’re looking for help or want to talk to the VCV Rack Community, visit their Discourse forum. You can get patches (a patch is the file saved by VCV Rack) for VCV Rack on Patch Storage.

Check out what vintage synthesizers look like at the Vintage Synthesizer Museum or Google’s online exhibition. The documentary “I Dream of Wires” provides a look at the history of modular synthesizers. Finally, the book Developing Virtual Synthesizers with VCV Rack provides more depth.

The Lounge web IRC client in Fedora

Posted by Lukas "lzap" Zapletal on March 31, 2021 12:00 AM

The Lounge web IRC client in Fedora

My graphics card died and, thanks to COVID and Bitcoin, it will be a long wait until it’s replaced. I am on an M1 Mac at the moment and it looks like there are not many good IRC clients on macOS.

Let’s run a simple web-based IRC client which can also work as a bouncer (no need for ZNC). I randomly selected one called The Lounge; it looks nice and works well for me. It is written in NodeJS and, since there is no package in Fedora, I decided to build it via yarn. I think it needs only one native dependency, sqlite3, so do not expect any problems on that front:

# dnf install nodejs yarn sqlite-devel
# dnf groupinstall "Development Tools"
# mkdir ~/.thelounge
# cd ~/.thelounge
# yarn add thelounge

If you prefer installing it into /usr/local then run yarn global add thelounge.

Create a user service; I will be running it as a regular user:

# cat /home/lzap/.config/systemd/user/thelounge.service
[Unit]
Description=The Lounge IRC client

[Service]
ExecStart=/home/lzap/.thelounge/node_modules/.bin/thelounge start

[Install]
WantedBy=default.target

Start it to create a default configuration file:

# systemctl --user daemon-reload
# systemctl --user enable --now thelounge

Optionally, stop the service for now and review the configuration. There are a couple of things I recommend tuning. By default the service listens on HTTP (port 9090), no HTTPS is configured, it stores all logs in both sqlite3 and text files, and it is configured as a “private” instance, meaning you need to log in with a username and password:

# vim ~/.thelounge/config.js

Create a new user:

# ~/.thelounge/node_modules/.bin/thelounge add lzap

Visit http://localhost:9090 (or https://localhost:9090 if you’ve configured SSL). There you can create one or more IRC connections and join channels; everything will be written into a separate ~/.thelounge/users/user.js configuration file, which is nice. If you disabled sqlite3 logging, everything is stored in text files, which I appreciate a lot.

If you want a simple letsencrypt tutorial for Fedora, read my previous blog post:

# grep "https: {" -A5  ~/.thelounge/config.js
  https: {
    enable: true,
    key: "/etc/pki/tls/private/home.zapletalovi.com.key",
    certificate: "/var/lib/acme/certs/home.zapletalovi.com.crt",
    ca: "",

No intermediate CAs are needed for letsencrypt, so you can leave the field blank. Have fun.

Letsencrypt a Fedora server

Posted by Lukas "lzap" Zapletal on March 31, 2021 12:00 AM

Letsencrypt a Fedora server

I was looking for a simple letsencrypt tutorial for my home server running Fedora, but it looks like the official (and quite capable) certbot is not available in the Fedora repos. So I decided to take the simpler route of using the acme-tiny shell script, which is packaged and does the same job, at least if you are running Apache httpd.

First off, install Apache httpd, SSL support, and the acme script itself:

# dnf install httpd mod_ssl acme-tiny

Let’s assume that the Apache server is already serving some files and is available on the desired domain via HTTP (not HTTPS yet):

# systemctl enable --now httpd
# curl -s http://home.zapletalovi.com | grep -o "Test Page"
Test Page

We are almost there, trust me. Generate a new certificate request. The openssl tool will ask several questions (name, organization, and so on). Make sure that the Common Name (CN) is correct.

# cd /etc/pki/tls
# ln -s /var/lib/acme/csr .
# openssl req -new -nodes -keyout private/home.zapletalovi.com.key -out csr/home.zapletalovi.com.csr
# chmod 0400 private/home.zapletalovi.com.key
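If you prefer to skip the interactive questions, the same request can be generated non-interactively with -subj. The sketch below uses a throwaway directory and an example domain instead of the real paths above, so nothing in /etc/pki is touched:

```shell
# Non-interactive variant of the CSR generation, in a scratch directory;
# home.example.com stands in for your real domain.
tmp=$(mktemp -d)
openssl req -new -nodes -newkey rsa:2048 \
  -keyout "$tmp/home.example.com.key" \
  -out "$tmp/home.example.com.csr" \
  -subj "/CN=home.example.com" 2>/dev/null
chmod 0400 "$tmp/home.example.com.key"
# Double-check the Common Name before submitting the request:
openssl req -in "$tmp/home.example.com.csr" -noout -subject
```

The last command prints the subject line, so you can verify the CN before handing the CSR to acme-tiny.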

The next step is the actual communication with the authority: putting the challenge hash into the /var/www/challenges directory, which is exported by Apache httpd, and downloading the signed certificate:

# systemctl start acme-tiny

Check the system journal for any errors. If you encounter one, start the script manually, but make sure to use the acme user account, not root:

# su acme -s /bin/bash
# /usr/libexec/acme-tiny/sign

And that’s really all! Your certificate should now be signed by letsencrypt. Configure the desired software to use the new certificate and key from the following paths:

# find /var/lib/acme /etc/pki/tls/private

For example, I want to configure the Apache httpd itself:

# grep zapletalovi /etc/httpd/conf.d/ssl.conf
SSLCertificateFile /var/lib/acme/certs/home.zapletalovi.com.crt
SSLCertificateKeyFile /etc/pki/tls/private/home.zapletalovi.com.key

If you are like me and run with SELinux enforcing, make sure that the newly generated certificates have the proper label:

# semanage fcontext -a -f a -t cert_t '/var/lib/acme/certs(/.*)?'
# restorecon -rv /var/lib/acme/certs

The final and most important step: enable the systemd timer which will automatically renew the certificate for you:

# systemctl enable --now acme-tiny.timer

That was easy.

Enable serial console for libvirt

Posted by Lukas "lzap" Zapletal on March 31, 2021 12:00 AM

Enable serial console for libvirt

A QEMU/KVM libvirt virtual machine can be accessed via a serial console. When a new VM is created, a serial console device is created with it. However, to fully utilize this, several steps are needed on the guest machine.

The first option is to start a getty service on the serial console to get a login prompt when the system finishes booting. This is as easy as:

# systemctl enable --now serial-getty@ttyS0.service

To access the serial console via the libvirt command line, run:

# virsh console virtual_machine_name

This approach is simple enough, but when something goes wrong and the VM does not boot, it is not possible to access the VM during early boot or even in the bootloader. In that case, perform this additional configuration:

# grep console /etc/default/grub
GRUB_TERMINAL_INPUT="console serial"
GRUB_TERMINAL_OUTPUT="console serial"
GRUB_CMDLINE_LINUX="... console=ttyS0"

Then write the new grub configuration. For EFI systems, do the following:

# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

For BIOS systems do:

# grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot and connect early to access grub or to see early boot messages. These commands work on Fedora, CentOS, Red Hat Enterprise Linux, and clones.

Red Hat Certified Specialist in Services Management and Automation

Posted by Fabio Alessandro Locati on March 31, 2021 12:00 AM
Late last year, I read that a new Ansible-related exam was available: the Red Hat Certified Specialist in Services Management and Automation exam (EX358). I took and passed this exam at the end of January. It was the first time I took a brand-new Red Hat exam, without the possibility of finding opinions about it online. Some people have reported for other exams that, when new exams are launched, the scoring has issues.

Cockpit 241

Posted by Cockpit Project on March 31, 2021 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit version 241.

Kdump: Beautification and alignment fixes

This release improves the Kdump page, used for easier kernel debugging. The reworked page now matches the style of the rest of Cockpit and fixes alignment issues.

Cloud image account initialization

Start of TLS certificate improvements

Cockpit is starting the process of transitioning away from its (somewhat unique) “merged” form of certificate files, towards the more usual separate files for the certificate and the private key. This release changes the IPA certificate setup to use the split format, as well as setting the correct permissions at the time the certificate is created.

Watch for more progress on this front in coming releases.

Try it out

Cockpit 241 is available now:

New badge: Magazine Editor !

Posted by Fedora Badges on March 30, 2021 03:15 PM
Magazine Editor: It's our job to make you look good!

It’s time to fix CVE

Posted by Josh Bressers on March 30, 2021 02:13 PM

The late, great, John Lewis is well known for a quote about getting into trouble.

Never, ever be afraid to make some noise and get in good trouble, necessary trouble.

It’s time to start some good trouble.

Anyone who knows me, reads this blog, or follows me on Twitter, is well aware I have been a proponent of CVE Identifiers for a very long time. I once assigned CVE IDs to most open source security vulnerabilities. I’ve helped more than one company and project adopt CVE IDs for their advisories. I encourage anyone who will listen to adopt CVE IDs. I’ve even talked about it on the podcast many times.

I also think it’s become clear that the generic terms “CVE” and “vulnerability” now have the same meaning. This is a convenient collision because the world needs a universal identifier for security issues. We don’t have to invent a new one. But it’s also important we don’t let our current universal identifier continue to fall behind.

For the last few years I’ve been bothered by the CVE project as it stands under MITRE, but it took time to figure out why. CVE IDs under MITRE have stalled out, at a time when we are seeing unprecedented growth in the cybersecurity space. If you aren’t growing but the world around you is, you are actually shrinking. The reality is CVE IDs should be more important than ever, but they’re just not. The number of CVE IDs isn’t growing; it’s been flat for the last few years. Security scanners and related vendors such as GitHub, Snyk, Whitesource, and Anchore are exploding in popularity and, instead of focusing on CVE IDs, they’re all creating their own identifiers because getting CVE IDs often isn’t worth the trouble. As a consumer of this information, it’s unpleasant dealing with all these IDs. If nothing is done, it’s likely CVE IDs won’t matter at all in a few years because they will be an inconsequential niche identifier. It’s again time for the Distributed Weakness Filing project to step in and help keep CVE IDs relevant.

The problem

I want to start with a project I’ve been working on for a long time. I called it cve-analysis, it’s public on GitHub, you can give it a try. Basically we take the NVD CVE data (which also includes past Distributed Weakness Filing data), put it in Elasticsearch, then look at it. For example here is a graph of the published CVE Identifiers with associated data by year:

Anyone with a basic understanding of math will immediately see this is not a graph showing growth. It’s actually showing a steep decline. While the last few years look like only a minor decline, the reality is the world around CVE IDs is growing exponentially; while CVE IDs shrink slowly on the graph, in the greater world of expanding cybersecurity they are shrinking very quickly. That graph should look more like this:

<figure class="wp-block-image size-large"></figure>

I’ve left out 2021 for the purpose of the fit line. To be honest, I think the fit line should be more aggressive but I kept it conservative to avoid the fit line becoming a point of contention. If CVE IDs were growing properly, we should be seeing more than 23,000 CVE IDs in 2021. Unless something changes we’ll probably see slightly less than 16,000.

I want to add a note here about this data. I am showing the CVE IDs based on the year in the ID. So CVE-2020-0001 will be under the 2020 graph, no matter when it actually gets assigned. It is common for a 2020 ID to be assigned in 2021 for example. There is not a public source of reliable data on when an ID was assigned. The data I could find did not look drastically different, so I picked the simple option.
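Bucketing IDs by the year embedded in the ID itself is easy to reproduce. A toy version on a few sample IDs (one of which, CVE-2021-1000000, appears later in this post; the 2020 IDs are just illustrative, not a real dataset):

```shell
# Extract the year field from each CVE ID and count IDs per year.
printf '%s\n' CVE-2020-0001 CVE-2020-1234 CVE-2021-1000000 \
| sed -e 's/^CVE-\([0-9]*\)-.*/\1/' \
| sort | uniq -c
```

The cve-analysis project does the same bucketing, just at the scale of the full NVD dataset in Elasticsearch.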

I could go on pointing out problems, but I think the number of IDs is an easy problem to understand and helps show other latent problems. The number of IDs should be growing, but it’s not. CVE IDs, as currently managed under MITRE, are shrinking in both number and relevance. We need CVE IDs more than ever, but the CVE Project as managed by MITRE has changed very little in the last 22 years. The MITRE CVE world of 1999 isn’t drastically different from the MITRE CVE world of 2021. But the security world of 2021 looks nothing like 1999 did.

How did we get here?

I don’t think there will ever be one simple reason the MITRE CVE project ended up where it did. I want to bring us back to somewhere around 2014. Let’s start with this tweet

<figure class="wp-block-image size-large"></figure>

I don’t think anyone would fault the CVE project for this. Every project has to constrain itself, infinite scope is impossible. If we look at the CVE ID graph around 2014 we can see it was in a bit of a slump, the data shows us inconsistent growth from year to year.

One big reason for this struggling growth is what I would describe as “artisanal security” in the legacy CVE data. All of the IDs are hand crafted by humans, and humans don’t scale. The data is very prose-heavy instead of being machine readable. If you’ve ever tried to get an ID assigned to something, it can be a very overwhelming task. Take a look at this GitHub issue if you want an example. From my experience this sort of back and forth is not an exception, it’s the norm, especially for beginners. Most people give up. This sort of behavior might have been considered OK in 1999. It’s not 1999 anymore.

This sets us up for 2016, when it looked like things were going to get better. Start with this article by Steve Ragan; it’s still a great read today, you won’t regret it. Steve summarizes some of the issues the project was seeing. They were missing a lot of vulnerabilities. There was quite a dust-up around CVE IDs in 2016. It became suddenly obvious the project wasn’t covering nearly as much as it should be. I mean, there were only 22 CVE Numbering Authorities (CNAs) in 2016 (remember, artisanal security). A CNA is an organization that has been given permission to assign IDs to a very specific set of applications. CNAs are expected to stay in their lane. From 1999 to 2016, in 17 years, they only picked up 22 organizations to assign CVE IDs. That’s slightly more than one per year. At the time I’m writing there are 159 CNAs. It’s still an outdated model with an unacceptably low number.

If we again pull up our graph

We can see a noticeable uptick in 2017. For most of us paying attention this was what we were waiting for, the CVE project was finally going to start growing!

And then it didn’t.

There are a lot of reasons for this stagnation, none of which are easy to explain. Rather than spending a lot of time trying to explain what’s happening, I will refer to this 2020 article by Jessica Haworth. The point in that article is that the CNA system lets a CNA decide what should get an ID. This is sort of how security worked in 1999 when security researchers were often treated as adversaries. This is no longer the case, yet researchers have no easy way to request and use their own CVE IDs. If you report a vulnerability to an organization, and they decide it doesn’t get a CVE ID, it’s very difficult to get one yourself. So many researchers just don’t bother.

Right about here I’m sure someone will point out that a CVE request form exists for just this sort of thing! Go look at it https://cveform.mitre.org, I’ll let you make up your own mind about it.

The Solution

Fixing any problem is difficult. This is where a friend of mine enters our story. What is wrong with CVE IDs, and how to fix them, is a topic Kurt Seifried has spent a great deal of time thinking about and working on. One thing that happened in 2016 to try to get CVE back on the right path was the DWF project. The idea was to make CVE data work more like open source. We can call that DWF version 1. DWF version 1 didn’t work out. There are many reasons for that, but the single biggest is that DWF tried to follow the legacy CVE rules, which ended up strangling the project to death. Open source doesn’t work when buried under artificial constraints. Open source communities need to be able to move around and breathe so they can thrive and grow.

After a few years, a lot of thought, and some tool development, this leads us to DWF version 2. The problem with version 1 is that it wasn’t really an open source community. Version 1 of DWF was really just a GitHub repository of artisanal CVE data in which Kurt single-handedly tried to meet the constraints of legacy CVE. He should be commended for sticking with it as long as he did, but humans don’t scale. DWF version 2 sheds the yoke of legacy CVE and creates an open community anyone can participate in, one that is on the path to 100% automated CVE IDs. There is no reason you shouldn’t be able to get a CVE ID in less than ten seconds.

<figure class="alignright">Image</figure>

The very first argument you will hear is going to be one of quality. How there’s no way the community can create IDs that will match the hand crafted quality of legacy CVE IDs! Firstly, that’s not true. Remember the cve-analysis project? We can look into all of these claims. Data beats conjecture, we have data. Here’s one example of what I’ve found. There are a lot of examples like this, I’m not going to nitpick them all to death. Let’s just say, it’s not all master craftsman quality descriptions. There are a lot of demons hiding in this data, it’s not as impressive as some would like you to believe.

And the second, far more important point, is this argument was used against Wikipedia. Remember encyclopedias? Yeah, they spent a lot of time trying to discredit the power of community. Community already won. Claiming community powered data is inferior is such an outdated way of thinking it doesn’t even need to be deconstructed in 2021.

So what is DWF?

The TL;DR is DWF is a community driven project to assign DWF CVE IDs to security issues. Part of the project will be defining concepts like what is a “security issue”? This is 2021, not 1999 anymore. The world of security vulnerabilities has changed drastically. We need to rethink everything about what a vulnerability even is. The most important thing is we do it as a community and in public. Communities beat committees every day of the week!

This is not a project condoned by MITRE in any way. They are of the opinion they have a monopoly on identifiers that follow the CVE format. They called us pirates. And even tried to submit pull requests to change how the DWF project works. This is the “good trouble” part of all this.

There is a website at https://iwantacve.org that lets you request an ID. You enter a few details into a web form, and you get a candidate, or CAN ID. A human then double checks things, approves it, then the bot flips it to a DWF CVE ID assuming it looks good. Things generally look good because the form makes it easy to do the right thing. And this is just version one of the form! It will keep getting better. If you submit IDs a few times you will get added to the allowlist and just get a DWF CVE ID right away skipping the CAN step. Long term there won’t be any humans involved because humans are slow, need to sleep, and get burnt out.

Here is the very first ID that was assigned CVE-2021-1000000. We were still debugging and testing things when the site was found and used by a researcher, so our timeline sort of got thrown out the window, but in the most awesome way possible! It’s very open source.

The project can be broken down into three main pieces. The workflow, the tooling, and the data.

The workflow is where conversations are meant to be held. Policies should be created, questions asked, new ideas brought up. It’s just a GitHub repo right now. The community will help decide how it grows and evolves. Everything is meant to happen in public. This is the best place to start out if you want to help. Feel free to just jump in and create issues to ask questions or make suggestions.

The tooling is where the two applications live that currently drive everything. Right now this is a bot written in Python and a web form written in Node.js. The form is what drives the https://iwantacve.org website. The bot is how data gets from the form into the GitHub data repo. Neither is spectacular code. It’s not meant to be; it’s an open source project and will get better with time. It’s good enough.

The data is where the DWF JSON is held. There’s not a lot to say about this one; I think it’s mostly self-explanatory. The data format will need to be better defined, but that conversation belongs in the workflow.

The purpose of DWF is to build a community to define the future of security vulnerabilities. There is much work to be done. We would love it if you would lend a hand!

Projects shouldn’t write their own tools

Posted by Ben Cotton on March 30, 2021 12:39 PM

Over the weekend, the PHP project learned that its git server had been compromised. Attackers inserted malicious code into the repo. This is very bad. As a result, the project moved development to GitHub.

It’s easy to say that open source projects should run their own infrastructure. It’s harder to do that successfully. The challenges compound when you add in writing the infrastructure applications.

I understand the appeal. It’s zero-price (to write; you still need the hardware to run it). Bespoke software meets your needs exactly. And it can be a fun diversion from the main thing you’re working on: who doesn’t like going to chase a shiny for a little bit?

Of course, there’s always the matter of “the thing you wanted didn’t exist when you started the project.” PHP’s first release predates the launch of GitHub by 13 years. It’s 10 years older than git, even.

Naturally, this means that at some point PHP moved from some other version control system to Git. That also means they could have moved from their homegrown platform to GitHub. I understand why they’d want to avoid the pain of making that switch, but sometimes it’s worthwhile.

Writing secure and reliable infrastructure is hard. For most projects, the effort and risk of writing their own tooling isn’t worth the benefit. If the core mission of your project isn’t the production of infrastructure applications, don’t write it.

Sidebar: Self-hosting

The question of whether or not to write your infrastructure applications is different from the question of whether or not to self-host. While the former has a pretty easy answer of “no”, the latter is mixed. Self-hosting still costs time and resources, but it allows for customization and integration that might be difficult with software-as-a-service. It also avoids being at the whims of a third party who may or may not share your project’s values. But in general, projects should do the minimum that they can reasonably justify. Sometimes that means running your own instances of an infrastructure application. Very rarely does it mean writing a bespoke infrastructure application.

The post Projects shouldn’t write their own tools appeared first on Blog Fiasco.

Joomla on Podman

Posted by Daniel Lara on March 30, 2021 11:43 AM

Well, let's run Joomla via podman.

First, let's create our pod:

$ podman pod create --name joomlapod --publish 8080:80

Now let's run MariaDB:

$ podman run -dit --pod joomlapod \
-e MYSQL_ROOT_PASSWORD=joomlarootpassword \
-e MYSQL_DATABASE=joomla \
-e MYSQL_USER=joomlauser \
-e MYSQL_PASSWORD=joomlapassword \
--name mariadb docker.io/library/mariadb

Now let's run Joomla:

$ podman run -dit --pod joomlapod \
-e JOOMLA_DB_USER=joomlauser \
-e JOOMLA_DB_PASSWORD=joomlapassword \
-e JOOMLA_DB_NAME=joomla \
--name joomla docker.io/library/joomla

Done! Just browse to http://<your ip>:8080 and finish the configuration.

All systems go

Posted by Fedora Infrastructure Status on March 30, 2021 11:07 AM
Service 'Package maintainers git repositories' now has status: good: Everything seems to be working.

Student applications for Google Summer of Code 2021 are now open!

Posted by Felipe Borges on March 30, 2021 09:52 AM

It’s that time of the year when we see an influx of students interested in Google Summer of Code.

Some students may need pointers on where to get started. I would like to ask GNOME contributors to be patient with the students’ questions and help them find where to get started.

Point them at https://wiki.gnome.org/Outreach/SummerOfCode/Students for overall information regarding the program and our project proposals. Also, help them find mentors through our communication channels.

Many of us have been Outreachy/GSoC interns and the positive experiences we had with our community were certainly an important factor making us long-term contributors.

If you have any doubts, you can ask them in the #soc channel or contact GNOME GSoC administrators.

Happy hacking!

Major service disruption

Posted by Fedora Infrastructure Status on March 30, 2021 09:50 AM
Service 'Package maintainers git repositories' now has status: major: Uploading to the lookaside is currently not working and being investigated

Fedora Council March 2021 meeting

Posted by Fedora Community Blog on March 30, 2021 08:00 AM
Fedora community elections

In a normal year, the Fedora Council would have held a one-day meeting in Brno the day after DevConf.CZ. Since this isn’t a normal year, we held a half day virtual face-to-face earlier this month. Unlike the longer November meeting, this meeting focused on catching up on a few things instead of larger strategy planning. As usual, the minutes have been fed to Zodbot.

Using Fedora Linux to refer to the OS

As I wrote at the beginning of the month, I’d like our official communications to be more clear. Fedora is a community and when we use the word “Fedora” by itself, it should refer to the community. “Fedora Linux” is the operating system we produce. As part of the change proposal to reflect that in the os-release file, the community asked for the Fedora Council to vote on it. I presented this to the Fedora Council and we approved it.

Minimization Objective

The Council approved the Minimization Objective (https://docs.fedoraproject.org/en-US/minimization/) almost two years ago. It got off to a great start. In the end, it didn’t quite do everything Adam had planned, but it has still laid the foundation for future work. This Objective developed the Content Resolver, which was a key part of the new Enterprise Linux Next (ELN) build. In the future, we hope someone will pick up on this work to provide additional features, like filing bugs when dependency growth gets out of hand. I’ve asked Adam Šamalik to write a Community Blog post summarizing the objective. I speak for the whole Council when I say “thank you, Adam, for all of your contributions.”

Community Outreach Revamp Objective

The Council also heard from Mariana Balla and Sumantro Mukherjee, the co-leads of the Community Outreach Revamp Objective. They say they’ve accomplished about 20% of their goals so far, including cleaning up the membership of Fedora accounts groups. They’re working to develop role handbooks to help get people started. Sumantro and Mariana meet weekly with Marie Nordin, the Fedora Community Action and Impact Coordinator (FCAIC).

Several Council members are current or former Ambassadors, so we shared our views on the Objective. This includes making sure people feel like their work is recognized. We’ve also found that people are doing work and just not telling us, so we’re working on ways to make it easier for people to share their accomplishments.

What’s next?

We spent a few minutes at the end thinking about ideas for future Objectives. If you have an idea, please discuss it with the Council. Right now we have one active Objective, and we’d like to have between two and four. And remember that Objectives do not have to be engineering-driven!

The Council will continue to hold regular meetings on IRC every two weeks and monthly public video meetings. Given the pandemic situation, we will hold our next face-to-face virtually sometime in the second quarter.

The post Fedora Council March 2021 meeting appeared first on Fedora Community Blog.

Running do_ scripts from yocto

Posted by Adam Young on March 29, 2021 07:51 PM

I wanted to see how my work had diverged from the standard Raspberry Pi build. Specifically, the image creation stage is failing in my work. I can run the script in the original (upstream) version by doing the following:

  • run tmux so I can get a devshell.
  • run the recipe with the devshell task.
  • run the generated script.

For my work, that looks like this:

bitbake core-image-minimal  -c devshell

The general pattern seems to be that bitbake creates a directory under work named after the triple: distro-vendor-os (I think; that is from memory, so I am sure I have something wrong). For example, if I try to build the CentOS image that we are working on, using the Linux Yocto kernel on AARCH64 (ARM), I get a triple of


and my current work is generating


Under that is the top-level recipe name. Other top-level packages are generated there as well, and I don’t know the reason. On the Pi build, I get:

base-files init-ifupdown packagegroup-core-boot shadow-securetty
core-image-minimal keymaps rpi-bootfiles sysvinit-inittab
depmodwrapper-cross linux-raspberrypi rpi-config

But I do know that all the logs and scripts I need were put into the directory
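Putting the pieces together, a minimal sketch of the layout described above (the triple, recipe name, and version here are assumptions for illustration; your build will differ, and image recipes may land under a machine-specific directory instead of the plain triple):

```shell
# Sketch of where bitbake puts per-recipe work files. After running
#   bitbake core-image-minimal -c devshell
# the generated run.do_* task scripts and their logs end up under
# tmp/work/<triple>/<recipe>/<version>/temp.
BUILDDIR=tmp
TRIPLE="aarch64-poky-linux"       # distro/vendor/os triple (assumed)
RECIPE="core-image-minimal"
VERSION="1.0-r0"                  # assumed version-revision

WORKDIR="$BUILDDIR/work/$TRIPLE/$RECIPE/$VERSION"
echo "Task scripts and logs under: $WORKDIR/temp"
# From a devshell you could then re-run a task script directly, e.g.:
#   sh "$WORKDIR/temp/run.do_image" 2>&1 | tee /tmp/do_image.log
```

Diffing the `temp` directories of the upstream and modified builds is a quick way to see where the generated scripts diverge.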

Using KDC Proxy to authenticate users

Posted by Luc de Louw on March 29, 2021 02:52 PM

How do you authenticate users with Kerberos when port 88 is not available in a DMZ? Use an HTTPS server as a proxy. IPA comes with an integrated KDC Proxy and it’s simple to make use of it. A typical use case is a cross-domain trust with AD, where the Linux clients are not allowed to talk directly to AD because of firewall and/or security policy restrictions. Another use case is where clients in a DMZ are not allowed to directly communicate ....Read More
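On the client side, the idea boils down to pointing the realm's KDC entries at the HTTPS proxy endpoint instead of port 88. A minimal krb5.conf sketch, assuming an IPA server at the hypothetical hostname `ipa.example.com` exposing the standard `/KdcProxy` path (MIT krb5 supports `https://` KDC URLs via the MS-KKDCP protocol):

```ini
[realms]
 EXAMPLE.COM = {
  kdc = https://ipa.example.com/KdcProxy
  kpasswd_server = https://ipa.example.com/KdcProxy
 }
```

All Kerberos traffic is then tunneled over TCP 443, which is usually the only port a DMZ firewall permits outbound.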

The post Using KDC Proxy to authenticate users appeared first on Luc de Louw's Blog.