Fedora People

(Practically!) Half Way Through Internship

Posted by Madeline Peck on July 16, 2020 12:00 PM

Can you believe that tomorrow, the 17th, marks the official halfway point of my summer internship?! I can't :O

Let's see, what have I been up to?

Over the weekend I finished creating a really low-key zine for my 11-year-old neighbor.


It ended up being 20 pages, I believe. I went over each feature of the face, and it turned out pretty well for around 13 hours of work spaced out over two-ish weeks. My parents have a black-and-white printer, so I made sure all the art was in grayscale so I could give her a physical copy. I think if I actually made proper zines of these I'd want to add and change a few things so I wouldn't feel guilty for charging a few bucks, but this was a great test run :)


On Monday and Tuesday I finished wrapping up all the changes that our technical reviewers made, and then, after figuring out the easiest way to make a PDF, sent it out with a little doodle. Maybe this would be helpful for other people to include in time-sensitive emails, haha.

After working on simplifying the Flock logos and remembering how hard the bezier tool in vector programs is, Mo recommended some really great tutorials and practice exercises, which I've been working through.


Release 5.4.1

Posted by Bodhi on July 16, 2020 10:25 AM

GSOC Progress Report for Linux System Roles

Posted by Fedora Community Blog on July 16, 2020 08:00 AM

Student: Elvira García Ruiz. Fedora Account: egruiz. IRC: elvira (#fedora-summer-coding, #systemroles). Summary: June was my first month as a GSOC student, and I must say it has been a tough but fun ride! I've been working on improving the Linux System Roles Network Role. My main focus for this summer is being able to improve […]

The post GSOC Progress Report for Linux System Roles appeared first on Fedora Community Blog.

Debugging React Native

Posted by farhaan on July 16, 2020 04:11 AM

Recently I have been dabbling in mobile application development. I wanted to work in a cross-platform domain, so mostly I am being lazy: I wanted to write the application once and have it work on both iOS and Android. I had to choose between Flutter, which is a comparatively new framework, and React Native, which has been around for a while. I ended up choosing React Native, and thanks to GeekyAnts for NativeBase, development became really easy.

I am approaching the development of my application with a divide-and-conquer strategy: I have divided the UI of the application into different components, and I am developing one component at a time. I am using Storybook for this. It is a beautiful utility which you should check out; it lets you visualise each small component and see how it will render.

While developing this application I was constantly asking myself, "Did this function get called?" or "What is the value of this variable here?". So, going back to my older self, I would have put in a print statement, or, in JavaScript, put a console.log on it. (Read it like JLo's "put a ring on it".)

<iframe src="https://giphy.com/embed/75yXYus44FMw8" title="put a ring on it"></iframe>
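That style of ad-hoc tracing looks something like this (the function and values here are made up purely for illustration):

```javascript
// Ad-hoc print debugging: sprinkle console.log calls to answer
// "did this function get called?" and "what is this value here?"
function loadProfile(userId) {
  console.log('loadProfile called with', userId); // did this get called?
  const profile = { id: userId, theme: 'dark' };
  console.log('profile is', profile);             // what is the value here?
  return profile;
}

loadProfile(7);
```

It works, but every question means editing code, reloading, and reading console noise — which is exactly the pain a debugger removes.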

Having done that for some time, I asked myself: is there a better way to do this? And the answer is "YES!", there is always a better way; we just need to find it. Now enters the hero of our story, the debugger. It supports various operations, like setting breakpoints (conditional and non-conditional) and analysing our code.

Let me quickly walk you through this. VS Code has a React Native plugin that has to be configured for use in our project. Once that is done, we are almost ready to use it. I faced a few issues while getting it to work initially, so I thought having some pointers upfront might ease development for other people.

Before I go into deeper details, here is a preview of how the debugger looks with React Native in VS Code.

<figure class="wp-block-image size-large is-resized"><figcaption>Debugger In Action</figcaption></figure>

So let's get to the meat of it: how to get it to work. First and foremost, we need to download and install React Native Tools from the VS Code extensions marketplace.

Once that is done, we are good to go. On the sidebar you can see a Debug button; when you click on it, VS Code opens up a configuration file.

<figure class="wp-block-image size-large is-resized"><figcaption>launch.json</figcaption></figure>

There are various options you can play around with, but the one that worked best for me was Attach to packager. Once the configuration is in, we need to start the packager; mostly this is done with npm start, but VS Code also provides an option in the action bar at the bottom.
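For reference, an Attach to packager entry in launch.json looks roughly like the following; treat the exact fields as a sketch, since they depend on the version of the React Native Tools extension:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to packager",
      "type": "reactnative",
      "request": "attach",
      "cwd": "${workspaceFolder}"
    }
  ]
}
```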

<figure class="wp-block-image size-large is-resized"><figcaption>Action Bar</figcaption></figure>

Once the packager is started, we need to start our debugger: click the Debug button on the sidebar and click the Attach to packager icon. This will start your debugger from VS Code. At this point your action bar should turn blue, showing that the debugger has started but is not active yet.

Then, on the device where you are deploying the application, you need to enable Debug JS, and voilà! Your debugger will be active. The debugger has helped me a lot: I can inspect each variable and see what values it holds at any point in time.

Or I can step through in the debugger and trace the flow of control. I can even put a conditional breakpoint and see when a condition is met.
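A conditional breakpoint is essentially the hand-rolled check below, except the condition lives in the debugger UI instead of your source. The loop and the "buggy" index here are invented purely for illustration:

```javascript
// A conditional breakpoint, expressed by hand: pause only when the
// condition of interest becomes true.
let firstFailure = null;
for (let i = 0; i < 100; i++) {
  const ok = i !== 42; // pretend item 42 is the one that misbehaves
  if (!ok) {
    debugger; // a no-op unless a debugger is attached
    if (firstFailure === null) firstFailure = i;
  }
}
console.log('first failing index:', firstFailure); // prints "first failing index: 42"
```

With a real conditional breakpoint you get the same effect (stop only at i === 42) without touching the source at all.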

The debugger has helped me a lot to make my development go faster; I hope this blog helps the readers too.

Happy Hacking!

Fedora Classroom Session: Git 101 with Pagure

Posted by Fedora Magazine on July 15, 2020 08:00 AM

The Fedora Classroom is a project that spreads knowledge on subjects related to Fedora. If you would like to propose a session, feel free to open a ticket here with the tag classroom. If you're interested in taking a proposed session, kindly let us know; once you take it, you will be awarded the Sensei Badge as a token of appreciation. Recordings from previous sessions can be found here.

We’re back with another awesome classroom on Git 101 with Pagure led by Akashdeep Dhar (t0xic0der).

About the session

In short, the Git 101 with Pagure session will be a guide for newcomers on how to get started with Git using Pagure, the git forge used by the Fedora community. After finishing the session you will have the knowledge to work with Git and Pagure and make your first contributions to the Fedora Project.

When and where

The Classroom session will be held on July 17th at 17:00 UTC. Here's a link to see what time that is in your timezone. The session will be streamed on the Fedora Project's YouTube channel.

Topics covered in the session

  • Version Control Systems
  • Why Git?
  • VCS Hosting Sites
  • Fedora Pagure
  • Exploring Pagure
  • Git Fundamentals

About the instructor

Akashdeep Dhar is a cybersecurity enthusiast with keen interests in networking, cloud computing, and operating systems. He is currently in the final year of his bachelor's degree, majoring in computer science with a minor in cybersecurity. He has over five years of experience using GNU/Linux systems and is new to the Fedora community, with contributions made so far in infrastructure, classroom, and documentation.

If you miss the session, the recording will also be uploaded to the Fedora Project's YouTube channel.

We hope you can attend and enjoy this experience with some of the awesome people who work on the Fedora Project. We look forward to seeing you in the Classroom session.

The photograph used in the feature image is San Simeon School House by Anita Ritenour, CC-BY 2.0.


Posted by Colin Walters on July 14, 2020 10:00 PM

There’s been a lot of discussion on this proposed Fedora change for Workstation to use BTRFS.

First off, some background: I reprovision my workstation about every 2-3 months to avoid it becoming too much of a "pet". I took the opportunity of this reprovisioning to try out BTRFS again (it'd been years).

Executive summary

BTRFS should be an option, even an emphasized one. It probably shouldn’t be the default for Workstation, and shouldn’t be a default beyond that for server use cases (e.g. Fedora CoreOS).

Why are there multiple Linux filesystems?

There are multiple filesystems in the Linux kernel for good reasons. It’s basically impossible to optimize for all use cases at once, and there are fundamental tradeoffs to make. BTRFS in particular has a lot of features…and those features have costs. Not every use case needs those features, and the costs can be close to prohibitive for things like databases.

BTRFS is good for "pet" systems

There is this terminology in the industry of pets vs cattle; I once saw a talk that proposed "elephants vs ants" instead, which is more appealing. Lately I tend to use "disposable" or "reprovisionable" for the second term.

I mentioned above I reprovision my workstation periodically, but it’s still somewhat of a "pet". I don’t have everything in config management yet (and probably never will); I change things often enough that it’s hard to commit to 100% discipline to record every change in git instead of just running a CLI or writing a file. But I have all the important stuff. (And I take backups of data separately of course.)

For people who don't have much in configuration management, such as the server or desktop system with years of individually built-up changes (whether from people doing things manually over ssh or interactively via a GUI like Cockpit), being able to take a filesystem snapshot of things is an extremely compelling feature.

Another great BTRFS-style use case is storing data like your photos on local drives instead of uploading them to the cloud, etc.

The BTRFS cost

Those features, though, come at a cost. And this comes back to the "pets" vs "disposable" systems distinction, and where the "source of truth" is. For users managing disposable systems, the source of truth isn't the Unix filesystem; it's most likely a form of GitOps. Or take the case of Kubernetes: it's a cluster, with the primary source being etcd.

And of course people are using storage systems like PostgreSQL or Ceph for data, or an object storage system.

The important thing to see here is that in these cases, the "source of truth" isn’t a single computer (a single Unix filesystem) – it’s a distributed cluster.

For all these databases, performance is absolutely critical. They don’t need the underlying filesystem to do much other than pass through writes to disk, because they are already managing things like duplication/checksumming/consistency at a higher level.

As most BTRFS users know (or have discovered the hard way), you really need to use nodatacow for these, effectively "turning off" a lot of BTRFS features.

Another example is virtual machine images, which is an interesting one because the "pet" vs "disposable" discussion becomes recursive: is the VM a pet or disposable, etc.

Not worth the cost for reprovisionable systems

For people who manage "reprovisionable" systems, there's usually not much value in using BTRFS for things like operating system data or /etc (they can just blow it away and reprovision), and there's a clear cost: they need to either use nodatacow on the things that do matter (losing a lot of the BTRFS features for that data) or explicitly use e.g. xfs/ext4 for them, going back to a world of managing "mixed" storage.

In particular, I would strongly argue against defaulting to BTRFS for Fedora CoreOS because we are explicitly pushing people away from creating these types of "pet" systems.

To say this another way, I’ve seen some Internet discussion about this read the proposed change as applying beyond Fedora Workstation, and that’s wrong.

But if you e.g. want to use BTRFS anyway for Fedora CoreOS (perhaps using a separate subvolume for /var, where persistent container data is stored, mounted with nodatacow for things like etcd), that could make sense! We are quite close to finishing root filesystem reprovisioning in Ignition.
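On a traditional fstab-managed system, such a split could look roughly like this; the UUID and subvolume name below are placeholders, and on Fedora CoreOS the equivalent would instead be expressed through Ignition:

```
# Hypothetical fstab entry: a btrfs /var subvolume mounted without data CoW
UUID=<filesystem-uuid>  /var  btrfs  subvol=var,nodatacow  0 0
```

Note that nodatacow applies to the whole mount and also disables data checksumming for the files under it.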

But a great option if you know you want/need it!

As I mentioned above, my workstation (FWIW, a customized Silverblue-style system) seems like a nearly ideal use case for BTRFS. I'm not alone in that! I'm likely going to roll with it for a few months, until the next reprovisioning time, unless I hit some stumbling blocks.

However, I am already noticing the Firefox UI periodically lock up for seconds at a time, which wasn’t happening before. Since I happen to know Firefox uses SQLite (which like the other databases mentioned above, conflicts with btrfs), I tried this and yep:

walters@toolbox> find ~/.mozilla/ -type f -exec filefrag {} \; | grep -Ee '[0-9][0-9][0-9]+ extents found'
firefox/xxxx.default-release/storage/.../xxxx.sqlite: 1825 extents found

And that's only a few days old! (I didn't definitively tie the UI lockups to that, but I wouldn't be surprised. I'd also hope Firefox isn't writing to the database on the main thread, but I'm sure it's hard for the UI to avoid blocking on some queries.)

I just found this Stack Overflow post with some useful tips on manually or automatically defragmenting, but it's really difficult to say that all Fedora/Firefox users should need to discover this and make the difficult choice between BTRFS features and performance for individual files after the fact. Firefox upstream probably can't unilaterally set the nodatacow option on their databases, because some users might reasonably want consistent snapshots of their home directory. A lot of others, though, might use a separate backup system (or Firefox Sync) and much prefer performance, because they can restore their browser state like bookmarks/history from backup if need be.

Random other aside: sqlite performance and f2fs

In a tangentially related "Linux filesystems are optimized for different things" thread, the f2fs filesystem mostly used by Android (AFAIK) has special APIs designed specifically for SQLite, because SQLite is so important to Android.


All Fedora variants are generic to a degree; I don’t think there will ever be just one Linux filesystem that’s the only sane choice. It makes total sense to have BTRFS as a prominent option for people creating desktops (and laptops and to a lesser degree servers).

The default, however, is an extremely consequential decision. It implies many years of dealing with the choice in later bug reports, etc. It really requires a true commitment to that choice for the long term.

I'm not sure it makes sense to push even Linux workstation users towards a system that's more "pet" oriented by default. How people create disposable systems (particularly workstations) is a complex topic with a lot of tradeoffs; I'd love for the Fedora community to have more blog entries about this in the Magazine. One of those solutions might be using a BTRFS root and using send/receive to a USB drive for backups!

But others would be about the things I and others do to manage "disposable" systems: keeping data in /home in git, using image systems like rpm-ostree for the base OS to replicate well-known state instead of letting the package database become a "pet", storing development environments as container images, etc. Those work on any Unix filesystem without imposing any runtime cost. And that's what I think most people provisioning new systems in 2020 should be doing.

Introducing pyage-rust, a Python module for age encryption

Posted by Kushal Das on July 14, 2020 10:09 AM

age is a simple, modern, and secure file encryption tool; it was designed by @Benjojo12 and @FiloSottile.

An alternative interoperable Rust implementation is available at github.com/str4d/rage

pyage-rust is a Python module for age, built on top of that Rust crate. I am not a cryptographer, and I prefer to leave this important work to the specialists :)

pyage-rust demo


I have prebuilt wheels for both Linux and Mac, for Python 3.7 and Python 3.8.

python3 -m pip install pyage-rust==0.1.0

Please have a look at the API documentation to learn how to use the API.

Building pyage-rust

You will need the nightly rust toolchain (via https://rustup.rs).

python3 -m venv .venv
source .venv/bin/activate
python3 -m pip install -r requirements-dev.txt
maturin develop && maturin build

Missing items in the current implementation

I have yet to add ways to use ssh keys or the alias feature, but those will come gradually in future releases.

My Outreachy Internship: The journey so far…

Posted by Fedora Community Blog on July 14, 2020 08:00 AM

Progress: For the past several weeks I've been working on migrating Fedora Badges to Badgr. I have completed the following tasks so far: wrote an SDK for communicating with badgr-server; tests for the SDK; scripts to add, issue, and revoke badges; OpenShift templates for deployment. I'm currently working on adding FAS authentication to badgr-server as well […]

The post My Outreachy Internship: The journey so far… appeared first on Fedora Community Blog.

Council policy proposal: Process for promoting Fedora deliverable to Edition

Posted by Fedora Community Blog on July 13, 2020 04:37 PM

With several Fedora deliverables ready (or nearly ready) to be promoted to Edition status, we need a policy for how this will work. After consulting with representatives from QA, Release Engineering, and Fedora IoT, I drafted a proposed process. The Council will begin voting on Tuesday 28 July in accordance with the policy change policy. […]

The post Council policy proposal: Process for promoting Fedora deliverable to Edition appeared first on Fedora Community Blog.

Home Network setup for OpenShift

Posted by Adam Young on July 13, 2020 03:45 PM

Here is how I currently have my machines connected. Posted here for documentation, and to get it straight in my own head.

<figure class="wp-block-image"><figcaption>State of my lab Network On July 12, 2020</figcaption></figure>

Getting the NUC (named Nuzleaf) seems to have been the critical factor in making progress. It can host enough VMs to run an OpenShift control plane, and it has wireless, which means the setup is no longer dependent on a physical wire for external connectivity. I have been able to migrate most of the functions I need onto the base OS of the NUC, as opposed to running them in VMs. Here's a list of its capabilities right now:

  • FreeIPA for DNS and Certificates
  • DHCP
  • TFTP
  • Generic Web Hosting for ISOs, Kickstarts, and ignition files.
  • PXE Provisioning (The three previous features combined)
  • Container Registry Mirroring (run in a container)
  • Virtual Machine Hosting
  • Routing between Different Physical and Virtual Networks

The NUC has one built-in physical Ethernet port (Fedora labels this enp3s0). This was originally used only for communication with the outside world; I've switched that function over to the wireless network. However, I plan on using it for internal (laptop-to-NUC) communication, which I am calling the DMZ for the moment. I got a USB Ethernet adapter for traffic to the three rack-mounted Dell R610 servers.

I've allocated a Class C from the 192.168.X.X range to each of the networks on which Nuzleaf serves DHCP. Two of these are from DHCPD, and one is from the libvirt-managed dnsmasq server. These are:

DNS Subdomain | IPv4 Subnet

Why do they get different subdomains? Because some of them, maybe most of them, are going to be put on multiple networks, and need to be distinguishable based on their IP address. For example, Nuzleaf is going to be on all of these networks, and needs to operate as the IDM server for machines on all of them. When Nuzleaf was initially set up, I put the following into the /etc/hosts file on it: nuzleaf.home.younglogic.net

This is allocated from DHCP on the WiFi network. However, that IP address is not accessible to machines not on that network.

It might end up that only the IDM server needs multiple Identities, but I suspect that the HAProxy node will as well. If I can collapse this down to a single subdomain, I will.

EDIT: OK, so even this IDM setup is making it hard, if not impossible, to register clients. I redid the IDM server so that instead of using the IP address from the wireless router, it uses the IP address for the default network from the dnsmasq instance associated with libvirt.

I also created a firewalld zone called libvirt. It looks like this:

# firewall-cmd  --list-all --zone=libvirt
libvirt (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: virbr0 virbr1 virbr2 virbr3
  services: dhcp dhcpv6 dns freeipa-ldap freeipa-ldaps http kerberos kpasswd ldap ssh tftp
  protocols: icmp ipv6-icmp
  masquerade: no
  rich rules: 
	rule priority="32767" reject

Notice that this has all the internal virtual bridges for VM traffic. I might need to move the rack network onto this firewall zone as well.

The lesson learned here is that IDM should have a “good” name associated with the Subnet that it is managing.

Note that if I want to keep an entry for Nuzleaf in my laptop's /etc/hosts, I can do so. However, the routing on Nuzleaf seems to make it such that I can connect to all hosts from my laptop via all networks except "default", which is NATted.

Automating Network Devices with Ansible

Posted by Fedora Magazine on July 13, 2020 03:31 PM

Ansible is a great automation tool for system and network engineers; with Ansible we can automate anything from a small network to a large-scale enterprise network. I have been using Ansible to automate both Aruba and Cisco switches from my Fedora-powered laptops for a couple of years. This article covers the requirements and walks through executing a couple of playbooks.

Configuring Ansible

If Ansible is not installed, it can be installed using the command below:

$ sudo dnf -y install ansible

Once installed, create a folder in your home directory (or a directory of your preference) and copy the Ansible configuration file into it. For this demonstration, I will be using the following:

$ mkdir -pv /home/$USER/network_automation
$ sudo cp -v /etc/ansible/ansible.cfg /home/$USER/network_automation
$ cd /home/$USER/network_automation
$ sudo chown $USER:$USER ansible.cfg && chmod 0600 ansible.cfg

To prevent lengthy commands from timing out, edit ansible.cfg and append the following lines, which set the persistent-connection and command timeouts (in seconds). A use case where this is useful is performing backups of a network device that has a lengthy configuration.

$ vim ansible.cfg

[persistent_connection]
command_timeout = 300
connect_timeout = 30


If SELinux is enabled, you will need to install the SELinux bindings, which are required when using the copy module.

# Install SELinux bindings
dnf -y install python3-libselinux python3-libsemanage

Creating the inventory

The inventory holds the names of the network assets; assets are grouped using square brackets []. Below is a sample inventory.

[site_a]
Core_A ansible_host=
Distro_A ansible_host=
Distro_B ansible_host=

Group vars can be used to set common variables, for example credentials, the network operating system, and so on. The Ansible documentation on inventories provides additional details.
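As an illustrative sketch, shared connection details for a group can live in a :vars section. The values below are hypothetical, and real credentials belong in Vault, as discussed later in this article:

```ini
[site_a]
Core_A
Distro_A
Distro_B

[site_a:vars]
ansible_network_os=ios
ansible_connection=network_cli
ansible_user=admin
```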


Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Ansible Playbook

Read Operations

Let us create a simple playbook that runs a show command to read the configuration on a few switches.

  1 ---
  2 - name: Basic Playbook
  3   hosts: site_a
  4   connection: local
  6   tasks:
  7   - name: Get Interface Brief
  8     ios_command:
  9       commands:
 10         - show ip interface brief | e una
 11     register: interfaces
 13   - name: Print results
 14     debug:
 15       msg: "{{ interfaces.stdout[0] }}"
<figure>Without Debug</figure>

<figure>With Debug</figure>

The above images show the differences without and with the debug module, respectively.

Let’s break the playbook into three blocks, starting with lines 1 to 4.

  • The three dashes/hyphens start the YAML document
  • hosts defines the host or host groups; multiple groups are comma-separated
  • connection defines the method used to connect to the network devices. Another option is network_cli (the recommended method), which will be used later in this article. See IOS Platform Options for more details.

Lines 6 to 11 start the tasks; we will be using ios_command and ios_config. This play executes the show command show ip interface brief | e una and saves the output from the command into the interfaces variable, with the register key.

Lines 13 to 15: by default, when you execute a show command you will not see the output. While this is not used during automation, it is very useful for debugging; therefore, the debug module was used.

The video below shows the execution of the playbook. There are a couple of ways you can execute it:

  • Passing arguments on the command line; for example, include -u <username> -k to prompt for the remote user credentials
ansible-playbook -i inventory show_demo.yaml -u admin -k
  • Including the credentials in the host or group vars
ansible-playbook -i inventory show_demo.yaml

Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections; Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with Vault; see Using Vault in Playbooks.

<figure class="wp-block-video"><video controls="controls" src="https://fedoramagazine.org/wp-content/uploads/2020/06/Screencast-from-25-06-20-100214.webm"></video><figcaption>Passing arguments to the command line</figcaption></figure> <figure class="wp-block-video"><video controls="controls" src="https://fedoramagazine.org/wp-content/uploads/2020/06/Screencast-from-25-06-20-100811.webm"></video><figcaption>Credentials in the inventory</figcaption></figure>

If we want to save the output to a file, we use the copy module, as shown in the playbook below. In addition to using the copy module, we include the backup_dir variable to specify the directory path.

- name: Get System Information
  hosts: site_a
  connection: network_cli
  gather_facts: no
  vars:
    backup_dir: /home/eramirez/dev/ansible/fedora_magazine
  tasks:
  - name: get system interfaces
    ios_command:
      commands:
        - show ip int br | e una
    register: interface
  - name: Save result to disk
    copy:
      content: "{{ interface.stdout[0] }}"
      dest: "{{ backup_dir }}/{{ inventory_hostname }}.txt"

To demonstrate the use of variables in the inventory, we will use plain text. This method must not be used in production.

[site_a]
Core_A ansible_host=
Distro_A ansible_host=
Distro_B ansible_host=
<figure class="wp-block-video"><video controls="controls" src="https://fedoramagazine.org/wp-content/uploads/2020/06/Screencast-from-24-06-20-234555.webm"></video></figure>

Write Operations

In the previous section, we saw that we can get information from the network devices; in this section, we will write (add/modify) configuration on these network devices. To make changes to a network device, we will be using the ios_config module.

Let us create a playbook to configure a couple of interfaces on all of the network devices in site_a. We will first take a backup of the current configuration of all devices in site_a. Lastly, we will save the configuration.

- name: Configure interfaces
  hosts: site_a
  connection: network_cli
  gather_facts: no
  vars:
    backup_dir: /home/eramirez/dev/ansible/fedora_magazine
  tasks:
  - name: Backup configs
    ios_config:
      backup: yes
      backup_options:
        filename: "{{ inventory_hostname }}_running_cfg.txt"
        dir_path: "{{ backup_dir }}"
  - name: Configure access interfaces
    ios_config:
      lines:
        - description Raspberry Pi
        - switchport mode access
        - switchport access vlan 100
        - spanning-tree portfast
        - logging event link-status
        - no shutdown
      parents: "{{ item }}"
    loop:
      - interface FastEthernet1/12
      - interface FastEthernet1/13
  - name: Save switch configuration
    ios_config:
      save_when: modified

Before we execute the playbook, we will first validate the interface configuration. We will then run the playbook and confirm the changes, as illustrated below.

<figure class="wp-block-video"><video controls="controls" src="https://fedoramagazine.org/wp-content/uploads/2020/06/Screencast-from-25-06-20-113943.webm"></video></figure>


This article is a basic introduction to whet your appetite, demonstrating how Ansible can be used to manage network devices. Ansible is capable of automating a vast network, including MPLS routing, and of performing validation before executing the next task.

Episode 205 – The State of Open Source Security with Alyssa Miller from Snyk

Posted by Josh Bressers on July 13, 2020 12:03 AM

Josh and Kurt talk to Alyssa Miller from Snyk about the State of Open Source Security 2020 report. Alyssa was the report author and has some great insight into the current trends we're seeing in open source security and some of the challenges developers face. We also discuss the difficulties static and composition analysis scanners face. It's a great conversation!

<audio class="wp-audio-shortcode" controls="controls" id="audio-1809-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_205_The_State_of_Open_Source_Security_with_Alyssa_Miller_from_Snyk.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_205_The_State_of_Open_Source_Security_with_Alyssa_Miller_from_Snyk.mp3</audio>

Show Notes

Minor service disruption

Posted by Fedora Infrastructure Status on July 12, 2020 08:14 PM
Service 'Fedora Packages App' now has status: minor: down due to app replacement

virt-manager libvirt XML editor UI

Posted by Cole Robinson on July 12, 2020 04:00 AM

virt-manager 2.2.0 was released in June of last year. It shipped with a major new feature: a libvirt XML viewing and editing UI for new and existing domains, pools, volumes, and networks.

Every VM, network, and storage object page has an XML tab at the top. Here's an example with that tab selected in the VM Overview section:

VM XML editor

Here's an example of the XML view when just a disk is selected. Note it only shows that single device's libvirt XML:

Disk XML editor

By default the XML is not editable; notice the warning at the top of the first image. After editing is enabled, the warning is gone, as in the second image. You can enable editing via Edit->Preferences from the main Manager window. Here's what the option looks like:

XML edit preference

A bit of background: we constantly receive requests to expose libvirt XML config options in virt-manager's UI. Some of these knobs are necessary for fewer than 1% of users but uninteresting to the rest. Some options are difficult to set from the command line because they must be set at VM install time, which means switching from virt-manager to virt-install, which is not trivial. And so on. When these options aren't added to the UI, it makes life difficult for the affected users. It's also difficult and draining to have these types of justification conversations regularly.

The XML editing UI was added to relieve some of the pressure on virt-manager developers fielding these requests, and to give more power to advanced virt users. The users that know they need an advanced option are usually comfortable editing the libvirt XML directly. The XML editor doesn't detract from the existing UI much IMO, and it is uneditable by default to prevent less knowledgeable users from getting into trouble. It ain't gonna win any awards for great UI, but the feedback has been largely positive so far.

Testing Netlify server-side analytics

Posted by Josef Strzibny on July 12, 2020 12:00 AM

I used server-side analytics for quite some time on this blog. With my recent blog migration I switched to client-side analytics. I am now test-driving Netlify server-side analytics alongside the client-side Fathom Analytics. This is what I have found.

My blog migration

When I stopped using WordPress and moved my blog over to Jekyll, I also made the move to host the site on Netlify. Netlify is great for two reasons: a great UX that onboards you in seconds, and a super generous free plan.

Using Netlify was a no-brainer, but I did lose one thing by moving my blog: my server-side analytics. To combat this loss, I researched my options.

My main criteria were:

  • privacy-friendliness towards my readers, and
  • a reasonable price.

Since Netlify charges $9/month per site, I looked elsewhere. I have a lot of small sites that do not warrant such a price, but could use a very simple visitor counter.

In the end, I gave my money to Fathom Analytics, because if I settled on them long term, the yearly pricing made the most sense to me at the time. I summarized my research in a post about alternatives to Google Analytics.

Server-side analytics

All of these solutions, Fathom included, have one small problem: they are strictly client-side.

What does that mean?

Client-side analytics require including a JavaScript snippet that might not track every single visit: the tracker can get blocked by a browser or browser extension, or the user may have turned off JavaScript altogether. On the other hand, it's easy to include on sites where you don't control the backend (such as GitHub Pages, Netlify, etc.).

Server-side analytics report data from server logs, so they never miss a single hit. Not having to load a client-side script also gives your sites better performance. In Netlify's case specifically, they also don't use sampling and they respect the GDPR.

Does that sound like a win? Why even bother?

The thing for me was that I suddenly saw a smaller number of visitors on my blog. It could have been the WordPress to Jekyll migration, or it could have been the switch to Fathom for analytics. I had no way to know. To find out, I decided to give the native Netlify analytics a go.

My last month with Fathom looked like this:


Not bad, but I remember having quite a few more visitors. Now, it's also important to say that you might get some false positives from aggregators and bots once you count every single hit, so I am not claiming that all the traffic before was completely legit.

Netlify Analytics

Nevertheless, I was super interested in how Netlify compares and whether my blog engine migration hurt me.

The nice thing about Netlify is that you can access your site analytics in a mere click, and on top of that you get to see the past month's numbers immediately (so I did not have to wait a month to gather the data for this experiment).

Here is the main overview:


As you can see, the numbers are way higher: 11272 unique visitors and 57561 real hits. Again, the hit count gathers everything, including many bots and aggregators, but the unique visitors number is also way bigger, and I believe quite accurate.

Here is a distribution graph:


And here are the top performing pages:


Top pages is one of the most important metrics for me. If people like certain articles, I know I should focus more on those topics.

As you can see, it's quite funny that according to Fathom Analytics my most popular articles are the Linux ones, followed by the Ruby ones. But on the Netlify side, it's the privacy-oriented content and Elixir that take the lead.

I will blatantly say that Fathom is most likely misreporting here. While it shows articles that are consistently popular throughout the year, fresh articles always do better that month, especially if they are linked from newsletters or doing great on Reddit.

I feel like I should monitor the differences for a longer period of time. I understand the visitors vs. hits disparity, but I have yet to wrap my head around such a difference in the most popular posts…

More than analytics

One thing I like about Netlify is its accurate reporting of 404 pages. If you don't have your own 404 page and fall back to Netlify's, Fathom would never register that people were looking for something. And even with a custom redirect to 404.html, it just shows up as “404.html”.

Not all that helpful. Look at Netlify:


Because of this better reporting of not-found pages, I was able to recover a very popular article that got lost in my migration! With just Fathom in place, I wouldn't have known. I would have lost nice organic traffic to one of my regularly visited articles without realizing it.

So will I switch to Netlify for my analytics needs? I believe that my blog deserves it, but on the other hand, I have no way of justifying $9/month for every site I have. Do your own testing and due diligence, then let me know; I am interested in what you think.

Fedora 32 : Upgrade from Fedora 31 to the new Fedora 32.

Posted by mythcat on July 11, 2020 07:21 PM
Today I managed to upgrade to Fedora 32.
After some time I started the system and used the tutorial from here.
I also used the following Linux commands:
#sudo dnf distro-sync --allowerasing
#dnf reinstall xorg-\* mesa\* libglvnd\*
In the following image, you can see the new upgrade works very well with my Cinnamon desktop environment.

Customize the information used for 1-click bug reports

Posted by Kiwi TCMS on July 11, 2020 11:20 AM

Kiwi TCMS integration with 3rd party bug trackers supports the 1-click bug report feature. However, you may want to change how the initial information is structured or even what exactly is written in the initial comment. This article shows how to do this.

The default text used for 1-click bug reports gets compiled from information present in the TestExecution: Product, Version, TestCase.text, etc. This is encapsulated in the tcms.issuetracker.base.IssueTrackerType._report_comment() method. You may extend the existing bug tracker integration code with your own customizations. In this example I've extended the KiwiTCMS bug tracker implementation, but you can provide your own from scratch.

# filename: mymodule.py
class ExtendedBugTracker(KiwiTCMS):
    def _report_comment(self, execution):
        comment = super()._report_comment(execution)

        comment += "----- ADDITIONAL INFORMATION -----\n\n"
        # fetch more info from other sources
        comment += "----- END ADDITIONAL INFORMATION -----\n"
        return comment

Then override the EXTERNAL_BUG_TRACKERS setting to include your customizations:
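The settings snippet itself is missing from this copy of the article. A minimal sketch of what the override could look like, assuming the built-in tracker classes live under `tcms.issuetracker.types` (the exact default entries may differ between Kiwi TCMS versions):

```python
# Sketch of a settings override; place it wherever your deployment loads
# extra Django settings from. Each entry is a dotted class path that
# Kiwi TCMS imports when resolving bug tracker integrations.
EXTERNAL_BUG_TRACKERS = [
    "tcms.issuetracker.types.KiwiTCMS",   # keep the built-in integrations
    "tcms.issuetracker.types.Bugzilla",
    "mymodule.ExtendedBugTracker",        # the customization shown above
]
```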


and change the bug tracker type, via https://tcms.example.com/admin/testcases/bugsystem/, to mymodule.ExtendedBugTracker.


  • Information on how to change settings can be found here
  • mymodule.py may live anywhere on the filesystem but Python must be able to import it
  • It is best to bundle all of your customizations into a Python package and pip3 install it into your customized docker image
  • API documentation for bug tracker integration can be found here
  • Rebuilding the docker image is outside the scope of this article. Have a look at this Dockerfile for inspiration

Happy testing!

virt-convert tool removed in virt-manager.git

Posted by Cole Robinson on July 11, 2020 04:00 AM

The next release of virt-manager will not ship the virt-convert tool, I removed it upstream with this commit.

Here's the slightly edited quote from my original proposal to remove it:

virt-convert takes an ovf/ova or vmx file and spits out libvirt XML. It started as a code drop a long time ago that could translate back and forth between vmx, ovf, and virt-image, a long dead appliance format. In 2014 I changed virt-convert to do vmx -> libvirt and ovf -> libvirt which was a CLI breaking change, but I never heard a peep of a complaint. It doesn't do a particularly thorough job at its intended goal, I've seen 2-3 bug reports in the past 5 years and generally it doesn't seem to have any users. Let's kill it. If anyone has the desire to keep it alive it could live as a separate project that's a wrapper around virt-install but there's no compelling reason to keep it in virt-manager.git IMO

That mostly sums it up. If there are any users of virt-convert out there, you can likely get similar results by extracting the relevant disk image from the .vmx or .ovf config, passing it to virt-manager or virt-install, and letting those tools fill in the defaults. In truth, that's about all virt-convert did to begin with.

Fedora program update: 2020-28

Posted by Fedora Community Blog on July 10, 2020 09:00 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora this week. The Nest With Fedora Call for Participation is now open. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. Announcements CfPs Help wanted Upcoming meetings Releases CPE update Announcements […]

The post Fedora program update: 2020-28 appeared first on Fedora Community Blog.

Another try at a new Python module for OpenPGP aka johnnycanencrypt

Posted by Kushal Das on July 10, 2020 06:40 PM

Using OpenPGP from Python is a pain. There are various documentation/notes on the Internet explaining why, including the famous one from isis agora lovecruft where they explained why they changed the module name to pretty_bad_protocol.

sequoia-pgp is a Rust project implementing OpenPGP from scratch, with a library-first approach. You can check the status page to see how much work is already done.

Using this and the PyO3 project, I started writing an experimental Python module for OpenPGP called Johnny Can Encrypt.

I just did a release of 0.1.0. Here is some example code.

>>> import johnnycanencrypt as jce
>>> j = jce.Johnny("secret.asc")
>>> data = j.encrypt_bytes("kushal 🐍".encode("utf-8"))
>>> print(data)


>>> result = j.decrypt_bytes(data.encode("utf-8"), "mysecretpassword")
>>> print(result.decode("utf-8"))
kushal 🐍

The readme on the project page has build instructions and more details about the available API calls. We can create a new keypair (RSA 4096). We can encrypt/decrypt bytes and files. We can also sign/verify bytes/files. The code does not have many checks for error handling; this is a super early stage.

You will need nettle (on Fedora) or libnettle (on Debian), along with the related development packages, to build it successfully.

I published wheels for Debian Buster (Python3.7), and Fedora 32 (Python3.8).

Issues in providing better wheels for pip install

The wheels are linked against the system-provided nettle library, and every distribution ships a different version. This means that even if I build a Python 3.7 wheel on Debian, it will not work on Fedora. I hope to find a better solution to this in the coming days.

As I said earlier in this post, this is just the start of the project. It will take time to mature for production use. And because of Sequoia, we will have better defaults for cipher/hash options.

Kiwi TCMS 8.5

Posted by Kiwi TCMS on July 10, 2020 10:45 AM

We're happy to announce Kiwi TCMS version 8.5!

IMPORTANT: this is a medium-sized release which includes many improvements, database migrations, translation updates and new tests. It is the third release to include contributions via our open source bounty program. You can explore everything at https://public.tenant.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

Docker images:

kiwitcms/kiwi       latest  4379e2438e43    636 MB
kiwitcms/kiwi       6.2     7870085ad415    957 MB
kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955 MB
kiwitcms/kiwi       6.1     b559123d25b0    970 MB
kiwitcms/kiwi       6.0.1   87b24d94197d    970 MB
kiwitcms/kiwi       5.3.1   a420465852be    976 MB

Changes since Kiwi TCMS 8.4


  • Update django from 3.0.7 to 3.0.8
  • Update django-colorfield from 0.3.0 to 0.3.2
  • Update django-modern-rpc from 0.12.0 to 0.12.1
  • Update django-simple-history from 2.10.0 to 2.11.0
  • Update mysqlclient from 1.4.6 to 2.0.1
  • Update python-gitlab from 2.2.0 to 2.4.0
  • Update python-bugzilla from 2.3.0 to 2.5.0
  • Add middleware to warn for unapplied migrations. Fixes Issue #1696 (Bryan Mutai)
  • Add "insert table" button to SimpleMDE toolbar. References Issue #1531 (Bryan Mutai)
  • Implement kiwitcms-django-plugin. Resolves Issue #693 (Bryan Mutai)
  • Add missing permission check for TestExecution.add_link() API method (Rosen Sasov)
  • Add missing permission check for TestExecution.remove_link() API method (Rosen Sasov)
  • Admin interface will now appear translated
  • Propagate server side API errors to the browser. Closes Issue #625, Issue #1333
  • Improvements for Status Matrix telemetry page:
    • Make the horizontal scroll bar at the bottom always visible
    • Make the header row always visible
    • Add button to show columns in reverse. Fixes Issue #1682
    • Make it possible to display TestExecutions from child TestPlans. Fixes Issue #1683


  • Update existing Bug tracker records to match the changes introduced with the new EXTERNAL_BUG_TRACKERS setting


  • Add EXTERNAL_BUG_TRACKERS setting which is a list of dotted class paths representing external bug tracker integrations. Plugins and Kiwi TCMS admins can now more easily include customized integrations

Refactoring & testing

  • Add new linter to check for label arguments in form field classes. Fixes Issue #738 (Bryan Mutai)
  • Add new linter to check if all forms inherit from ModelForm. Fixes Issue #1384 (Bryan Mutai)
  • Enable pylint plugin pylint.extensions.docparams and resolve errors. Fixes Issue #1192 (Bryan Mutai)
  • Migrate 'test-for-missing-migrations' from Travis CI to GitHub workflow. Fixes Issue #1553 (Bryan Mutai)
  • Add tests for tcms.bugs.api.add_tag(). References Issue #1597 (Mfon Eti-mfon)
  • Add tests for tcms.bugs.api.remove_tag(). References Issue #1597 (Mfon Eti-mfon)
  • Add test for tcms.testplans.views.Edit. References Issue #1617 (@cmbahadir)
  • Add tests for markdown2html(). Fixes Issue #1659 (Mariyan Garvanski)
  • Add test for Cyrillic support with MariaDB. References Issue #1770

Kiwi TCMS Enterprise v8.5-mt

  • Based on Kiwi TCMS v8.5
  • Update django-ses from 0.8.14 to 1.0.1
  • Update kiwitcms-tenants from 1.1.1 to 1.2
  • Update social-auth-app-django from 3.4.0 to 4.0.0
  • Start tagging non-Enterprise images of kiwitcms/kiwi - will be provided via separate private repository for enterprise customers. See here

For more info see https://github.com/MrSenko/kiwitcms-enterprise/#v85-mt-10-july-2020

Vote for Kiwi TCMS

Our website has been nominated in the 2020 .eu Web Awards and we've promised to do everything in our power to greet future FOSDEM visitors with open source billboard advertising at BRU airport. We need your help to do that!

How to upgrade

Backup first! If you are using Kiwi TCMS as a Docker container then:

cd path/containing/docker-compose/
docker-compose down
docker-compose pull
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Refer to our documentation for more details!

Happy testing!

Use DNS over TLS

Posted by Fedora Magazine on July 10, 2020 08:00 AM

The Domain Name System (DNS) that modern computers use to find resources on the internet was designed 35 years ago without consideration for user privacy. It is exposed to security risks and attacks like DNS hijacking. It also allows ISPs to intercept queries.

Luckily, DNS over TLS and DNSSEC are available. DNS over TLS and DNSSEC allow safe and encrypted end-to-end tunnels to be created from a computer to its configured DNS servers. On Fedora, the steps to implement these technologies are easy and all the necessary tools are readily available.

This guide will demonstrate how to configure DNS over TLS on Fedora using systemd-resolved. Refer to the documentation for further information about the systemd-resolved service.

Step 1 : Set up systemd-resolved

Modify /etc/systemd/resolved.conf so that it is similar to what is shown below. Be sure to enable DNS over TLS and to configure the IP addresses of the DNS servers you want to use.

$ cat /etc/systemd/resolved.conf
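The file contents did not survive in this copy of the article. A minimal sketch of such a configuration, using well-known public resolvers purely as placeholders (the article's author used his own choices; substitute servers you trust):

```
[Resolve]
# Placeholder resolvers for illustration only
Domains=~.
DNSOverTLS=yes
DNSSEC=yes
```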

A quick note about the options:

  • DNS: A space-separated list of IPv4 and IPv6 addresses to use as system DNS servers
  • FallbackDNS: A space-separated list of IPv4 and IPv6 addresses to use as the fallback DNS servers.
  • Domains: These domains are used as search suffixes when resolving single-label host names; ~. stands for “use the system DNS servers defined with DNS= preferably for all domains”.
  • DNSOverTLS: If true, all connections to the server will be encrypted. Note that this mode requires a DNS server that supports DNS over TLS and has a valid certificate for its IP.

NOTE: The DNS servers listed in the above example are my personal choices. You should decide which DNS servers you want to use, being mindful of whom you are asking for the IPs you use for internet navigation.

Step 2 : Tell NetworkManager to push info to systemd-resolved

Create a file in /etc/NetworkManager/conf.d named 10-dns-systemd-resolved.conf.

$ cat /etc/NetworkManager/conf.d/10-dns-systemd-resolved.conf
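The file body is likewise missing here; given the dns=systemd-resolved setting the article describes, it would contain just:

```
[main]
dns=systemd-resolved
```

NetworkManager only honors the dns= key inside the [main] section of its configuration files.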

The setting shown above (dns=systemd-resolved) will cause NetworkManager to push DNS information acquired from DHCP to the systemd-resolved service. This will override the DNS settings configured in Step 1. This is fine on a trusted network, but feel free to set dns=none instead to use the DNS servers configured in /etc/systemd/resolved.conf.

Step 3 : Start and restart services

To make the settings configured in the previous steps take effect, start and enable systemd-resolved. Then restart NetworkManager.

CAUTION: This will lead to a loss of connection for a few seconds while NetworkManager is restarting.

$ sudo systemctl start systemd-resolved
$ sudo systemctl enable systemd-resolved
$ sudo systemctl restart NetworkManager

NOTE: Currently, the systemd-resolved service is disabled by default and its use is opt-in. There are plans to enable systemd-resolved by default in Fedora 33.

Step 4 : Check if everything is fine

Now you should be using DNS over TLS. Confirm this by checking DNS resolution status with:

$ resolvectl status
MulticastDNS setting: yes                 
  DNSOverTLS setting: yes                 
      DNSSEC setting: yes                 
    DNSSEC supported: yes                 
  Current DNS Server:             
         DNS Servers:             
Fallback DNS Servers:             

/etc/resolv.conf should now point to the local stub resolver that systemd-resolved provides on

$ cat /etc/resolv.conf
# Generated by NetworkManager
search lan

To see the address and port that systemd-resolved is sending and receiving secure queries on, run:

$ sudo ss -lntp | grep '\(State\|:53 \)'
State     Recv-Q    Send-Q       Local Address:Port        Peer Address:Port    Process                                                                         
LISTEN    0         4096     *        users:(("systemd-resolve",pid=10410,fd=18))

To make a secure query, run:

$ resolvectl query fedoraproject.org
fedoraproject.org:                  -- link: wlp58s0
                           -- link: wlp58s0


-- Information acquired via protocol DNS in 36.3ms.
-- Data is authenticated: yes

BONUS Step 5 : Use Wireshark to verify the configuration

First, install and run Wireshark:

$ sudo dnf install wireshark
$ sudo wireshark

It will ask you which link device it should begin capturing packets on. In my case, because I use a wireless interface, I will go ahead with wlp58s0. Set up a filter in Wireshark like tcp.port == 853 (853 is the DNS over TLS port). You need to flush the local DNS caches before you can capture a DNS query:

$ sudo resolvectl flush-caches

Now run:

$ nslookup fedoramagazine.org

You should see a TLS-encrypted exchange between your computer and your configured DNS server:

<figure class="aligncenter size-large"></figure>

Poster in Cover Image Approved for Release by NSA on 04-17-2018, FOIA Case # 83661

Nest With Fedora CfP open

Posted by Fedora Community Blog on July 10, 2020 08:00 AM

In a normal year, we’d be getting ready for my favorite event: Flock to Fedora. But as we’re all aware, this is anything but a normal year. Despite this—or perhaps because of this—we still want to bring the community together to share ideas, make plans, and form the bonds that put the Friends in Fedora. […]

The post Nest With Fedora CfP open appeared first on Fedora Community Blog.

Minor service disruption

Posted by Fedora Infrastructure Status on July 09, 2020 05:32 PM
Service 'ABRT Server' now has status: minor: down due to colo move

Major service disruption

Posted by Fedora Infrastructure Status on July 09, 2020 05:31 PM
Service 'ABRT Server' now has status: major: down due to colo move

All systems go

Posted by Fedora Infrastructure Status on July 09, 2020 05:29 PM
New status good: Everything seems to be working. for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Fedora websites, Fedora Wiki, Zodbot IRC bot

PHP version 7.2.32, 7.3.20 and 7.4.8

Posted by Remi Collet on July 09, 2020 02:37 PM

RPMs of PHP version 7.4.8 are available in remi repository for Fedora 32 and remi-php74 repository for Fedora 30-31 and Enterprise Linux  7 (RHEL, CentOS).

RPMs of PHP version 7.3.20 are available in remi repository for Fedora 30-31 and remi-php73 repository for Enterprise Linux  6 (RHEL, CentOS).

RPMs of PHP version 7.2.32 are available in remi-php72 repository for Enterprise Linux  6 (RHEL, CentOS).

No security fix this month.

PHP version 7.1 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository and as module for Fedora 30-32 and EL-8.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.4 installation (simplest):

yum-config-manager --enable remi-php74
yum update

or, the modular way (Fedora and EL 8):

dnf module enable php:remi-7.4
dnf update php\*

Parallel installation of version 7.4 as Software Collection

yum install php74

Replacement of default PHP by version 7.3 installation (simplest):

yum-config-manager --enable remi-php73
yum update php\*

or, the modular way (Fedora and EL 8):

dnf module enable php:remi-7.3
dnf update php\*

Parallel installation of version 7.3 as Software Collection

yum install php73

And soon in the official updates:

To be noticed:

  • EL-8 RPMs are built using RHEL-8.2
  • EL-7 RPMs are built using RHEL-7.8
  • EL-6 RPMs are built using RHEL-6.10
  • EL-7 builds now use libicu62 (version 62.1)
  • EL builds now use oniguruma5php (version 6.9.5, instead of the outdated system library)
  • the oci8 extension now uses Oracle Client version 19.6 (except on EL-6)
  • a lot of new extensions are also available, see the PECL extension RPM status page


Base packages (php)

Software Collections (php72 / php73 / php74)

Quarantine Slump

Posted by Madeline Peck on July 09, 2020 12:00 PM

I’ve been desperately trying to find the post that captured this more eloquently, but summed up, it asked: ‘Are you experiencing caution restlessness? Accidentally being more careless, seeing more friends even at a distance?’ It made me think: I haven’t left my house in over a month, but I have been interacting with the same neighbors semi-regularly, and have I been making sure to do so safely?

If you’re guilty of this as I am, it’s important to make sure we’re looking after ourselves. Getting adequate sleep, eating three meals a day, and implementing a simple exercise routine are the first steps. And be kind to yourself in this time :) If you want, lemme know how you are in a comment!

This week I wrapped up making a video for the July Madness Intern Video contest with the theme “school spirit”.

You can vote by liking my comment with the video in it here on the mojo page!

Last week I submitted some logo ideas to the Nest ticket, and this week I had to clean up the ones they enjoyed. I also went through some Inkscape tutorials to refresh my skills and pick up tips and tricks for anything I was doing more slowly than I needed to.

I attended Boston’s lightning talks, which got me thinking about potentially doing my own towards the end of the summer.

I’m going through the changes for the coloring book so I can send them out in an email to the reviewers by Friday, hopefully. And speaking of the reviewers:

I asked the technical reviewers to send pictures of themselves so we could put in a credit page with drawings of them, thanking them. So I have three sketches, because I have three selfies, haha.

<figure class=" sqs-block-image-figure intrinsic "> Adjustments.jpeg </figure>

I know I struggled with staying motivated this week, but someone just recommended the podcast ‘Creative Pep Talk’ to me, which I think will be a huge help to listen to for fun while working, and a reminder that you can’t wait for motivation to do work :).

Insider 2020-07: TLS; capabilities; 3.27;

Posted by Peter Czanik on July 09, 2020 11:41 AM

Dear syslog-ng users,

This is the 83rd issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.


Simplifying CA handling in syslog-ng TLS connections

When talking to users about TLS-encrypted message transfer, almost everyone immediately complains about configuring a certificate authority (CA) in syslog-ng. You needed to create a hash and create a symbolic link to the CA file based on that hash. Not anymore. While the old method is still available, there is now a much easier way: the new ca-file() option.


Working around Linux capabilities problems for syslog-ng

No, SELinux is not the cause of all permission troubles on Linux. For example, syslog-ng makes use of the capabilities system on Linux to drop as many privileges as possible, as early as possible. But this can cause problems in some corner cases: even when running as root, syslog-ng cannot read files owned by a different user. Learn from this blog how to figure out whether you have an SELinux or a capabilities problem, and how to fix it if you do.


Figuring out where a message arrived, and other syslog-ng 3.27 tricks

Version 3.27 of syslog-ng has brought many smaller, but useful features to us. The new Sumo Logic destination was already covered in an earlier blog. You can now also check exactly where a message arrived on a network source (IP address, port and protocol). Rewriting the facility of a syslog message was also made easy. For a complete list of new features and changes, check the release notes at https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-3.27.1

You can learn more about these features at https://www.syslog-ng.com/community/b/blog/posts/figuring-out-where-a-message-arrived-and-other-syslog-ng-3-27-tricks



Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/

nbdkit tar filter

Posted by Richard W.M. Jones on July 09, 2020 11:32 AM

nbdkit is our high-performance, liberally licensed Network Block Device server, and OVA files are a common pseudo-standard for exporting virtual machines, including their disk images.

A .ova file is really an uncompressed tar file:

$ tar tf rhel.ova

Since tar files usually store their content unmangled, this opens an interesting possibility for reading (or even writing) the embedded disk image without needing to unpack the tar. You just have to work out the offset of the disk image within the tar file. virt-v2v has used this trick to save a copy when importing OVAs for years.
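As an illustrative sketch (not the actual virt-v2v code), Python's standard tarfile module can report exactly that offset for an uncompressed archive:

```python
import tarfile

def member_offset(tar_path, name):
    """Return (offset, size) of a member's raw data inside an
    uncompressed tar file, so the embedded disk image can be read
    in place without unpacking the archive."""
    with tarfile.open(tar_path) as tar:
        member = tar.getmember(name)
        # offset_data points just past the member's 512-byte tar header
        return member.offset_data, member.size
```

A reader can then seek() to that offset in the .ova and read `size` bytes to access the disk image directly.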

nbdkit has also included a tar plugin which can access a file inside a local tar file, but the problem is: what if the tar file doesn't happen to be a local file (e.g. it's on a webserver)? Or what if it's compressed?

To fix this I’ve turned the plugin into a filter. Using nbdkit-tar-filter you can unpack even non-local compressed tar files:

$ nbdkit curl http://example.com/qcow2.tar.xz \
         --filter=tar --filter=xz tar-entry=disk.qcow2

(To understand how filters are stacked, see my FOSDEM talk from last year). Because in this example the disk inside the tarball is a qcow2 file, it appears as qcow2 on the wire, so:

$ guestfish --ro --format=qcow2 -a nbd://localhost

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: ‘help’ for help on commands
      ‘man’ to read the manual
      ‘quit’ to quit the shell

><fs> run
><fs> list-filesystems 
/dev/sda1: ext2
><fs> mount /dev/sda1 /
><fs> ll /
total 19
drwxr-xr-x   3 root root  1024 Jul  6 20:03 .
drwxr-xr-x  19 root root  4096 Jul  9 11:01 ..
-rw-rw-r--.  1 1000 1000    11 Jul  6 20:03 hello.txt
drwx------   2 root root 12288 Jul  6 20:03 lost+found

Fedora documentation is now multilingual

Posted by Fedora Community Blog on July 09, 2020 08:00 AM
Fedora Localization Project

The Fedora project documentation website provides a lot of end-user content. All of this content is now translatable, providing a powerful tool for our multilingual communication. Writers will continue to work as usual. The publishing tools automatically convert content and push it to the translation platform. Then, translated content is automatically published. Restoring a multilingual […]

The post Fedora documentation is now multilingual appeared first on Fedora Community Blog.

Getting Back Into Blogging

Posted by Adam Miller on July 09, 2020 04:46 AM

Getting Back Into Blogging

Something I've always told myself is that I'd start blogging again, about tech or just whatever seemed interesting to me at the time. I'm now making a commitment to myself that I will blog once a week. Not every post will be the most amazing thing anyone has ever read, and it's reasonable to think that a lot of people will simply ignore a lot of what I say, and I'm alright with that. If you're here, welcome! If not ... you don't even know this, but that's also cool.

I'll likely talk about Open Source Software, Linux, Ansible, and technology in general so if none of those things interest you then maybe this isn't the blog for you to keep tabs on. If so, grab the RSS or Atom feeds and follow along with the journey!

Until next time...

Running Rosetta@home on a Raspberry Pi with Fedora IoT

Posted by Fedora Magazine on July 08, 2020 08:00 AM

The Rosetta@home project is a not-for-profit distributed computing project created by the Baker laboratory at the University of Washington. The project uses idle compute capacity from volunteer computers to study protein structure, which is used in research into diseases such as HIV, malaria, cancer, and Alzheimer’s.

In common with many other scientific organizations, Rosetta@home is currently expending significant resources on the search for vaccines and treatments for COVID-19.

Rosetta@home uses the open source BOINC platform to manage donated compute resources. BOINC was originally developed to support the SETI@home project searching for Extraterrestrial Intelligence. These days, it is used by a number of projects in many different scientific fields. A single BOINC client can contribute compute resources to many such projects, though not all projects support all architectures.

For the example shown in this article a Raspberry Pi 3 Model B was used, which is one of the tested reference devices for Fedora IoT. This device, with only 1GB of RAM, is only just powerful enough to make a meaningful contribution to Rosetta@home, and there is certainly no way the Raspberry Pi can be used for anything else at the same time, such as running a desktop environment.

It’s also worth mentioning at this point that the first rule of Raspberry Pi computing is to get the recommended power supply. It is important to get as close to the specified 2.5A as you can, and to use a good quality micro-USB cable.

Getting Fedora IoT

To install Fedora IoT on a Raspberry Pi, the first step is to download the aarch64 Raw Image from the iot.fedoraproject.org download page.

Then use the arm-image-installer utility (sudo dnf install fedora-arm-installer) to write the image to the SD card. As always, be very sure which device name corresponds to your SD Card before continuing. Check the device with the lsblk command like this:

$ lsblk
sdb         8:16 1 59.5G  0 disk
└─sdb1      8:17 1 59.5G  0 part /run/media/gavin/154F-1CEC
nvme0n1     259:0 0 477G  0 disk
├─nvme0n1p1 259:1 0 600M  0 part

If you’re still not sure, try running lsblk with the SD card removed, then again with the SD card inserted and comparing the outputs. In this case it lists the SD card as /dev/sdb. If you’re really unsure, there are some more tips described in the Getting Started guide.
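If you want a quick mechanical check rather than eyeballing the two listings, the before/after comparison can be scripted. This is a sketch with hypothetical device names; on the real system each snapshot would come from `lsblk -dno NAME | sort`:

```shell
# Example snapshots of 'lsblk -dno NAME | sort' taken without and with
# the SD card inserted (the device names here are hypothetical).
printf 'nvme0n1\nsda\n'      > before.txt
printf 'nvme0n1\nsda\nsdb\n' > after.txt
# comm -13 prints only lines unique to after.txt: the newly appeared device.
comm -13 before.txt after.txt
```

Whatever single name appears (here, sdb) is the SD card.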

We need to tell arm-image-installer which image file to use, what type of device we’re going to be using, and the device name – determined above – to use for writing the image. The arm-image-installer utility is also able to expand the filesystem to use the entire SD card at the point of writing the image.

Since we’re not going to use the zezere provisioning server to deploy SSH keys to the Raspberry Pi, we need to specify the option to remove the root password so that we can log in and set it at first boot.

In my case, the full command was:

sudo arm-image-installer --image ~/Downloads/Fedora-IoT-32-20200603.0.aarch64.raw.xz --target=rpi3 --media=/dev/sdb --resizefs --norootpass

After a final confirmation prompt:

= Selected Image:                                 
= /var/home/gavin/Downloads/Fedora-IoT-32-20200603.0.aarc...
= Selected Media : /dev/sdb
= U-Boot Target : rpi3
= Root Password will be removed.
= Root partition will be resized
 Type 'YES' to proceed, anything else to exit now 

the image is written to the SD Card.

= Installation Complete! Insert into the rpi3 and boot.

Booting the Raspberry Pi

For the initial setup, you’ll need to attach a keyboard and monitor to the Raspberry Pi. Alternatively, you can follow the instructions for connecting with a USB-to-Serial cable.

When the Raspberry Pi boots up, just type root at the login prompt and press enter.

localhost login: root

The first task is to set a password for the root user.

[root@localhost~]# passwd
Changing password for user root.
New password: 
Retype new password:
passwd: all authentication tokens updated successfully

Verifying Network Connectivity

To verify network connectivity, follow the checklist in the Fedora IoT Getting Started guide. This system is using a wired ethernet connection, which shows up as eth0. If you need to set up a wireless connection, this can be done with nmcli.

ip addr will allow you to check that you have a valid IP address.

[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether b8:27:eb:9d:6e:13 brd ff:ff:ff:ff:ff:ff
inet brd scope global dynamic noprefixroute eth0
valid_lft 863928sec preferred_lft 863928sec
inet6 fe80::ba27:ebff:fe9d:6e13/64 scope link
valid_lft forever preferred_lft forever
3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether fe:d3:c9:dc:54:25 brd ff:ff:ff:ff:ff:ff

ip route will check that the network has a default gateway configured.

[root@localhost ~]# ip route
default via dev eth0 proto dhcp metric 100
 dev eth0 proto kernel scope link src metric 100

To verify internet access and name resolution, use ping:

[root@localhost ~]# ping -c3 iot.fedoraproject.org
PING wildcard.fedoraproject.org ( 56(84) bytes of data.
64 bytes from proxy14.fedoraproject.org ( icmp_seq=1 ttl=46 time=93.4 ms
64 bytes from proxy14.fedoraproject.org ( icmp_seq=2 ttl=46 time=90.0 ms
64 bytes from proxy14.fedoraproject.org ( icmp_seq=3 ttl=46 time=91.3 ms

--- wildcard.fedoraproject.org ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 90.043/91.573/93.377/1.374 ms

Optional: Configuring sshd so we can disconnect the keyboard and monitor

Before disconnecting the keyboard and monitor, we need to ensure that we can connect to the Raspberry Pi over the network.

First we verify that sshd is running

[root@localhost~]# systemctl is-active sshd

and that there is a firewall rule present to allow ssh.

[root@localhost ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  services: dhcpv6-client mdns ssh
  masquerade: no
  rich rules: 

In the file /etc/ssh/sshd_config, find the section named

# Authentication

and add the line

PermitRootLogin yes

There will already be a line

#PermitRootLogin prohibit-password

which you can edit by removing the # comment character and changing the value to yes.
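If you prefer to script the edit, a sed one-liner does the same substitution. The sketch below operates on a sample file rather than the real /etc/ssh/sshd_config, so adapt the path (and keep a backup) before using it for real:

```shell
# Build a small sample with the stock commented-out line, then flip it
# to 'PermitRootLogin yes' in place, just as the manual edit does.
printf '# Authentication\n#PermitRootLogin prohibit-password\n' > sshd_config.sample
sed -i 's/^#PermitRootLogin.*/PermitRootLogin yes/' sshd_config.sample
cat sshd_config.sample
```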

Restart the sshd service to pick up the change

[root@localhost ~]# systemctl restart sshd

If all this is in place, we should be able to ssh to the Raspberry Pi.

[gavin@desktop ~]$ ssh root@
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:DLdFaYbvKhB6DG2lKmJxqY2mbrbX5HDRptzWMiAUgBM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '' (ECDSA) to the list of known hosts.
root@'s password: 
Boot Status is GREEN - Health Check SUCCESS
Last login: Wed Apr  1 17:24:50 2020
[root@localhost ~]#

It’s now safe to log out from the console (exit) and disconnect the keyboard and monitor.

Disabling unneeded services

Since we’re right on the lower limit of viable hardware for Rosetta@home, it’s worth disabling any unneeded services. Fedora IoT is much more lightweight than desktop distributions, but there are still a few optimizations we can make: disabling Bluetooth, ModemManager (used for cellular data connections), wpa_supplicant (used for Wi-Fi), and the zezere services, which are used to centrally manage a fleet of Fedora IoT devices.

[root@localhost /]# for serviceName in bluetooth ModemManager wpa_supplicant zezere_ignition zezere_ignition.timer zezere_ignition_banner; do
   sudo systemctl stop $serviceName;
   sudo systemctl disable $serviceName;
   sudo systemctl mask $serviceName;
done

Getting the BOINC client

Instead of installing the BOINC client directly onto the operating system with rpm-ostree, we’re going to use podman to run the containerized version of the client.

This image uses a volume mount to store its data, so we create the directories it needs in advance.

[root@localhost ~]# mkdir -p /opt/appdata/boinc/slots /opt/appdata/boinc/locale

We also need to add a firewall rule to allow the container to resolve external DNS names.

[root@localhost ~]# firewall-cmd --permanent --zone=trusted --add-interface=cni-podman0
success
[root@localhost ~]# systemctl restart firewalld

Finally we are ready to pull and run the BOINC client container.

[root@localhost ~]# podman run --name boinc -dt -p 31416:31416 -v /opt/appdata/boinc:/var/lib/boinc:Z -e BOINC_GUI_RPC_PASSWORD="blah"  -e BOINC_CMD_LINE_OPTIONS="--allow_remote_gui_rpc"  boinc/client:arm64v8 
Trying to pull...


We can inspect the container logs to make sure everything is working as expected:

[root@localhost ~]# podman logs boinc
20-Jun-2020 09:02:44 [---] cc_config.xml not found - using defaults
20-Jun-2020 09:02:44 [---] Starting BOINC client version 7.14.12 for aarch64-unknown-linux-gnu
20-Jun-2020 09:02:44 [---] Checking presence of 0 project files
20-Jun-2020 09:02:44 [---] This computer is not attached to any projects
20-Jun-2020 09:02:44 Initialization completed

Configuring the BOINC container to run at startup

We can automatically generate a systemd unit file for the container with podman generate systemd.

[root@localhost ~]# podman generate systemd --files --name boinc

This creates a systemd unit file in root’s home directory.

[root@localhost ~]# cat container-boinc.service 
# container-boinc.service
# autogenerated by Podman 1.9.3
# Sat Jun 20 09:13:58 UTC 2020

[Unit]
Description=Podman container-boinc.service

[Service]
ExecStart=/usr/bin/podman start boinc
ExecStop=/usr/bin/podman stop -t 10 boinc

[Install]
WantedBy=multi-user.target default.target

We install the file by moving it to the appropriate directory.

[root@localhost ~]# mv -Z container-boinc.service  /etc/systemd/system
[root@localhost ~]# systemctl enable /etc/systemd/system/container-boinc.service
Created symlink /etc/systemd/system/multi-user.target.wants/container-boinc.service → /etc/systemd/system/container-boinc.service.
Created symlink /etc/systemd/system/default.target.wants/container-boinc.service → /etc/systemd/system/container-boinc.service.

Connecting to the Rosetta@home project

You need to create an account at the Rosetta@home signup page, and retrieve your account key from your account home page. The key to copy is the “Weak Account Key”.

Finally, we execute the boinccmd configuration utility inside the container using podman exec, passing the Rosetta@home url and our account key.

[root@localhost ~]# podman exec boinc boinccmd --project_attach https://boinc.bakerlab.org/rosetta/ 2160739_cadd20314e4ef804f1d95ce2862c8f73

Running podman logs --follow boinc will allow us to see the container connecting to the project. You will probably see errors of the form

20-Jun-2020 10:18:40 [Rosetta@home] Rosetta needs 1716.61 MB RAM but only 845.11 MB is available for use.

This is because most, but not all, of the work units in Rosetta@home require more memory than we have to offer. However, if you leave the device running for a while, it should eventually get some jobs to process. The polling interval seems to be approximately 10 minutes. We can also tweak the memory settings using BOINC Manager to allow BOINC to use slightly more memory. This will increase the probability that Rosetta@home will be able to find tasks for us.
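As a rough sanity check on why this happens, here is the arithmetic for the "use at most N %" setting, assuming about 926 MB of usable RAM on a 1 GB Pi (an approximate figure, not from the article):

```shell
# Back-of-the-envelope: RAM BOINC may use at a given "use at most N %"
# setting, on a Pi with roughly 926 MB of usable memory.
total_mb=926
for pct in 50 75 93; do
    avail=$(( total_mb * pct / 100 ))
    echo "${pct}%: ${avail} MB available for BOINC"
done
```

Even at 93% the budget stays well below the 1716 MB quoted in the log, which is why only the smaller work units get scheduled.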

Installing BOINC Manager for remote access

You can use dnf to install the BOINC manager component to remotely manage the BOINC client on the Raspberry Pi.

[gavin@desktop ~]$ sudo dnf install boinc-manager

If you switch to “Advanced View”, you will be able to select “File -> Select Computer” and connect to your Raspberry Pi, using the IP address of the Pi and the value supplied for BOINC_GUI_RPC_PASSWORD in the podman run command (in my case “blah”).

<figure class="wp-block-image size-large"><figcaption>Press Shift+Ctrl+I to connect BOINC manager to a remote computer</figcaption></figure>

Under “Options -> Computing Preferences”, increase the value for “When Computer is not in use, use at most _ %”. I’ve been using 93%; this seems to allow Rosetta@home to schedule work on the pi, whilst still leaving it just about usable. It is possible that further fine tuning of the operating system might allow this percentage to be increased.

<figure class="wp-block-image size-large"><figcaption>Using the Computing Preferences Dialog to set the memory threshold</figcaption></figure>

These settings can also be changed through the Rosetta@home website settings page, but bear in mind that changes made through the BOINC Manager client override preferences set in the web interface.


It may take a while, possibly several hours, for Rosetta@home to send work to our newly installed client, particularly as most work units are too big to run on a Raspberry Pi. COVID-19 has resulted in a large number of new computers being joined to the Rosetta@home project, which means that there are times when there isn’t enough work to do.

When we are assigned some work units, BOINC will download several hundred megabytes of data. This will be stored on the SD Card and can be viewed using BOINC manager.

<figure class="wp-block-image size-large"></figure>

We can also see the tasks running in the Tasks pane:

<figure class="wp-block-image size-large"></figure>

The client has downloaded four tasks, but only one of them is currently running due to memory constraints. At times, two tasks can run simultaneously, but I haven’t seen more than that. This is OK as long as the tasks are completed by the deadline shown on the right. I’m fairly confident these will be completed as long as the Raspberry Pi is left running. I have found that the additional memory overhead created by the BOINC Manager connection and sshd services can reduce parallelism, so I try to disconnect these when I’m not using them.


Rosetta@home, in common with many other distributed computing projects, is currently experiencing a large spike in participation due to COVID-19. That aside, the project has been doing valuable work for many years to combat a number of other diseases.

Whilst a Raspberry Pi is never going to appear at the top of the contribution chart, I think this is a worthwhile project to undertake with a spare Raspberry Pi. The existence of work units aimed at low-spec ARM devices indicates that the project organizers agree with this sentiment. I’ll certainly be leaving mine running for the foreseeable future.

Cockpit 223

Posted by Cockpit Project on July 08, 2020 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 223.

Webserver: Standard-conformant lifetime of web server certificate

Cockpit’s web server creates a self-signed certificate, 0-self-signed.cert, on startup if the administrator has not already installed one. These certificates are now valid for one year, following the official standard. The previous lifetime of 100 years for a self-signed certificate (or 10 years with sscg) may not be accepted by some browsers in the future.

An expired 0-self-signed.cert is now renewed automatically.
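The one-year lifetime is easy to reproduce with openssl. This is a self-contained sketch using scratch files (demo.key, demo.crt), not Cockpit's actual certificate, which lives under /etc/cockpit/ws-certs.d/:

```shell
# Create a one-year self-signed certificate, similar in spirit to
# Cockpit's fallback behaviour, then print its validity window.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=localhost" -keyout demo.key -out demo.crt 2>/dev/null
openssl x509 -noout -startdate -enddate -in demo.crt
```

The same `openssl x509 -enddate` invocation, pointed at your deployed certificate, tells you whether it still carries an old long-lived expiry.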

Certificate authentication against Active Directory

Cockpit now officially supports authentication with client certificates (commonly through Smart Cards) on machines that are joined to a Microsoft or Samba Active Directory domain. The necessary setup steps are described in the guide.

Try it out

Cockpit 223 is available now:

CfP: deployment, backup and upgrade best practices

Posted by Kiwi TCMS on July 07, 2020 01:55 PM

Kiwi TCMS is opening a call for proposals: tell us how you deploy, back up, and upgrade your Kiwi TCMS containers. What environment do you use, and how do you migrate data or ensure the system is always up and running? How do you go about testing that a newer version doesn't break the features you use? What best practices have you identified that can help others?

We are going to collect your feedback and update the existing documentation.

Please submit your responses here: https://docs.google.com/forms/d/e/1FAIpQLSe-kioT_e3UHwV5irwLroR2Jsk5oYM_Ls6acVeLVcBn7Kpt7Q/viewform. All fields are optional, including your email address!

Thank you and happy testing!

How Fedora and Outreachy Helped Me Hone My Flexibility With Timelines

Posted by Fedora Community Blog on July 07, 2020 08:00 AM

Update: I’m in the seventh week of my Outreachy internship with Fedora! I am working to create a GraphQL API for Bodhi. The following image shows a Gantt chart of the ideal timeline that my mentors and I came up with to get the project up and running: Initial Tasks My mentors broke down these […]

The post How Fedora and Outreachy Helped Me Hone My Flexibility With Timelines appeared first on Fedora Community Blog.

Fedora 33 Btrfs by default Test Day 2020-07-08

Posted by Fedora Community Blog on July 07, 2020 07:59 AM

A new change proposal has been submitted for the Fedora 33 release cycle which entails usage of Btrfs by default for Workstations and Spins across x86_64 and ARM architectures. As a result, QA teams have organized a test day on Wed, July 08, 2020. Refer to the wiki page for links to the test cases […]

The post Fedora 33 Btrfs by default Test Day 2020-07-08 appeared first on Fedora Community Blog.

Fedora Community Blog monthly summary: June 2020

Posted by Fedora Community Blog on July 06, 2020 07:47 PM

This is the second in what I hope to make a monthly series summarizing the past month on the Community Blog. Please leave a comment below to let me know what you think. Stats In May, we published 13 posts. The site had 3,753 visits from 1,736 unique viewers. Readers wrote 1 comment. 119 visits […]

The post Fedora Community Blog monthly summary: June 2020 appeared first on Fedora Community Blog.

Participating in AnsibleFest 2020

Posted by Fedora fans on July 06, 2020 06:30 AM


Undoubtedly, Ansible is one of the best automation engines. Ansible has a wealth of modules with which tasks and development processes can be automated. Ansible is software that falls into the category of automation tools; in fact, it is an open source tool for software provisioning, configuration management, and application deployment.

This year, AnsibleFest 2020 is being held virtually and online, and it is a good opportunity for Ansible fans and community members to come together to discuss innovations and best practices.

The event will take place on October 13 and 14. For more information about AnsibleFest 2020, you can visit the link below:

To register for AnsibleFest 2020, you can also visit the link below:



User-specific XKB configuration - part 2

Posted by Peter Hutterer on July 06, 2020 05:18 AM

This is the continuation from this post.

Several moons have bypassed us [1] in the time since the first post, and Things Have Happened! If you recall (and of course you did because you just re-read the article I so conveniently linked above), libxkbcommon supports an include directive for the rules files and it will load a rules file from $XDG_CONFIG_HOME/xkb/rules/ which is the framework for custom per-user keyboard layouts. Alas, those files are just sitting there, useful but undiscoverable.

To give you a very approximate analogy, the KcCGST format I described last time provides the ingredients to a meal (pasta, mince, tomato). The rules file is the machine-readable instruction set to assemble your meal, but it relies a lot on wildcards. Feed it "spaghetti, variant bolognese" and the actual keymap ends up being the various components put together: "pasta(spaghetti)+sauce(tomato)+mince". But for this to work you need to know that spag bol is available in the first place, i.e. you need the menu. This applies to keyboard layouts too - the keyboard configuration panel needs to present a list so the users can clickedy click-click on whatever layout they think is best for them.
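To make the menu/recipe analogy concrete, here is a toy version of the wildcard lookup (a hypothetical two-rule table, not libxkbcommon's actual resolver):

```shell
# Toy sketch of rules-file resolution: map a layout/variant pair to a
# KcCGST-style symbols string, the way a wildcard rule table would.
resolve_symbols() {
    layout=$1
    variant=$2
    if [ -z "$variant" ]; then
        # matches rule:  * <empty>  =  pc+%l
        echo "pc+${layout}"
    else
        # matches rule:  * *        =  pc+%l(%v)
        echo "pc+${layout}(${variant})"
    fi
}

resolve_symbols us          # prints: pc+us
resolve_symbols us banana   # prints: pc+us(banana)
```

The real resolver does the same pattern-match-and-substitute dance, just with many more rule classes (models, options, includes) and %-expansions.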

This menu of possible layouts is provided by the xkeyboard-config project but for historical reasons [2], it is stored as an XML file named after the ruleset: usually /usr/share/X11/xkb/rules/evdev.xml [3]. Configuration utilities parse that file directly which is a bit of an issue when your actual keymap compiler starts supporting other include paths. Your fancy new layout won't show up because everything insists on loading the system layout menu only. This bit is part 2, i.e. this post here.

If there's one thing that the world doesn't have enough of yet, it's low-level C libraries. So I hereby present to you: libxkbregistry. This library has now been merged into the libxkbcommon repository and provides a simple C interface to list all available models, layouts and options for a given ruleset. It sits in the same repository as libxkbcommon - long term this will allow us to better synchronise any changes to XKB handling or data formats as we can guarantee that the behaviour of both components is the same.

Speaking of data formats, we haven't actually changed any of those which means they're about as easy to understand as your local COVID19 restrictions. In the previous post I outlined the example for the KcCGST and rules file, what you need now with libxkbregistry is an XKB-compatible XML file named after your ruleset. Something like this:

$ cat $HOME/.config/xkb/rules/evdev.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xkbConfigRegistry SYSTEM "xkb.dtd">
<xkbConfigRegistry version="1.1">
  <layoutList>
    <layout>
      <configItem>
        <name>us</name>
      </configItem>
      <variantList>
        <variant>
          <configItem>
            <name>banana</name>
            <description>US (Banana)</description>
          </configItem>
        </variant>
      </variantList>
    </layout>
  </layoutList>
  <optionList>
    <group allowMultipleSelection="true">
      <configItem>
        <name>custom</name>
        <description>Custom options</description>
      </configItem>
      <option>
        <configItem>
          <name>custom:foo</name>
          <description>Map Tilde to nothing</description>
        </configItem>
      </option>
      <option>
        <configItem>
          <name>custom:baz</name>
          <description>Map Z to K</description>
        </configItem>
      </option>
    </group>
  </optionList>
</xkbConfigRegistry>

This looks more complicated than it is: we have models (not shown here), layouts which can have multiple variants, and options which are grouped together in option groups (to make options mutually exclusive). libxkbregistry will merge this with the system layouts in what is supposed to be the most obvious merge algorithm. The simple summary is that you can add to existing system layouts but you can't modify them - the above example will add a "banana" variant to the US keyboard layout without modifying "us" itself or any of its other variants. The second part adds two new options based on my previous post.

Now, all that is needed is to change every user of evdev.xml to use libxkbregistry. The gnome-desktop merge request is here for a start.

[1] technically something that goes around something else doesn't bypass it but the earth is flat, the moon is made of cheese, facts don't matter anymore and stop being so pedantic about things already!
[2] it's amazing what you can handwave away with "for historical reasons". Life would be better if there was less history to choose reasons from.
[3] there's also evdev.extras.xml for very niche layouts which is a separate file for historical reasons [2], despite there being a "popularity" XML attribute

Episode 204 – What Would Apple Do?

Posted by Josh Bressers on July 06, 2020 12:03 AM

Josh and Kurt talk about some recent security actions Apple has taken. Not all are good, but in general Apple is doing things to benefit their customers (their customers are not advertisers). We also discuss some of the challenges when your customers are advertisers.

https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_204_What_Would_Apple_Do.mp3

Show Notes

Pocket Lisp Computer

Posted by Thomas Fitzsimmons on July 05, 2020 06:04 PM

I recently built three Lisp Badge computers with some help from my kids. I bought a hot air soldering station and learned TQFP soldering. The kids did some through-hole and SMT soldering and really enjoyed it!

The hardware assembly and debugging process was really fun, other than worrying several times that I had put too much heat into a component, or set the wrong programmable fuse. During that phase I received some advice from the board’s designer, which really helped.

I’ve learned from the hardware people at work to always order extra parts, and I did, including an extra PCB. I was half expecting to damage stuff while learning, so I was really happy that we ended up with all three boards fully working, after locating and fixing some cold solder joints.

It was challenging as DIY projects go, since the Lisp Badge is not available as a kit. But ever since I saw the Technoblogy post about it, I knew I had to attempt building one, and it was worth it. Other than the display, compatible parts were all available from Digi-Key, and I got the PCBs from OSH Park.

The result is a really neat little computer. Here is a picture showing “big text” support that I added:

Three Lisp Badge computers displaying (lisp-badge) in large text split across the screens.

I also added support for building standalone on Debian (Arduino-Makefile), made the audio buzzer work, and wrote an example of how to play a tune on the buzzer. I published the changes to my sourcehut.

It’s fun to try writing small programs on the badge itself, within the constraints of its minimal uLisp REPL.

dns-tor-proxy 0.2.0 aka DoH release

Posted by Kushal Das on July 05, 2020 05:31 AM

I just now released 0.2.0 of the dns-tor-proxy tool. The main feature of this release is DNS over HTTPS support. At first I started writing it from scratch, and then decided to use modified code from the amazing dns-over-https project instead.


demo of the DoH support in the tool

✦ ❯ ./dns-tor-proxy -h
Usage of ./dns-tor-proxy:
      --doh                 Use DoH servers as upstream.
      --dohaddress string   The DoH server address. (default "https://mozilla.cloudflare-dns.com/dns-query")
  -h, --help                Prints the help message and exists.
      --port int            Port on which the tool will listen. (default 53)
      --proxy string        The Tor SOCKS5 proxy to connect locally, IP:PORT format. (default "")
      --server string       The DNS server to connect IP:PORT format. (default "")
  -v, --version             Prints the version and exists.
Make sure that your Tor process is running and has a SOCKS proxy enabled.

Now you can pass the --doh flag to enable DoH server usage; by default it will use https://mozilla.cloudflare-dns.com/dns-query, but you can pass any server using the --dohaddress flag. I found that the following servers work well over Tor.

  • https://doh.libredns.gr/dns-query
  • https://doh.powerdns.org
  • https://dns4torpnlfs2ifuz2s2yf3fc7rdmsbhm6rw75euj35pac6ap25zgqad.onion/dns-query
  • https://dnsforge.de/dns-query

The release also has a binary executable for Linux x86_64. You can verify the executable using the signature file available in the release page.

Keeping secrets on Linux with Password Safe

Posted by Josef Strzibny on July 05, 2020 12:00 AM

How do you keep your secrets safe on Linux? Should you just go with an online service? What if you prefer an offline option with the possibility of online backups? GNOME’s Password Safe lets you neatly organize your secrets offline with a simple KeePass file to back up.

If you are like me, you can get a bit disorganized at times. But keeping passwords and SSH keys safe and backed up is something I had to standardize on. I also did not see a reason to manage them separately in many different applications, and I don’t automatically trust online services with something that important, either.

I am on Fedora and a while back I discovered and started to use GNOME’s Password Safe. Password Safe is a small and neat application for your secret management on Linux.

You can install it with DNF on Fedora 32:

$ sudo dnf install gnome-passwordsafe

It’s also available on Flathub.

KeePass is an open-source encrypted password database format based on XML, and you can think of Password Safe as a KeePass client for Linux, since it uses the KeePass v4 format to encrypt your secrets. As a GNOME app, it also integrates perfectly with your GNOME Shell desktop.

Let’s look at what this small GNOME app looks like and what features it has.

Password Safe requires you to set up a passphrase that you will use to unlock the KeePass file:


It will let you organize various secrets into folders:


You can view and create new secrets inside them. A typical secret will be a password, which Password Safe can generate for you:


You can however also directly save key-value pairs and files (handy for SSH keys):


You can attach a specific icon and color to neatly organize all your secrets. Password Safe will also automatically lock itself after a period of inactivity, so your secrets stay safe.

I like two things about Password Safe:

  • The fact that it’s an easy-to-use offline application designed specifically for GNOME (so it’s a beautiful app to look at).
  • And the fact that it’s built around the KeePass format for encryption, so there is no vendor lock-in and the file itself can easily be backed up with your other files online in the cloud.

Give Password Safe a go and let me know what you think!

Participating in the Kubernetes Beginner and Kubernetes Elementary events

Posted by Fedora fans on July 04, 2020 06:30 AM


There are many resources for learning Kubernetes; several books on the subject have been introduced here before. Kubernetes is a production-grade container orchestration system with which containers can easily be deployed, scaled, and managed.

Red Hat’s developers regularly hold online events. One of these, called Kubernetes Beginner, will take place on July 6. The event will be presented in four languages: English, Spanish, French, and Brazilian Portuguese.

The topics to be covered in this event are as follows:

Another event, called Kubernetes Elementary, will take place on July 8. The topics to be covered in this event are as follows:

  • Building images
  • Resource Limits
  • Rolling update delivery
  • Liveness and readiness probes
  • Environment variables and ConfigMaps

For more information about these events and to register, you can visit the link below:



Fedora program update: 2020-27

Posted by Fedora Community Blog on July 03, 2020 04:00 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora this week. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. Announcements Help wanted Upcoming meetings Releases CPE update Announcements Orphaned packages seeking maintainers Lenovo will soon be shipping laptops with […]

The post Fedora program update: 2020-27 appeared first on Fedora Community Blog.