Fedora People

Install PowerShell on Fedora Linux

Posted by Fedora Magazine on September 22, 2021 08:00 AM

PowerShell (also written pwsh) is a powerful open source command-line and object-oriented shell developed and maintained by Microsoft. It is syntactically verbose and intuitive for the user. This article is a guide on how to install PowerShell on the host and inside a Podman or Toolbox container.

Table of contents

Why use PowerShell

PowerShell, as the name suggests, is powerful. The syntax is verbose and semantically clear to the end user. For those that don’t want to write long commands all the time, most commands are aliased. The aliases can be viewed with Get-Alias.

The most important difference between PowerShell and traditional shells, however, is its output pipeline. While normal shells output strings or character streams, PowerShell outputs objects. This has far reaching implications for how command pipelines work and comes with quite a few advantages.

Demonstration

The following examples illustrate the verbosity and simplicity. Lines that start with the pound symbol (#) are comments. Lines that start with PS > are commands, PS > being the prompt:

# Return all files greater than 50MB in the current directory.
## Longest form
PS > Get-Childitem | Where-Object Length -gt 50MB
## Shortest form (with use of aliases)
PS > gci | ? Length -gt 50MB
## Output looks like this
    Directory: /home/Ozymandias42/Downloads 
Mode                 LastWriteTime         Length Name 
----                 -------------         ------ ---- 
-----          20/08/2020    13:55     2000683008 40MB-file.img


# In order: get VMs, get snapshots, only select the last 3 and remove selected list:
PS > Get-VM VM-1 | Get-Snapshot | Select-Object -Last 3 | Remove-Snapshot

What this shows quite well is that input-output reformatting with tools like cut, sed, awk or similar, which Bash scripts often need, is usually not necessary in PowerShell. The reason for this is that PowerShell works fundamentally differently from traditional POSIX shells such as Bash or Zsh, or other shells like Fish. Commands in traditional shells output strings, whereas in PowerShell they output objects.

Comparison between Bash and PowerShell

The following example illustrates the advantages of the object-output in PowerShell in contrast to the traditional string-output in Bash. Suppose you want a script that outputs all processes that occupy 200MB or more in RAM. With Bash, this might look something like this:

$ ps -eO rss | awk -F' ' \
    '{ if($2 >= (1024*200)) {
        printf("%s\t%s\t%s\n",$1,$2,$6);}
     }'

PID    RSS     COMMAND
A      B       C
[...]

The first obvious difference is readability or, more specifically, semantic clarity. Neither ps nor awk is self-descriptive. ps shows the process status, and awk is a text processing tool and language whose name comes from the initials of its developers’ last names: Aho, Weinberger, Kernighan (see Wikipedia). Before contrasting it with PowerShell, however, examine the script:

  • ps -e outputs all running processes;
  • -O rss outputs the default output of ps plus the amount of memory (in kilobytes) each process uses, the rss field; this output looks like this:
    PID   RSS S TTY      TIME COMMAND
      1 13776 S ?    00:00:01 /usr/lib/systemd/systemd
  • | pipe operator uses the output of the command on the left side as input for the command on the right side.
  • awk -F' ' declares a space as the input field separator. So, going with the above example, PID is the first field, RSS the second, and so on.
  • '{ if($2 >= (1024*200)) is the beginning of the actual AWK script. It checks whether field 2 (RSS) contains a number larger than or equal to 1024*200 KB (204800 KB, or 200 MB);
  • { printf("%s\t%s\t%s\n",$1,$2,$6);} }' continues the script. If the previous part evaluates to true, it outputs the first, second and sixth fields (the PID, RSS and COMMAND fields respectively).

With this in mind, step back and look at what was required for this script to be written and for it to work:

  • The input command ps had to have the field we wanted to filter against in its output. This was not the case by default and required us to use the -O flag with the rss field as argument.
  • We had to treat the output of ps as a list of input fields, requiring us to know their order and structure. Or in other words, we had to at least know that RSS would be the second field. Meaning we had to know how the output of ps would look beforehand.
  • We then had to know what unit the data we were filtering against was in, as well as what unit the processing tool would work in. Meaning we had to know that the RSS field uses kilobytes and that awk does too. Otherwise we would not have been able to write the expression ($2 >= 1024*200).

Now, contrast the above with the PowerShell equivalent:

# Longest form
PS > Get-Process | Where-Object WorkingSet -ge 200MB
# Shortest form (with use of aliases)
PS > gps | ? ws -ge 200MB

NPM(K)    PM(M)      WS(M)     CPU(s)      Id  SI ProcessName
------    -----      -----     ------      --  -- -----------
     A        B          C          D       E   F           G
[...]

The first thing to notice is that we have perfect semantic clarity: the commands are self-descriptive in what they do.

Furthermore there is no requirement for input-output reformatting, nor is there concern about the unit used by the input command. The reason for this is that PowerShell does not output strings, but objects.

To understand this, consider the following. In Bash, the output of a command is exactly what it prints to the terminal. In PowerShell, what is printed on the terminal is not all of the information that is actually available, because the output-printing system also works with objects: every PowerShell command marks some of the properties of its output objects as printable and others as not, but it always includes all of the properties, whereas Bash only passes on what it actually prints. One can think of it like JSON objects. What would be separated into “fields” by a delimiter such as a space or tab in Bash becomes an easily addressable object property in PowerShell, with the only requirement being that one has to know its name, like WorkingSet in the above example.

To see all available properties of a command’s output objects and their types, one can simply do something like:

PS > Get-Process | Get-Member
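
For example, properties that are not part of the default table view can still be selected directly by name. A minimal illustration (Name, Id, Path and StartTime are regular properties of the process objects returned by Get-Process):

PS > Get-Process | Select-Object -First 5 Name, Id, Path, StartTime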

Install PowerShell

PowerShell is available in several package formats, including RPM used by Fedora Linux. This article shows how to install PowerShell on Fedora Linux using various methods.

I recommend installing it natively, but I will also show how to do it in a container, using both the official Microsoft PowerShell container and a Fedora Linux 34 Toolbox container. The advantages of the container method are that it is guaranteed to work, since all dependencies are bundled in it, and that it is isolated from the host. Regardless, I recommend installing natively, despite the official docs only explicitly listing Fedora Linux releases 28 to 30 as supported.

Note: Supported means guaranteed to work. It does not mean that other releases are incompatible. So, while not guaranteed, releases later than 30 should still work, and they did in fact work in our tests.

It is more difficult to set up PowerShell and run it in a container than to run it directly on a host. It takes more time to install and you will not be able to run host commands directly.

Install PowerShell on a host using the package manager

Method 1: Microsoft repositories

Installation is as straightforward as can be, and the procedure doesn’t differ from that of any other software installed through third-party repositories.

It can be split into four general steps:

  1. Adding the new repository’s GPG key
  2. Adding repository to DNF repository list
  3. Refreshing DNF cache to include available packages from the new repository
  4. Installing new packages

PowerShell is then launched with the command pwsh.

$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
$ sudo dnf makecache
$ sudo dnf install powershell
$ pwsh

To remove the repository and packages, run the following.

$ sudo rm /etc/yum.repos.d/microsoft.repo
$ sudo dnf remove powershell

Method 2: RPM file

This method is not meaningfully different from the first method. In fact, it adds the GPG key and the repository implicitly when installing the RPM file. This is because the RPM file contains the link to both in its metadata.

First, get the .rpm file for the version you want from the PowerShell Core GitHub repository. See the “Get PowerShell” table in the readme.md for links.

Second, enter the following:

$ sudo dnf install powershell-<version>.rhel.7.<architecture>.rpm

Substitute <version> and <architecture> with the version and architecture you want to use respectively, for example powershell-7.1.3-1.rhel.7.x86_64.rpm.

Alternatively, you can even install it directly from the link, skipping the need to download it first.

$ sudo dnf install https://github.com/PowerShell/PowerShell/releases/download/v<version>/powershell-<version>.rhel.7.<architecture>.rpm

To remove PowerShell, run the following.

$ sudo dnf remove powershell

Install via container

Method 1: Podman container

Podman is an Open Container Initiative (OCI) compliant drop-in replacement for Docker.

Microsoft provides a PowerShell Docker container. The following example will use that container with Podman.

For more information about Podman, visit Podman.io. Fedora Magazine has a tag dedicated to Podman.

To use PowerShell in Podman, run the following script:

$ podman run \
 -it \
 --privileged \
 --rm \
 --name powershell \
 --env-host \
 --net=host --pid=host --ipc=host \
 --volume $HOME:$HOME \
 --volume /:/var/host \
 mcr.microsoft.com/powershell \
 /usr/bin/pwsh -WorkingDirectory $(pwd)

This script creates a Podman container for PowerShell and immediately attaches to it. It also mounts the user’s home directory and the host’s root directory into the container so they’re available there; the host’s root directory is mounted at /var/host.

Unfortunately, you can only indirectly run host commands while inside the container. As a workaround, run chroot /var/host to chroot to the root and then run host commands.
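
For example, a host command can be run through the chroot directly (a small sketch; dnf is just an arbitrary host command, and this assumes the root-directory mount shown above):

PS > chroot /var/host /usr/bin/dnf --version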

To break the command down (every option is mandatory unless marked as optional):

  • -it runs the container interactively with a terminal attached, so you are not kicked out when you enter it;
  • --privileged gives extended privileges to the container (optional);
  • --rm removes the container when you exit;
  • --name powershell sets the name of the container to powershell;
  • --env-host sets all host environment variables to the container’s variables (optional);
  • --volume $HOME:$HOME mounts the user directory;
  • --volume /:/var/host mounts the root directory to /var/host (optional);
  • --net=host --pid=host --ipc=host runs the process in the host’s namespaces instead of a separate set of namespaces for the contained process;
  • mcr.microsoft.com/powershell is the container image to enter;
  • /usr/bin/pwsh -WorkingDirectory $(pwd) starts PowerShell inside the container in the current directory (optional).

Optional but very convenient: alias pwsh with the script to easily access the Podman container by typing pwsh.
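
A minimal sketch of such an alias, suitable for ~/.bashrc (it simply repeats the podman run command from above; --name is dropped so several instances can run at once):

alias pwsh='podman run -it --rm --privileged --env-host --net=host --pid=host --ipc=host --volume "$HOME":"$HOME" --volume /:/var/host mcr.microsoft.com/powershell /usr/bin/pwsh -WorkingDirectory "$(pwd)"'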

To remove the PowerShell image, run the following.

$ podman rmi mcr.microsoft.com/powershell

Method 2: Fedora Linux Toolbox container

Toolbox is an elegant solution to set up persistent environments without affecting the host system as a whole. It acts as a wrapper around Podman and takes care of supplying a lot of the flags demonstrated in the previous method. For this reason, Toolbox is a lot easier to use than Podman. It was designed to work for development and debugging. With Toolbox, you can run any command the same as you would directly on the Fedora Workstation host (including dnf).

The installation procedure is similar to the installation on the host methods, with the only difference being that those steps are done inside a container. Make sure you have the toolbox package installed.

Preparing and entering the Fedora 34 Toolbox container is a two-step process:

  1. Creating the Fedora 34 Toolbox container
  2. Running the Fedora 34 Toolbox container
$ toolbox create --image registry.fedoraproject.org/f34/fedora-toolbox
$ toolbox enter --container fedora-toolbox

Then, follow the instructions at Method 1: Microsoft repositories.

Optional but very convenient: alias pwsh with toolbox run --container fedora-toolbox pwsh to easily access the Toolbox container by typing pwsh.
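
For example, in ~/.bashrc (or your shell’s equivalent):

alias pwsh='toolbox run --container fedora-toolbox pwsh'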

To remove the Toolbox container, make certain you have stopped the Toolbox session by entering exit and then run the following:

$ podman kill fedora-toolbox
$ toolbox rm fedora-toolbox

An Xorg release without Xwayland

Posted by Peter Hutterer on September 22, 2021 03:16 AM

Xorg is about to be released.

And it's a release without Xwayland.

And... wait, what?

Let's unwind this a bit, and ideally you should come away with a better understanding of Xorg vs Xwayland, and possibly even Wayland itself.

Heads up: if you are familiar with X, the below is simplified to the point it hurts. Sorry about that, but as an X developer you're probably good at coping with pain.

Let's go back to the 1980s, when fashion was weird and there were still reasons to be optimistic about the future. Because this is a thought exercise, we go back with full hindsight 20/20 vision and, ideally, the winning Lotto numbers in case we have some time for some self-indulgence.

If we were to implement an X server from scratch, we'd come away with a set of components: libxprotocol, which handles the actual protocol wire format parsing and provides a C API to access it (quite like libxcb, actually). That one will just be the protocol-to-code conversion layer.

We'd have a libxserver component which handles all the state management required for an X server to actually behave like an X server (nothing in the X protocol requires an X server to display anything). That library has a few entry points for abstract input events (pointer and keyboard, because this is the 80s after all) and a few exit points for rendered output.

libxserver uses libxprotocol but that's an implementation detail, we can ignore the protocol for the rest of the post.

Let's create a github organisation and host those two libraries. We now have: http://github.com/x/libxserver and http://github.com/x/libxprotocol [1].

Now, to actually implement a working functional X server, our new project would link against libxserver and hook into this library's API points. For input, you'd use libinput and pass those events through; for output, you'd use the modesetting driver that knows how to scream at the hardware until something finally shows up. This is somewhere between outrageously simplified and unacceptably wrong but it'll do for this post.

Your X server has to handle a lot of the hardware-specifics but other than that it's a wrapper around libxserver which does the work of ... well, being an X server.

Our stack looks like this:


+------------------------+
|  xserver  [libxserver] |--------[ X client ]
|                        |
|[libinput] [modesetting]|
+------------------------+
|         kernel         |
+------------------------+
Hooray, we have re-implemented Xorg. Or rather, XFree86, because we're 20 years from all the pent-up frustration that caused the Xorg fork. Let's host this project on http://github.com/x/xorg

Now, let's say instead of physical display devices, we want to render into a framebuffer, and we have no input devices.


+------------------------+
|  xserver  [libxserver] |--------[ X client ]
|                        |
|       [write()]        |
+------------------------+
|      some buffer       |
+------------------------+
This is basically Xvfb or, if you are writing out PostScript, Xprint. Let's host those on github too, we're accumulating quite a set of projects here.

Now, let's say those buffers are allocated elsewhere and we're just rendering to them. And those buffers are passed to us via an IPC protocol, like... Wayland!


+------------------------+
|  xserver  [libxserver] |--------[ X client ]
|                        |
|input events    [render]|
+------------------------+
      |             |
+------------------------+
|   Wayland compositor   |
+------------------------+
And voila, we have Xwayland. If you swap out the protocol you can have Xquartz (X on macOS) or Xwin (X on Windows) or Xnest/Xephyr (X on X) or Xvnc (X over VNC). The principle is always the same.

Fun fact: the Wayland compositor doesn't need to run on the hardware, you can play display server babushka until you run out of turtles.

In our glorious revisioned past all these are distinct projects, re-using libxserver and some external libraries where needed. Depending on the projects things may be very simple or get very complex, it depends on how we render things.

But in the end, we have several independent projects all providing us with an X server process - the specific X bits are done in libxserver though. We can release Xwayland without having to release Xorg or Xvfb.

libxserver won't need a lot of releases, the behaviour is largely specified by the protocol requirements and once you're done implementing it, it'll be quite a slow-moving project.

Ok, now, fast forward to 2021, lose some hindsight, hope, and attitude and - oh, we have exactly the above structure. Except that it's not spread across multiple independent repos on github, it's all sitting in the same git directory: our Xorg, Xwayland, Xvfb, etc. are all sitting in hw/$name, and libxserver is basically the rest of the repo.

A traditional X server release was a tag in that git directory. An XWayland-only release is basically an rm -rf hw/*-but-not-xwayland followed by a tag, an Xorg-only release is basically an rm -rf hw/*-but-not-xfree86 [2].

In theory, we could've moved all these out into separate projects a while ago but the benefits are small and no-one has the time for that anyway.

So there you have it - you can have Xorg-only or XWayland-only releases without the world coming to an end.

Now, for the "Xorg is dead" claims - it's very likely that the current release will be the last Xorg release. [3] There is little interest in an X server that runs on hardware, or rather: there's little interest in the effort required to push out releases. Povilas did a great job in getting this one out but again, it's likely this is the last release. [4]

Xwayland - very different, it'll hang around for a long time because it's "just" a protocol translation layer. And of course the interest is there, so we have volunteers to do the releases.

So basically: expect Xwayland releases; be surprised (but not confused) by Xorg releases.

[1] Github of course doesn't exist yet because we're in the 80s. Time-travelling is complicated.
[2] Historical directory name, just accept it.
[3] Just like the previous release...
[4] At least until the next volunteer steps up. Turns out the problem "no-one wants to work on this" is easily fixed by "me! me! I want to work on this". A concept that is apparently quite hard to understand in the peanut gallery.

Building RHEL packages with Tito

Posted by Jakub Kadlčík on September 22, 2021 12:00 AM

Are you a Fedora packager and consider Tito to be a valuable asset in your toolbox? Do you know it can be used for maintaining RHEL packages as well? Or any downstream packaging? I didn’t. This article explains how it can be done.

Disclaimer: I have maintained a dozen Fedora packages for years, but I am fairly new to RHEL packaging. I do not claim to be an expert or an authority on this topic. This article is subjective and describes my personal workflow for updating RHEL packages.

Fedora vs RHEL packaging

Apart from different hostnames, service names, and a great focus on quality assurance, there is only one difference relevant to the topic at hand. That is, in the majority of cases (unless there is a good reason to do otherwise), the package sources tarball is not changed within an RHEL major release. While this may sound insignificant, it is the only reason for this whole article, so let me elaborate.

We have an imaginary upstream project foo in version 1.0. This project gets packaged into Fedora as foo-1.0-1 (i.e. package name is foo, its upstream version is 1.0 and this is the first release of this version in Fedora). When this package gets included in RHEL, its NVR is going to be the same, foo-1.0-1. So far there is no difference.

Updating this package is when it gets tricky. Upstream publishes version 1.1. In Fedora, we take the new upstream sources as they are, and build a package foo-1.1-1 on top of them. In RHEL, we want to avoid changing the sources. Instead, we create a patch (or series of patches) that modifies the original sources into the newly published ones. Therefore the new package in RHEL will be foo-1.0-2 (the version number remains the same, release is incremented).

We can choose to do all this patching labor manually or let Tito help us.

Initial setup

This initial setup needs to be done only once for each package. It’s a bit lengthy but the payoff is worth it.

Create an intermediate git repository

First, create an empty git repository on some internal forge (e.g. GitLab) and clone it to your computer.

git clone git@some-internal-url.com:bar/foo.git ~/git/rhel/foo
cd ~/git/rhel/foo

In case you use a different email address for internal purposes, configure your git credentials.

git config user.name "John Doe"
git config user.email "jdoe@company.ex"

Add an upstream repository as a new remote for this git project. We will use this remote only for pulling, so make sure to use its HTTPS variant instead of the SSH one. It will help us prevent accidental pushing of sensitive information out to the world.

git remote add upstream https://github.com/bar/foo.git

Pull everything from upstream.

git fetch --all

Go and see what the version of your package in RHEL is (ignore the release number), and point the main branch to the Tito tag associated with that version. For example, if the package name is foo and its RHEL version is 1.5-3, run the following command.

git reset --hard foo-1.5-1

And finally, push everything to the internal repository.

git push --follow-tags

Configure Tito

Edit the .tito/tito.props file and update the builder and tagger variables accordingly.

builder = tito.distributionbuilder.DistributionBuilder
tagger = tito.tagger.ReleaseTagger

The DistributionBuilder handles the patches generation (from the current RHEL version into the latest upstream version) when building a package. The ReleaseTagger increments release number instead of a version number when tagging a new package.

Edit the .tito/releasers.conf and append the following releaser.

[rhel]
releaser = tito.release.DistGitReleaser
branches = rhel-8.5.0

In this example, I am specifying the rhel-8.5.0 branch; please insert the branch in which you maintain your package. In the case of multiple branches, use spaces as separators.

Update the spec file

There may be some RHEL-specific changes to our package spec file in the internal DistGit that we don’t want to lose. Let’s assume that the latest upstream version is 1.7-1 and the latest RHEL version is 1.5-3.

  1. Find the spec file for your package in the internal DistGit service
  2. Append (from the top) all the changelog entries recorded between 1.5-1 and 1.5-3.
  3. Set the current RHEL release number. In this example Release: 3%{?dist}
  4. Perform any additional changes that were made in the RHEL spec file
  5. In case the RHEL spec file contains some RHEL-specific patches, do not copy the patch files or add PatchN: records to the spec file. Instead, perform those changes directly in this repository and commit them.
  6. Commit all the changes that we made to the spec file and in the .tito configuration directory

Cherry-pick upstream changes

Now, we are going to cherry-pick the upstream changes that we would like to include in RHEL. Let’s see what changes were made between the version we have in RHEL and the upstream one.

git log master..upstream/master

For each commit that you want in RHEL, run

git cherry-pick <hash>

It is a good idea to avoid commits generated by Tito or any commits incrementing the version number or modifying the changelog.

Build and test the package locally

To make sure we made all changes correctly, we are going to build the package locally and test it works as expected before irreversibly pushing anything to DistGit and eventually embarrassing ourselves with amends.

tito build --srpm --test

Build the package locally in Mock, or in the internal Copr instance which provides actual RHEL chroots.

mock -r rocky-8-x86_64 /tmp/tito/foo-1.5-3.git.0.da1346d.fc33.src.rpm

Examine the built package, try to install it (in Docker, Mock, etc) and test that it works as expected.

Push the changes

If you are sure, tag the package, and push the changes.

tito tag
git push --follow-tags origin

By this point, we should be able to build a non-test SRPM package. We won’t need it but it is a good idea to make sure it works.

tito build --srpm

Push everything into the internal DistGit and submit builds for all predefined branches.

tito release rhel

When asked if you want to edit the commit message, proceed with yes. You must reference a ticket requesting this update, e.g.

Resolves rhbz#123456

Just make sure all the submitted builds succeeded and continue with the rest of the update process.

Consequent updates

I haven’t done this part yet, thus it will be explained right after I get through it. The general idea is to run

git fetch --all

and then go through the Update the spec file, Cherry-pick upstream changes, Build and test the package locally, and Push the changes steps again, as sketched below.
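
Put together, that should boil down to the commands already shown above (a rough sketch only — as noted, this part is untested):

$ git fetch --all
$ git cherry-pick <hash>
$ tito build --srpm --test
$ tito tag
$ git push --follow-tags origin
$ tito release rhel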

Troubleshooting

Diff contains binary files

If any of the patches that DistributionBuilder generates should contain binary files, you will end up with a fatal error (with a rather nice wording).

ERROR: You are doomed. Diff contains binary files. You can not use this builder

In this case, I would suggest trying UpstreamBuilder instead.

builder = tito.builder.UpstreamBuilder

The difference between the two is that DistributionBuilder generates one patch per upstream version, so if any of the intermediate patches contains binary files, the build fails. UpstreamBuilder, on the other hand, always generates just one patch file, so the intermediate changes don’t matter; the patch simply contains the changes needed to reach the resulting upstream state (i.e. if the latest upstream release is alright, the build will succeed).

Meet Kiwi TCMS at WebSummit 2021 in Lisbon

Posted by Kiwi TCMS on September 21, 2021 01:15 PM

Kiwi TCMS is happy to announce that our first post-COVID live presence will be at WebSummit 2021, Nov 1-4 in Lisbon, Portugal. We're joining as a featured startup as part of the ALPHA program in category Enterprise Software Solutions.

Kiwi TCMS will have an on-site presence during the exhibition (1 day) where you can easily find us. We've also applied to the Startup Showcase track where you can see Alex present on stage. In addition, if all goes well our team will be joined by Alexandre Neto of QCooperative who is leading the effort to adopt Kiwi TCMS for testing the QGIS open source project. More on that here.

Exact schedules are still unknown at this stage so you will have to ping us via email/Twitter or find us on the conference floor if you want to meet.

Below is our video submission to the organizers:

https://www.youtube.com/embed/yIyhkcJ8How


If you like what we're doing and how Kiwi TCMS supports various communities please help us!

The ARM developers workstation: Why the SoftIron OverDrive 1000 is still relevant

Posted by Peter Czanik on September 21, 2021 12:29 PM

The promise of “boring” ARM hardware has been with us for almost a decade. And a couple of years ago it really arrived: easy to use, standards compliant ARM hardware is now available on the market. However, not for everyone. When it comes to buying ARM hardware, you still need to decide whether it is “boring” or affordable. There was one notable exception, the SoftIron OverDrive 1000. It had its limitations, but it was standards compliant right from day one, affordable, and easily available not just to large companies.

Why standards compliance is important

Before standardization arrived in the ARM world, each and every board had to be supported separately. You needed a dedicated installer for each board, often with its own documentation, as each board booted in a different way and you needed different workarounds while installing the operating system. The situation has improved over the years, but it is still far from ideal.

The good news? With standards, installation is as easy as on an x86_64 machine.

The bad news? There are many. First of all: standards compliant new machines are out of reach for most individual users or developers. Most of the boards / computers available to developers are not standards compliant, or became standards compliant only years after release, when they were already mostly obsolete. This means a lot of extra effort to get them supported by Linux distributions or BSD variants. Effort that could be spent on polishing the distro instead of doing ground work to get the given hardware up and running. The actual hardware is cheaper, but one pays a good part of the difference back with the extra time needed to get the software up and running.

The SoftIron OverDrive 1000

The OverDrive 1000 was announced by SoftIron during the annual openSUSE conference in 2016. It had some severe limitations on extensibility: no video, just two USB ports, no PCIe. However, it was the first ARM machine fully standards compliant from day one yet still easily available AND affordable for individual users and developers. And unfortunately also the last one.

I had access to ARM boards that were better than the OverDrive 1000 in one way or in another. But often even using the latest firmware and OS image from the board vendor I could not get a board up and running. With the OverDrive 1000 I rarely had such problems. Even if it was not listed among the supported and/or tested boards, the OverDrive 1000 just worked perfectly in the vast majority of cases.

My focus is application support. If someone reports that syslog-ng has problems on a given version of SLES, FreeBSD, Fedora or Debian running on ARM, I do not want to spend days figuring out how to install the given operating system and upgrade or downgrade the firmware along the way as needed by the given OS. The focus is the application; the OS install should be straightforward on the first try and ready in half an hour, including software download. The OverDrive 1000 provided me this convenience starting at day one. And even if the hardware is now completely obsolete, it still works reliably.

Trigger

You might wonder where the idea for this blog came from, especially since I already promised a couple of other topics. The trigger was a tweet about a freshly released ARM workstation and the related thread: https://twitter.com/marypcbuk/status/1438151262994845702

it's not for anyone who would think about the price; it's a development platform for car makers like Volkswagen

If you really want, you can find the related announcement on Twitter. I do not want to advertise them here. A friend of mine suggested putting a note here, even if I have never done this before: if they send you the machine, you could make an exception and post a review on your blog :-)

This kind of attitude hinders the wider adoption of ARM in enterprise grade servers. I talked to many people who are aware of the advantages of ARM and even have the financial resources to buy those servers. But without affordable, standards compliant, ready-to-use testing hardware, they just do not make the jump. In the x86_64 world they use a small cluster of HP Microserver boxes as a test environment for larger projects. For ARM there is nothing similar available at the moment.

A modern replacement for the OverDrive 1000 is badly needed:

  • ready to use hardware (not just board)
  • standards compliant from day one
  • affordable to individuals and smaller companies
  • easy to buy even by individuals

tmt hint 02: under the hood

Posted by Fedora Community Blog on September 21, 2021 08:00 AM

After making the first steps with tmt and investigating the provisioning options let’s now dive together a little bit more and look Under The Hood to see how plans, tests and stories work together on a couple of examples.

Plans

Use Plans to enable testing and group relevant tests together. They describe how to discover tests for execution, how to provision the environment, how to prepare it for testing, how to execute tests, report results, and finally how to finish the test job.

provision:
    how: container
    image: fedora:33
prepare:
    how: install
    package: wget
execute:
    how: tmt
    script: wget http://example.org/
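
A plan like this, stored in the repository (for example under plans/), can then be executed with a single command; by default tmt runs all the steps — discover, provision, prepare, execute, report and finish:

$ tmt run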

Tests

Tests define attributes which are closely related to individual test cases. This includes the test script, framework, directory path where the test should be executed, maximum test duration, and packages required to run the test. Here’s an example of test metadata:

summary: Fetch an example web page
test: wget http://example.org/
require: wget
duration: 1m
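
To quickly check which tests tmt discovers and whether their metadata is valid, the following commands can be handy (a small sketch, assuming tmt is installed):

$ tmt test ls
$ tmt test lint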

Stories

Use Stories to track implementation, test and documentation coverage for individual features or requirements. Thanks to this, you can track everything in one place, including the project implementation progress.

story:
    As a user I want to see more detailed information for
    particular command.
example:
  - tmt test show -v
  - tmt test show -vvv
  - tmt test show --verbose

Core attributes

Core attributes cover general metadata. This includes: summary or description for describing the content; the enabled attribute for disabling and enabling tests, plans, and stories; and the link key which tracks relations between objects.

description:
    Different verbose levels can be enabled by using the
    option several times.
link:
  - implemented-by: /tmt/cli.py
  - documented-by: /tmt/cli.py
  - verified-by: /tests/core/dry

Have a look at the whole chapter to learn more details and get some more context. See the Fedora Guide to learn even more about enabling tmt tests in the CI.

The post tmt hint 02: under the hood appeared first on Fedora Community Blog.

Next Open NeuroFedora meeting: 27 September 1300 UTC

Posted by The NeuroFedora Blog on September 20, 2021 08:00 AM

Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 27 September at 1300UTC in #fedora-neuro on IRC (Libera.chat). The meeting is a public meeting, and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 2021-09-27'

The meeting will be chaired by @shaneallcroft. The agenda for the meeting is:

We hope to see you there!

A new set of OpenType shaping rules for Malayalam script

Posted by Rajeesh K Nambiar on September 20, 2021 05:30 AM

TL;DR: research and development of a completely new set of OpenType layout rules for Malayalam traditional orthography.

Writing OpenType shaping rules is hard. Writing OpenType shaping rules for advanced (complex) scripts is harder. Writing OpenType shaping rules without causing any undesired ligature formations is even harder.

Background

The shaping rules for SMC fonts abiding by v2 of the Malayalam OpenType specification (mlm2 script tag) were polished in large part by me over many years, fixing shaping errors and undesired ligature formations. They still left some hard-to-fix bugs. Driven by the desire to fix such difficult bugs in RIT fonts, and by the copyright fiasco, I set out to write simplified OpenType shaping rules for Malayalam from scratch. Two major references helped in that quest: (1) a radically different approach I had tried, and failed with, a few years ago using the mlym script tag (aka Windows XP era shaping); (2) a manuscript by R. Chithrajakumar of Rachana Aksharavedi, who devised the ‘definitive character set’ for Malayalam script. The idea of the ‘definitive character set’ is that it contains all the valid characters of a script and does not contain any (invalid) characters not in the script. By that definition, I wanted to create the new shaping rules in such a way that they do not generate any invalid characters (e.g. with a detached u-kar).

Fig. 1. Samples of Malayalam definitive character set listing by R. Chithrajakumar, circa 1999. Source: K.H. Hussain.

“Simplify, simplify, simplify!”

Henry David Thoreau

It is my opinion that a lot of the complexity in Malayalam shaping comes from the fact that the Indic OpenType shaping specification largely follows Devanagari, which in turn was adapted from ISCII, which has (in my limited understanding) its roots in the component-wise metal type design of ligature glyphs. Many half, postbase and other shaping rules have their lineage there. I have also heard similar concerns about complexity expressed by others, including Behdad Esfahbod, the FreeFont maintainer, et al.

Implementation

As K.H. Hussain once rightly noted, the shaping rules were creating many undesired/unnecessary ligature glyphs by default, and additional shaping rules (complex contextual lookups) were written to avoid/undo those. A better, alternate approach would be: simply don’t generate undesired ligatures in the first place.

“Invert, always invert.”

Carl Gustav Jacob Jacobi

Around December 2019, I set out to write a definitive set of OpenType shaping rules for the traditional script set of Malayalam. Instead of relying on many different lookup types such as pref, pstf, blwf, pres, psts and a myriad of complex contextual substitutions, the only type of lookup required was akhn — because the definitive character set contains all ligatures of Malayalam, and those glyphs are designed in the font as single glyphs — no component-based design.

The draft rules were written in tandem with the RIT-Rachana redesign effort and tested against different shaping engines such as HarfBuzz, Allsorts, XeTeX, LuaHBTeX and DirectWrite/Uniscribe for Windows. Windows, being Windows (and also a maintainer of the OpenType specification), indeed did not work as expected while adhering to the specification. The Windows implementation clearly special-cased the pstf forms of യ (Ya, 0D2F) and വ (Va, 0D35). To make a single set of shaping rules work with all these shaping engines, the draft rules were slightly amended, et voila — it worked in all applications and OSen that use any of these shaping engines. It was decided to drop support for the mlym script, which was deprecated many years ago, and support only the mlm2 specification, which fixed many unfixable shortcomings of mlym. One notable shaping engine which doesn’t work with these rules is the Adobe text engine (Lipika?), but they have recently switched to HarfBuzz. That covers all major typesetting applications.

Fig. 2. Samples of the OpenType shaping rules for the definitive character set of Malayalam traditional orthography.

Testing fonts developed using this new set of shaping rules for Malayalam indeed showed that they do not generate any undesired ligatures in the first place. In addition, compared to the previous shaping rules, it gets rid of 70+ lines of complex contextual substitutions and other rules, while remaining easy to read and maintain.

Fig. 3. Old vs new shaping rules in RIT Rachana.

Application support

This new set of OpenType layout rules for Malayalam is tested to work 100% with following shaping engines:

  1. HarfBuzz
  2. Allsorts
  3. DirectWrite/Uniscribe (Windows shaping engine)

And GUI toolkits/applications:

  1. Qt (KDE applications)
  2. Pango/GTK (GNOME applications)
  3. LibreOffice
  4. Microsoft Office
  5. XeTeX
  6. LuaHBTeX
  7. Emacs
  8. Adobe InDesign (with HarfBuzz shaping engine)
  9. Adobe Photoshop
  10. Firefox, Chrome/Chromium, Edge browsers

Advantages

In addition, the advantages of the new shaping rules are:

  1. Adheres to the concept of ‘definitive character set’ of the language/script completely. Generate all valid conjunct characters and do not generate any invalid conjunct character.
  2. Same set of rules work fine without adjustments/reprogramming for ‘limited character set’ fonts. The ‘limited character set’ may not contain conjunct characters as extensive in the ‘definitive character set’; yet it would always have characters with reph and u/uu-kars formed correctly.
  3. Reduced complexity and maintenance (no complex contextual lookups, reverse chaining etc.). Write once, use in any fonts.
  4. Open source, libre software.

This new OpenType shaping rules program was released to the public along with RIT Rachana a few months ago, and it is also used in all other fonts developed by RIT. It is licensed under the Open Font License for anyone to use and integrate into their fonts; please ensure the copyright statements are preserved. The shaping rules are maintained at the RIT GitLab repository. Please create an issue in the tracker if you find any bugs, or send a merge request if any improvement is made.

Episode 289 – Who left this 0day on the floor?

Posted by Josh Bressers on September 20, 2021 12:01 AM

Josh and Kurt talk about an unusual number of really bad security updates. We even recorded this before the Azure OMIGOD vulnerability was disclosed. It’s certainly been a wild week with Apple and Chrome 0days, and a Travis CI secret leak. Maybe this is the new normal.

https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_289_Who_left_this_0day_on_the_floor.mp3

Show Notes

Ruby for ebook publishing

Posted by Josef Strzibny on September 20, 2021 12:00 AM

A lot of times, people ask what’s Ruby good for apart from Rails. Ruby is great for various tasks from several different domains, and today, I would like to share how anybody can use Ruby in publishing ebooks.

Since I used some Ruby tasks in publishing my first-ever ebook Deployment from Scratch, it crossed my mind to write down why I think Ruby is great for publishing ebooks.

PDF publishing

There is a whole Ruby toolkit to publish technical content in AsciiDoc called Asciidoctor. It’s a great toolkit to produce PDF, EPUB 3, or even manual pages.

Here’s a list of what Asciidoctor can do for you in terms of a PDF (stolen from their page):

  • Custom fonts (TTF or OTF)
  • Full SVG support (thanks to prawn-svg)
  • PDF document outline (i.e., bookmarks)
  • Title page
  • Table of contents page(s)
  • Document metadata (title, authors, subject, keywords, etc.)
  • Configurable page size (e.g., A4, Letter, Legal, etc)
  • Internal cross-reference links
  • Syntax highlighting with Rouge (preferred), Pygments, or CodeRay
  • Cover pages
  • Page background color or page background image with named scaling
  • Page numbering
  • Double-sided (aka prepress) printing mode (i.e., margins alternate on recto and verso pages)
  • Customizable running content (header and footer)
  • “Keep together” blocks (i.e., page breaks avoided in certain block content)
  • Orphaned section titles avoided
  • Autofit verbatim blocks (as permitted by base_font_size_min setting)
  • Table border settings honored
  • Font-based icons
  • Auto-generated index
  • Automatic hyphenation (when enabled)
  • Permissive line breaking for CJK languages
  • Compression / optimization of output file

If you are thinking of publishing your first technical ebook, it’s a strong contender. Just get familiar with the limitations before starting. You would use AsciiDoc the same way as Markdown, although the syntax is different:

= Hello, AsciiDoc!
Doc Writer <doc@example.com>

An introduction to http://asciidoc.org[AsciiDoc].

== First Section

* item 1
* item 2

[source,ruby]
puts "Hello, World!"

You then save AsciiDoc content with the .adoc extension and convert it by running asciidoctor (default backend generates HTML):

$ gem install asciidoctor-pdf
$ asciidoctor -b docbook5 mysample.adoc
$ asciidoctor -r asciidoctor-pdf -b pdf mysample.adoc

And because this post is about Ruby, you can call it from Ruby:

require 'asciidoctor'

Asciidoctor.convert_file 'mysample.adoc'

And also work with the generated content directly:

html = Asciidoctor.convert_file 'mysample.adoc', to_file: false, header_footer: true
puts html

My journey went from an old gitbook version that could still generate a PDF from Markdown, to Pandoc, which let me keep the Markdown I had and enhance it with LaTeX. For anything new, I would look into Asciidoctor first. You can start with their AsciiDoc Writer’s Guide.

As a side note, Asciidoctor uses the Prawn toolkit, which you can use directly for several different things. I used Prawn to build InvoicePrinter for example.

Text transformations

Since I ended up with Pandoc and a mixture of Markdown and LaTeX, my default EPUB version didn’t look good. See, in my text, I might have the following:

### Third headline

Text paragraph.

\cat{Something interesting that Tiger shares.}

I made the fictional character Tiger the Cat to make the heavily technical book feel lighter and entertaining. I needed to draw a box with a Tiger picture and text next to it. And I made a LaTeX \cat macro to do just that.

To my surprise, the conversion to EPUB worked, but the result was horrible. So I needed to replace this macro with an HTML snippet for the EPUB version before the transformation happened.

So I wrote a little script to find these occurrences and produce new sources for the EPUB:

#!/usr/bin/ruby
require 'fileutils'

FileUtils.rm_rf 'epub_chapters'
FileUtils.mkdir 'epub_chapters'

Dir.glob('chapters/*.md') do |file|
  chapter = File.read file
  chapter.gsub!(
    /^\\cat{(.*)}$/,
    '<div class="cat"><div class="tiger"><img src="..." /></div>\1</div>'
  )
  name = File.basename file
  File.write "epub_chapters/#{name}", chapter
end

There are many ways to pre- or post-process text, but this was my quick way to fix the EPUB version.

Landing page

Your book might sell better with an attractive landing page. If you plan on a full-blown website, I recommend looking into the Ruby static site generator Jekyll, for which I wrote some tips.
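
Getting a skeleton site up is quick (a minimal sketch; my-book-site is just a placeholder name):

$ gem install jekyll bundler
$ jekyll new my-book-site
$ cd my-book-site && bundle exec jekyll serve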

When I built my simple landing page, I realized that my chapter list kept going out of date while I continued to publish beta content. To that end, I decided to keep a short chapter description within the chapter Markdown source like this:

# Processes

<!--
headline: A closer look at Linux processes. CPU and virtual memory, background processes, monitoring, debugging, systemd, system logging, and scheduled processes.
-->

Running a web application is essentially running a suite of related programs concurrently as processes. Spawning a program process can be as simple as typing its name into a terminal, but how do we ensure that this program won't stop at some point? We need to take a closer look at what Linux processes are and how to bring them back to life from failures.
...

And I wrote a Ruby script that takes this meta information, makes HTML out of it, and updates the landing page:

#!/usr/bin/ruby
require 'redcarpet'

BOOK_DIR="/home/strzibny/Projects/deploymentfromscratch"
CHAPTER_DIR="#{BOOK_DIR}/chapters"

class Chapter
  attr_reader :index, :title, :headline, :html

  def initialize(index:, title:, headline:, html:)
    @index = index
    @title = title
    @headline = headline
    @html = html
  end
end

chapters = []

Dir.glob("#{CHAPTER_DIR}/*.md").sort.drop(1).each.with_index(1) do |file, index|
  content = File.read(file)
  title = content.scan(/^\# (.*)$/).first&.first
  headline = content.scan(/^headline: (.*)$/).first&.first

  if title
    html = Redcarpet::Markdown.new(Redcarpet::Render::HTML.new).render(content)
    chapters << Chapter.new(index: index, title: title, headline: headline, html: html)
  end
end

# ...and later...

BOOK_PAGE = "../index.html"

sections = chapters.map do |chapter|
  <<-EOF
    <div class="chapter">
      <strong>#{chapter.index}. #{chapter.title}</strong>
      <p>
        #{chapter.headline}
      </p>
    </div>
  EOF
end.join("\n")

page = File.read BOOK_PAGE
new_page = page.gsub(
  /<!--CHAPTERS START-->.*<!--CHAPTERS END-->/m,
  "<!--CHAPTERS START-->\n#{sections}\n<!--CHAPTERS END-->"
)
File.open(BOOK_PAGE, "w") { |file| file.puts new_page }

So remember, you can work with your sources and automate the landing page management. I used the redcarpet gem for Markdown processing, and there are also other useful gems like front_matter_parser.

PDF previews

While I was writing the alpha and beta releases of Deployment from Scratch, I wanted to send a preview from time to time. The obvious way is to limit the pages you render and perhaps use a PDF editor to insert something else. Or you can use Ruby.

The Ruby ecosystem features a nice PDF toolkit called HexaPDF that can be used to cut the pages you want and interleave them with other pages (an introduction, a call to action, a reminder, or final words for the preview). An example:

#!/usr/bin/ruby
require 'hexapdf'

demo = HexaPDF::Document.open("output/book.pdf")

preview = HexaPDF::Document.new

demo.pages.each_with_index { |page, page_index|
  if [0].include? page_index
    blank = preview.pages.add.canvas
    blank.font('Amiri', size: 25, variant: :bold)
    blank.text("This is a preview of Deployment from Scratch", at: [20, 800])
    blank.font('Amiri', size: 20)
    blank.text("Follow the book updates at https://deploymentfromscratch.com/.", at: [20, 550])
    blank.text("Write me what you think at strzibny@gmail.com.", at: [20, 500])
    blank.text("Or catch me on Twitter at https://twitter.com/strzibnyj.", at: [20, 450])
    blank.font('Amiri', size: 10)
    blank.text("Copyright by Josef Strzibny. All rights reserved.", at: [20, 20])
  end
}

preview.write("output/preview.pdf", optimize: false)

If you don’t need to add custom content, you can also use HexaPDF from a command line to just merge various pages from one or many PDFs:

$ hexapdf merge output/toc.pdf --pages 1-10 output/book.pdf --force

Image previews

I covered cutting out PDF previews, but I also wanted to include nice little image previews for my landing page. To that end, I separated the final PDF into individual PDF pages and converted them to images.

Although there are various PDF utilities, it’s easy to stick with HexaPDF for the first part of the job:

#!/usr/bin/ruby
require 'fileutils'
require 'hexapdf'

FileUtils.rm_rf 'preview'
FileUtils.mkdir 'preview'

file = "output/deploymentfromscratch.pdf"

pdf = HexaPDF::Document.open(file)

pdf.pages.each_with_index do |page, index|
  target = HexaPDF::Document.new
  target.pages << target.import(page)
  target.write("preview/#{index+1}.pdf", optimize: true)
end

Once I have individual PDFs, I go through them again and convert them to images with Ruby binding to vips:

#!/usr/bin/ruby
require 'fileutils'
require 'vips'

Dir.glob('preview/*.pdf') do |file|
  im = Vips::Image.new_from_file file, scale: 2.5
  im.write_to_file("#{file}.jpg")
end

Notice that I had to increase the scale, otherwise the result is of poor quality.

Once I have the individual images, I just insert them into my landing page. But you can also extend your Ruby task to do it for you automatically.

Customers’ management

I built a waitlist of more than 600 people before releasing Deployment from Scratch, and many of the people on the list became customers. But you see, I use Gumroad for selling the book and Mailchimp for the waitlist. Two different products and two separate lists.

What if I want to send a reminder or special offer to people that didn’t buy the book yet? I certainly don’t want to bother my current customers with an email they don’t need. Or what if I want to find out the total conversion rate of the waitlist?

Both tools offer to export the dataset, so all we need is a little bit of Ruby:

#!/usr/bin/ruby
require 'csv'

# Customers from Gumroad
customers = 'customers_sep14_2021.csv'

# Waitlist
list = 'subscribed_segment_export_48995a2a64.csv'

customer_rows = CSV.read(customers)
buyer_emails = []

(1..customer_rows.count-1).each do |num|
  email = customer_rows[num][4]
  buyer_emails << email if email
end

list_emails = []
list_rows = CSV.read(list)

(1..list_rows.count-1).each do |num|
  email = list_rows[num][0]
  list_emails << email if email
end

# Who didn't buy the book yet
not_bought = list_emails - buyer_emails

puts not_bought.count

If this is something you might want to do often, you can extend this to use the APIs directly without going through the manual download of the dataset.

Maintainable tasks

Although I started with a Makefile to stay on top of all these tasks, if Make is not in your blood, there is nothing easier than writing these tasks as Rake tasks:

task :generate_pdf do
  `asciidoctor -r asciidoctor-pdf -b pdf mysample.adoc`
end

task :prepare_preview do
  # ..
end

And call it by running rake:

$ rake generate_pdf

Build environment

It’s a good idea to keep your book production environment intact. For one, new versions of various tools can break your original workflow and rendering. Or you might forget all the LaTeX packages that got installed in the process.

As with other projects, you can use Vagrant and its Ruby-powered Vagrantfile to keep your project in the same state and survive the unexpected. A starting Vagrantfile can be kept simple with just a little bit of Ruby and Bash:

Vagrant.configure(2) do |config|
  config.vm.box = "fedora-33-cloud"

  config.vm.synced_folder ".", "/vagrant", type: :nfs, nfs_udp: false

  config.vm.provision "shell", inline: <<-SHELL
sudo dnf update -y || :

# Install dependencies...
sudo dnf install ruby pandoc -y || :
SHELL

end
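
With the Vagrantfile in place, bringing the environment up and getting a shell inside it is just:

$ vagrant up
$ vagrant ssh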

Conclusion

So there you have it – yet another domain where Ruby can help you, and perhaps even shines above the competition. A whole publishing toolkit, a Make-like build utility, PDF toolkits, and Ruby’s power to write simple scripts for text manipulation.

Google Professional Cloud Architect

Posted by Fabio Alessandro Locati on September 20, 2021 12:00 AM
After having renewed the Google Associate Cloud Engineer certification, it was the moment to renew the Google Professional Cloud Architect certification as well. Since I wanted to keep Windows on my laptop for the smallest amount of time possible, I decided to book the Professional Cloud Architect exam the day after the Associate Cloud Engineer one. On the exam day (18th of August), having had the experience of the previous day, I made sure to set everything up correctly.

Creating Quality Backtraces for Crash Reports

Posted by Michael Catanzaro on September 19, 2021 02:41 AM

Hello Linux users! Help developers help you: include a quality backtrace taken with gdb each and every time you create an issue report for a crash. If you don’t, most developers will request that you provide a backtrace, then ignore your issue until you manage to figure out how to do so. Save us the trouble and just provide the backtrace with your initial report, so everything goes smoother. (Backtraces are often called “stack traces.” They are the same thing.)

Don’t just copy the lower-quality backtrace you see in your system journal into your issue report. That’s a lot better than nothing, but if you really want the crash to be fixed, you should provide the developers with a higher-quality backtrace from gdb. Don’t know how to get a quality backtrace with gdb? Read on.

Future of Crash Reporting

Here are instructions for getting a quality backtrace for a crashing process on Fedora 35, which is scheduled to be released in October:

$ coredumpctl gdb
(gdb) bt full

Press ‘c’ (continue) when required. When it’s done printing, press ‘q’ to quit. That’s it! That’s all you need to know. You’re done. Two points of note:

  • When a process crashes, a core dump is caught by systemd-coredump and stored for future use. The coredumpctl gdb command opens the most recent core dump in gdb. systemd-coredump has been enabled by default in Fedora since Fedora 26, which was released four years ago. (It’s also enabled by default in RHEL 8.)
  • After opening the core dump, gdb uses debuginfod to automatically download all required debuginfo packages, ensuring the generated backtrace is useful. debuginfod is a new feature in Fedora 35.

If you’re not an inhabitant of the future, you are probably missing at least debuginfod today, and possibly also systemd-coredump depending on which operating system you are using, so we will also learn how to take a backtrace without these tools. It will be more complicated, of course.

systemd-coredump

If your operating system enables systemd-coredump by default, then congratulations! This makes reporting crashes much easier because you can easily retrieve a core dump for any recent crash using the coredumpctl command. For example, coredumpctl alone will list all available core dumps. coredumpctl gdb will open the core dump of the most recent crash in gdb. coredumpctl gdb 1234 will open the core dump corresponding to the most recent crash of a process with pid 1234. It doesn’t get easier than this.
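
To put the commands from the previous paragraph side by side:

$ coredumpctl              # list all available core dumps
$ coredumpctl gdb          # open the most recent core dump in gdb
$ coredumpctl gdb 1234     # open the core dump of the most recent crash of pid 1234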

Core dumps are stored under /var/lib/systemd/coredump. systemd-coredump will automatically delete core dumps that exceed configurable size limits (2 GB by default). It also deletes core dumps if your free disk space falls below a configurable threshold (15% free by default). Additionally, systemd-tmpfiles will delete core dumps automatically after some time has passed (three days by default). This ensures your disk doesn’t fill up with old core dumps. Although most of these settings seem good to me, the default 2 GB size limit is way too low in my opinion, as it causes systemd to immediately discard crashes of any application that uses WebKit. I recommend raising this limit to 20 GB by creating an /etc/systemd/coredump.conf.d/50-coredump.conf drop-in containing the following:

[Coredump]
ProcessSizeMax=20G
ExternalSizeMax=20G

The other settings are likely sufficient to prevent your disk from filling up with core dumps.

Sadly, although systemd-coredump has been around for a good while now and many Linux operating systems have it enabled by default, many still do not. Most notably, the Debian ecosystem is still not yet on board. To check if systemd-coredump is enabled on your system:

$ cat /proc/sys/kernel/core_pattern

If you see systemd-coredump, then you’re good.

To enable it in Debian or Ubuntu, just install it:

# apt install systemd-coredump

Ubuntu users, note this will cause apport to be uninstalled, since it is currently incompatible. Also note that I switched from $ (which indicates a normal prompt) to # (which indicates a root prompt).

In other operating systems, you may have to manually enable it:

# echo "kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h" > /etc/sysctl.d/50-coredump.conf
# /usr/lib/systemd/systemd-sysctl --prefix kernel.core_pattern

Note the exact core pattern to use changes occasionally in newer versions of systemd, so these instructions may not work everywhere.

Detour: Manual Core Dump Handling

If you don’t want to enable systemd-coredump, life is harder and you should probably reconsider, but it’s still possible to debug most crashes. First, enable core dump creation by removing the default 0-byte size limit on core files:

$ ulimit -c unlimited

This change is temporary and only affects the current instance of your shell. For example, if you open a new tab in your terminal, you will need to set the ulimit again in the new tab.
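
You can confirm what the current shell will allow at any time (0 means core dumps are disabled, so a new tab will show 0 again until you repeat the ulimit command):

$ ulimit -c
unlimited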

Next, run your program in the terminal and try to make it crash. A core file will be generated in the current directory. Open it by starting the program that crashed in gdb and passing the filename of the core file that was created. For example:

$ gdb gnome-chess ./core

This is downright primitive, though:

  • You're going to have a hard time getting backtraces for services that are crashing, for starters. If the service is started normally, how do you set the ulimit? I'm sure there's a way to do it, but I don't know how! (One possible approach is sketched after this list.) It's probably easier to start the service manually, but then which command line flags are needed to do so properly? It will be different for each service, and you have to figure this all out for yourself.
  • Special situations become very difficult. For example, if a service is crashing only when run early during boot, or only during an initial setup session, you are going to have an especially hard time.
  • If you don’t know how to reproduce a crash that occurs only rarely, it’s inevitably going to crash when you’re not prepared to manually catch the core dump. Sadly, not all crashes will occur on demand when you happen to be running the software from a terminal with the right ulimit configured.
  • Lastly, you have to remember to delete that core file when you're done, because otherwise it will take up space on your disk until you do. You'll probably notice if you leave core files scattered around your home directory, but you might not notice if you're working someplace else.
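
For the record, one approach that should work for a crashing systemd service is to raise the service's core file size limit with a drop-in and restart it. This is only a sketch, with foo.service standing in for the real unit name:

$ sudo systemctl edit foo.service
# in the editor that opens, add the following two lines, then save and exit:
[Service]
LimitCORE=infinity
# restart the service so the new limit takes effect:
$ sudo systemctl restart foo.service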

Seriously, just enable systemd-coredump. It solves all of these problems and guarantees you will always have easy access to a core dump when something crashes, even for crashes that occur only rarely.

Debuginfo Installation

Now that we know how to open a core dump in gdb, let’s talk about debuginfo. When you don’t have the right debuginfo packages installed, the backtrace generated by gdb will be low-quality. Almost all Linux software developers deal with low-quality backtraces on a regular basis, because most users are not very good at installing debuginfo. Again, if you’re reading this in the future using Fedora 35 or newer, you don’t have to worry about this anymore because debuginfod will take care of everything for you. I would be thrilled if other Linux operating systems would quickly adopt debuginfod so we can put the era of low-quality crash reports behind us. But since most readers of this blog today will not yet have debuginfod enabled, let’s learn how to install debuginfo manually.

As an example, I decided to force gnome-chess to crash using the command killall -SEGV gnome-chess, then I ran coredumpctl gdb to open the resulting core dump in gdb. After a bunch of spam, I saw this:

Missing separate debuginfos, use: dnf debuginfo-install gnome-chess-40.1-1.fc34.x86_64
--Type <RET> for more, q to quit, c to continue without paging--
Core was generated by `/usr/bin/gnome-chess --gapplication-service'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007fa23d8b55bf in __GI___poll (fds=0x5636deb06930, nfds=2, timeout=2830)
    at ../sysdeps/unix/sysv/linux/poll.c:29
29  return SYSCALL_CANCEL (poll, fds, nfds, timeout);
[Current thread is 1 (Thread 0x7fa23ca0cd00 (LWP 140177))]
(gdb)

If you are using Fedora, RHEL, or related operating systems, the line “missing separate debuginfos” is a good hint that debuginfo is missing. It even tells you exactly which dnf debuginfo-install command to run to remedy this problem! But this is a Fedora ecosystem feature, and you won’t see this on most other operating systems. Usually, you’ll need to manually locate the right debuginfo packages to install. Debian and Ubuntu users can do this by searching for and installing -dbg or -dbgsym packages until each frame in the backtrace looks good. You’ll just have to manually guess the names of which debuginfo packages you need to install based on the names of the libraries in the backtrace. Look here for instructions for popular operating systems.
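
As a purely illustrative example (the package names depend on the libraries in your backtrace, and your distribution's debug symbol archive must already be enabled), a GLib/GTK crash on Debian or Ubuntu might need something like:

# apt install libglib2.0-0-dbgsym libgtk-3-0-dbgsym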

How do you know when the backtrace looks good? When each frame has file names, line numbers, function parameters, and local variables! Here is an example of a bad backtrace, if I continue the gnome-chess example above without properly installing the required debuginfo:

(gdb) bt full
#0 0x00007fa23d8b55bf in __GI___poll (fds=0x5636deb06930, nfds=2, timeout=2830)
    at ../sysdeps/unix/sysv/linux/poll.c:29
        sc_ret = -516
        sc_cancel_oldtype = 0
#1 0x00007fa23eee648c in g_main_context_iterate.constprop () at /lib64/libglib-2.0.so.0
#2 0x00007fa23ee8fc03 in g_main_context_iteration () at /lib64/libglib-2.0.so.0
#3 0x00007fa23e4b599d in g_application_run () at /lib64/libgio-2.0.so.0
#4 0x00005636dd7b79a2 in chess_application_main ()
#5 0x00007fa23d7e7b75 in __libc_start_main (main=0x5636dd7aaa50 <main>, argc=2, argv=0x7fff827b6438, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fff827b6428)
    at ../csu/libc-start.c:332
        self = <optimized out>
        result = <optimized out>
        unwind_buf = 
              {cancel_jmp_buf = {{jmp_buf = {94793644186304, 829313697107602221, 94793644026480, 0, 0, 0, -829413713854928083, -808912263273321683}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x2, 0x7fff827b6438}, data = {prev = 0x0, cleanup = 0x0, canceltype = 2}}}
        not_first_call = <optimized out>
#6 0x00005636dd7aaa9e in _start ()

This backtrace has six frames, which shows where the code was during program execution when the crash occurred. You can see line numbers for frame #0 (poll.c:29) and #5 (libc-start.c:332), and these frames also show the values of function parameters and variables on the stack, which are often useful for figuring out what went wrong. These frames have good debuginfo because I already had debuginfo installed for glibc. But frames #1 through #4 do not look so useful, showing only function names and the library and nothing else. This is because I’m using Fedora 34 rather than Fedora 35, so I don’t have debuginfod yet, and I did not install proper debuginfo for libgio, libglib, and gnome-chess. (The function names are actually only there because programs in Fedora include some limited debuginfo by default. In many operating systems, you will see ??? instead of function names.) A developer looking at this backtrace is not going to know what went wrong.

Now, let’s run the recommended debuginfo-install command:

# dnf debuginfo-install gnome-chess-40.1-1.fc34.x86_64

When the command finishes, we’ll start gdb again, using coredumpctl gdb just like before. This time, we see this:

Missing separate debuginfos, use: dnf debuginfo-install avahi-libs-0.8-14.fc34.x86_64 colord-libs-1.4.5-2.fc34.x86_64 cups-libs-2.3.3op2-7.fc34.x86_64 fontconfig-2.13.94-2.fc34.x86_64 glib2-2.68.4-1.fc34.x86_64 graphene-1.10.6-2.fc34.x86_64 gstreamer1-1.19.1-2.1.18.4.fc34.x86_64 gstreamer1-plugins-bad-free-1.19.1-3.1.18.4.fc34.x86_64 gstreamer1-plugins-base-1.19.1-2.1.18.4.fc34.x86_64 gtk4-4.2.1-1.fc34.x86_64 json-glib-1.6.6-1.fc34.x86_64 krb5-libs-1.19.2-2.fc34.x86_64 libX11-1.7.2-3.fc34.x86_64 libX11-xcb-1.7.2-3.fc34.x86_64 libXfixes-6.0.0-1.fc34.x86_64 libdrm-2.4.107-1.fc34.x86_64 libedit-3.1-38.20210714cvs.fc34.x86_64 libepoxy-1.5.9-1.fc34.x86_64 libgcc-11.2.1-1.fc34.x86_64 libidn2-2.3.2-1.fc34.x86_64 librsvg2-2.50.7-1.fc34.x86_64 libstdc++-11.2.1-1.fc34.x86_64 libxcrypt-4.4.25-1.fc34.x86_64 llvm-libs-12.0.1-1.fc34.x86_64 mesa-dri-drivers-21.1.8-1.fc34.x86_64 mesa-libEGL-21.1.8-1.fc34.x86_64 mesa-libgbm-21.1.8-1.fc34.x86_64 mesa-libglapi-21.1.8-1.fc34.x86_64 nettle-3.7.3-1.fc34.x86_64 openldap-2.4.57-5.fc34.x86_64 openssl-libs-1.1.1l-1.fc34.x86_64 pango-1.48.9-2.fc34.x86_64

Yup, Fedora ecosystem users will need to run dnf debuginfo-install twice to install everything required, because gdb doesn’t list all required packages until the second time. Next, we’ll run coredumpctl gdb one last time. There will usually be a few more debuginfo packages that are still missing because they’re not available in the Fedora repositories, but now you’ll probably have enough to get a quality backtrace:

(gdb) bt full
#0  0x00007fa23d8b55bf in __GI___poll (fds=0x5636deb06930, nfds=2, timeout=2830)
    at ../sysdeps/unix/sysv/linux/poll.c:29
        sc_ret = -516
        sc_cancel_oldtype = 0
#1  0x00007fa23eee648c in g_main_context_poll
    (priority=, n_fds=2, fds=0x5636deb06930, timeout=, context=0x5636de7b24a0)
    at ../glib/gmain.c:4434
        ret = 
        errsv = 
        poll_func = 0x7fa23ee97c90 
        max_priority = 2147483647
        timeout = 2830
        some_ready = 
        nfds = 2
        allocated_nfds = 2
        fds = 0x5636deb06930
        begin_time_nsec = 30619110638882
#2  g_main_context_iterate.constprop.0
    (context=context@entry=0x5636de7b24a0, block=block@entry=1, dispatch=dispatch@entry=1, self=)
    at ../glib/gmain.c:4126
        max_priority = 2147483647
        timeout = 2830
        some_ready = 
        nfds = 2
        allocated_nfds = 2
        fds = 0x5636deb06930
        begin_time_nsec = 30619110638882
#3  0x00007fa23ee8fc03 in g_main_context_iteration
    (context=context@entry=0x5636de7b24a0, may_block=may_block@entry=1) at ../glib/gmain.c:4196
        retval = 
#4  0x00007fa23e4b599d in g_application_run
    (application=0x5636de7ae260 [ChessApplication], argc=-2105843004, argv=)
    at ../gio/gapplication.c:2560
        arguments = 0x5636de7b2400
        status = 0
        context = 0x5636de7b24a0
        acquired_context = 
        __func__ = "g_application_run"
#5  0x00005636dd7b79a2 in chess_application_main (args=0x7fff827b6438, args_length1=2)
    at src/gnome-chess.p/gnome-chess.c:5623
        _tmp0_ = 0x5636de7ae260 [ChessApplication]
        _tmp1_ = 0x5636de7ae260 [ChessApplication]
        _tmp2_ = 
        result = 0
...

I removed the last two frames because they are triggering a strange WordPress bug, but that’s enough to get the point. It looks much better! Now the developer can see exactly where the program crashed, including filenames, line numbers, and the values of function parameters and variables on the stack. This is as good as a crash report is normally going to get. In this case, it crashed when running poll() because gnome-chess was not actually doing anything at the time of the crash, since we crashed it by manually sending a SIGSEGV signal. Normally the backtrace will look more interesting.

Special Note for Arch Linux Users

Arch does not ship debuginfo packages, btw. To prepare a quality backtrace, you need to rebuild each package yourself with debuginfo enabled. This is a chore, to say the least. You will probably need to rebuild several system libraries in addition to the application itself if you want to get a quality backtrace. This is hardly impossible, but it’s a lot of work, too much to expect users to do for a typical bug report. You might want to consider attempting to reproduce your crash on another operating system in order to make it easier to get the backtrace. What a shame!

debuginfod for Fedora 34

Again, all of that manual debuginfo installation is no longer required as of Fedora 35, where debuginfod is enabled by default. It’s actually all ready to go in Fedora 34, just not enabled by default yet. You can try it early using the DEBUGINFOD_URLS environment variable:

$ DEBUGINFOD_URLS=https://debuginfod.fedoraproject.org/ coredumpctl gdb

Then you can watch gdb download the required debuginfo for you! Again, this environment variable will no longer be necessary in Fedora 35. (Technically, it will still be needed, but it will be configured by default.)
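
If you want to try it for a while without typing the variable each time, one option (a sketch, assuming a Bash shell) is to export it from your shell profile:

$ echo 'export DEBUGINFOD_URLS=https://debuginfod.fedoraproject.org/' >> ~/.bashrc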

debuginfod for Debian Users

Debian users can use debuginfod, but it has to be enabled manually:

$ DEBUGINFOD_URLS=https://debuginfod.debian.net/ gdb

See here for more information. This requires Debian 11 “bullseye” or newer. If you’re using Ubuntu or other operating systems derived from Debian, you’ll need to wait until a debuginfod server for your operating system is available.

Flatpak

If your application uses Flatpak, you can use the flatpak-coredumpctl script to open core dumps in gdb. For most runtimes, including those distributed by GNOME or Flathub, you will need to manually install (a) the debug extension for your app, (b) the SDK runtime corresponding to the platform runtime that you are using, and (c) the debug extension for the SDK runtime. For example, to install everything required to debug Epiphany 40 from Flathub, you would run:

$ flatpak install org.gnome.Epiphany.Debug//stable
$ flatpak install org.gnome.Sdk//40
$ flatpak install org.gnome.Sdk.Debug//40

(flatpak-coredumpctl will fail to start if you don’t have the correct SDK runtime installed, but it will not fail if you’re missing the debug extensions. You’ll just wind up with a bad backtrace.)

The debug extensions need to exactly match the versions of the app and runtime that crashed, so backtrace generation may be unreliable after you install them for the very first time, because you would have installed the latest versions of the extensions, but your core dump might correspond to an older app or runtime version. If the crash is reproducible, it’s a good idea to run flatpak update after installing to ensure you have the latest version of everything, then reproduce the crash again.

Once your debuginfo is installed, you can open the backtrace in gdb using flatpak-coredumpctl. You just have to tell flatpak-coredumpctl the app ID to use:

$ flatpak-coredumpctl org.gnome.Epiphany

You can pass matches to coredumpctl using -m. For example, to open the core dump corresponding to a crashed process with pid 1234:

$ flatpak-coredumpctl -m 1234 org.gnome.Epiphany

Thibault Saunier wrote flatpak-coredumpctl because I complained about how hard it used to be to debug crashed Flatpak applications. Clearly it is no longer hard. Thanks Thibault!

Update: On newer versions of Debian and Ubuntu, flatpak-coredumpctl is included in the libflatpak-dev subpackage rather than the base flatpak package, so you’ll have to install libflatpak-dev first. But on older OS versions, including Debian 10 “buster” and Ubuntu 20.04, it is unfortunately installed as /usr/share/doc/flatpak/examples/flatpak-coredumpctl rather than /usr/bin/flatpak-coredumpctl due to a regrettable packaging choice that has been corrected in newer package versions. As a workaround, you can simply copy it to /usr/local/bin. Don’t forget to delete your copy after upgrading to a newer OS version, or it will shadow the packaged version.
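
On those older releases, the workaround might look like this (a sketch; the chmod is only needed if the shipped example is not already executable):

$ sudo cp /usr/share/doc/flatpak/examples/flatpak-coredumpctl /usr/local/bin/
$ sudo chmod +x /usr/local/bin/flatpak-coredumpctl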

Fedora Flatpaks

Flatpaks distributed by Fedora are different than those distributed by GNOME or by Flathub because they do not have debug extensions. Historically, this has meant that debugging crashes was impractical. The best solution was to give up.

Good news! Fedora’s Flatpaks are compatible with debuginfod, which means debug extensions will no longer be missed. You do still need to manually install the org.fedoraproject.Sdk runtime corresponding to the version of the org.fedoraproject.Platform runtime that the application uses, because this is required for flatpak-coredumpctl to work, but nothing else is required. For example, to get a backtrace for Fedora’s Epiphany Flatpak using a Fedora 35 host system, I ran:

$ flatpak install org.fedoraproject.Sdk//f34
$ flatpak-coredumpctl org.gnome.Epiphany
(gdb) bt full

(The f34 is not a typo. Epiphany currently uses the Fedora 34 runtime regardless of what host system you are using.)

That’s it!

Miscellany

At this point, you should know enough to obtain a high-quality backtrace on most Linux systems. That will usually be all you really need, but it never hurts to know a little more, right?

Alternative Types of Backtraces

At the top of this blog post, I suggested using bt full to take the backtrace because this type of backtrace is the most useful to most developers. But there are other types of backtraces you might occasionally want to collect:

  • bt on its own without full prints a much shorter backtrace without stack variables or function parameters. This form of the backtrace is more useful for getting a quick feel for where the bug is occurring, because it is much shorter and easier to read than a full backtrace. But because there are no stack variables or function parameters, it might not contain enough information to solve the crash. I sometimes like to paste the first few lines of a bt backtrace directly into an issue report, then submit the bt full version of the backtrace as an attachment, since an entire bt full backtrace can be long and inconvenient if pasted directly into an issue report.
  • thread apply all bt prints a backtrace for every thread. Normally these backtraces are very long and noisy, so I don’t collect them very often, but when a threadsafety issue is suspected, this form of backtrace will sometimes be required.
  • thread apply all bt full prints a full backtrace for every thread. This is what automated bug report tools generally collect, because it provides the most information. But these backtraces are usually huge, and this level of detail is rarely needed, so I normally recommend starting with a normal bt full.

If in doubt, just use bt full like I showed at the top of this blog post. Developers will let you know if they want you to provide the backtrace in a different form.

gdb Logging

You can make gdb print your session to a file. For longer backtraces, this may be easier than copying the backtrace from a terminal:

(gdb) set logging on
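
By default the session is written to gdb.txt in the current directory; you can also choose the file name before enabling logging:

(gdb) set logging file backtrace.txt
(gdb) set logging on
(gdb) bt full
(gdb) set logging off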

Memory Corruption

While a backtrace taken with gdb is usually enough information for developers to debug crashes, memory corruption is an exception. Memory corruption is the absolute worst. When memory corruption occurs, the code will crash in a location that may be far removed from where the actual bug occurred, rendering gdb backtraces useless for tracking down the bug. As a general rule, if you see a crash inside a memory allocation routine like malloc() or g_slice_alloc(), you probably have memory corruption. If you see magazine_chain_pop_head(), that’s called by g_slice_alloc() and is a sure sign of memory corruption. Similarly, crashes in GTK’s CSS machinery are almost always caused by memory corruption somewhere else.

Memory corruption is generally impossible to debug unless you are able to reproduce the issue under valgrind. valgrind is extremely slow, so it’s impractical to use it on a regular basis, but it will get to the root of the problem where gdb cannot. As a general rule, you want to run valgrind with --track-origins=yes so that it shows you exactly what went wrong:

$ valgrind --track-origins=yes my_app

When valgrinding a GLib application, including all GNOME applications, always use the G_SLICE=always-malloc environment variable to disable GLib’s slice allocator, to ensure the highest-quality diagnostics. Correction: the slice allocator can now detect valgrind and disable itself automatically.

When valgrinding a WebKit application, there are some WebKit-specific environment variables to use. Malloc=1 will disable the bmalloc allocator, GIGACAGE_ENABLED=0 will disable JavaScriptCore’s Gigacage feature (Update: turns out this is actually implied by Malloc=1), and WEBKIT_FORCE_SANDBOX=0 will disable the web process sandbox used by WebKitGTK or WPE WebKit:

$ Malloc=1 WEBKIT_FORCE_SANDBOX=0 valgrind --track-origins=yes epiphany

If you cannot reproduce the issue under valgrind, you’re usually totally out of luck. Memory corruption that only occurs rarely or under unknown conditions will lurk in your code indefinitely and cause occasional crashes that are effectively impossible to fix.

Another good tool for debugging memory corruption is address sanitizer (asan), but this is more complicated to use. Experienced users who are comfortable with rebuilding applications using special compiler flags may find asan very useful. However, because it can be very difficult to use,  I recommend sticking with valgrind if you’re just trying to report a bug.

Apport and ABRT

There are two popular downstream bug reporting tools: Ubuntu has Apport, and Fedora has ABRT. These tools are relatively easy to use — no command line knowledge required — and produce quality crash reports. Unfortunately, while the tools are promising, the crash reports go to downstream packagers who are generally either not watching bug reports, or else not interested in or capable of fixing upstream software problems. Since downstream reports are very often ignored, it's better to report crashes directly to upstream if you want your issue to be seen by the right developers and actually fixed. Of course, only report issues upstream if you're using a recent software version. Fedora and Arch users can pretty much always safely report directly to upstream, as can Ubuntu users who are using the very latest version of Ubuntu. If you are an Ubuntu LTS user, you should stick with reporting issues to downstream only, or at least take the time to verify that the issue still occurs with a more recent software version.

There are a couple more problems with these tools. As previously mentioned, Ubuntu’s apport is incompatible with systemd-coredump. If you’ve read this far, you know you really want systemd-coredump enabled, so I recommend disabling apport until it learns to play ball with systemd-coredump.

The technical design of Fedora's ABRT is currently better because it actually retrieves your core dumps from systemd-coredump, so you don't have to choose between one or the other. Unfortunately, ABRT has many serious user experience bugs and warts. I can't recommend it for this reason, but if it works well enough for you, it does create some great downstream crash reports. Whether a downstream package maintainer will look at those reports is hit or miss, though.

What is a crash, really?

Most developers consider crashes on Unix systems to be program termination via a Unix signal that triggers creation of a core dump. The most common of these are SIGSEGV (segmentation fault, “invalid memory reference”) or SIGABRT (usually an intentional crash due to an assertion failure). Less-common signals are SIGBUS (“bad memory access”) or SIGILL (“illegal instruction”). Sandboxed applications might occasionally see SIGSYS (“bad system call”). See the manpage signal(7) for a full list. These are cases where you can get a backtrace to help with tracking down the issue.

What is not a crash? If your application is hanging or just not behaving properly, that is not a crash. If your application is killed using SIGTERM or SIGKILL — this can happen when systemd-oomd determines you are low on memory,  or when a service is taking too long to stop — this is also not a crash in the usual sense of the word, because you’re not going to be able to get a backtrace for it. If a website is slow or unavailable, the news might say that it “crashed,” but it’s obviously not the same thing as what we’re talking about here. The techniques in this blog post are no use for these sorts of “crashes.”

Conclusion

If you have systemd-coredump enabled and debuginfod installed and working, most crash reports will be simple.  Memory corruption is a frustrating exception. Encourage your operating system to enable systemd-coredump and debuginfod if it doesn’t already.  Happy crash reporting!

Friday’s Fedora Facts: 2021-37

Posted by Fedora Community Blog on September 17, 2021 07:24 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference       Location            Date       CfP
Ohio Linux Fest  Columbus, OH, US    early Dec  closes 1 Oct
DevConf.CZ       Brno, CZ & virtual  28–29 Jan  closes 24 Oct
SCaLE            Pasadena, CA, US    5–8 Mar    closes 30 Nov

Help wanted

Upcoming test days

Prioritized Bugs

Upcoming meetings

Releases

Release            Open bugs
F33                5195
F34                4908
F35 (pre-release)  1088
F36 (rawhide)      6103

Fedora Linux 35

Schedule

  • 2021-09-28 — Beta release target #2
  • 2021-10-05 — Final freeze begins
  • 2021-10-19 — Early Final target date
  • 2021-10-22 — Final target date #1

For the full schedule, see the schedule website.

Changes

See the ChangeSet page or Bugzilla for information on approved Changes.

BZ Status  Count
ASSIGNED   1
MODIFIED   2
ON_QA      41

Blockers

Bug ID   Component       Bug Status  Blocker Status
1989726  mesa            NEW         Accepted (Beta)
1999321  freeipa         NEW         Accepted (Beta)
1997310  selinux-policy  VERIFIED    Accepted (Beta)
2004604  gnome-software  NEW         Proposed (Beta)

Fedora Linux 36

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal                                                   Type         Status
Enable exclude_from_weak_autodetect by default in LIBDNF   System-Wide  announced

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-37 appeared first on Fedora Community Blog.

Please nominate Kiwi TCMS at MLH Open Source Awards

Posted by Kiwi TCMS on September 17, 2021 06:40 PM

MLH Nomination

Last year Kiwi TCMS started partnering with the MLH Fellowship open source program. During the span of three semesters, fellows received mentorship and career advice from us. They were also able to work on 20+ issues, the majority of which have been completed.

For that we kindly ask the open source community to nominate Kiwi TCMS at the MLH Open Source Awards.

Steps to reproduce:

  1. Go to https://fellowship.mlh.io/opensourceawards
  2. Click the Submit a Nomination button
  3. Follow the instructions on screen!

Expected results:

  1. It should take you 2 minutes
  2. Your submission is recorded by MLH

Why are we doing this

MLH is recognizing extraordinary open source projects and communities. It is up to you, our community members and the general public, to decide whether Kiwi TCMS qualifies or not. Winning this award will let us show what we do to a larger audience!

Thank you for supporting Kiwi TCMS and happy testing!


If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Changes to Bugzilla queries

Posted by Fedora Community Blog on September 17, 2021 12:04 PM

On 13 September 2021, Red Hat’s Bugzilla team released updates to Bugzilla that included new functionality for pagination. There is also a change to the default number of results in the bug search API to support this feature. The default is now 20, but it can be raised up to 1000 using the limit and offset parameters.

Pagination for Bugzilla Data Tables

Bugzilla now supports pagination for data tables, improving performance by not loading all results at one time. As users navigate through pages, results will be lazy-loaded. Users can perform bulk actions by selecting rows from the list.

Changes to Bugzilla list API response

Authenticated users

The bug search API (REST/XML-RPC/JSON-RPC) now returns 20 bugs by default, and users can change this by specifying the limit parameter. The value of limit can be up to 1000 bugs. If you need more than 1000 results, you can use the offset parameter to page through them. Sending 0 as the limit parameter returns the maximum of 1000 bugs.

Additionally, the response now includes “total_matches”, “limit”, and “offset” values. These give the total number of bugs matching the query and the paging parameters applied to the results.

You can also set your account’s default limit in user preferences. The account default can be set as high as 200. You can still use the limit parameter to get up to 1000 bugs.
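
As a purely hypothetical example of an authenticated REST query with explicit paging (the product filter and the API key are placeholders):

$ curl "https://bugzilla.redhat.com/rest/bug?product=Fedora&limit=1000&offset=0&api_key=YOUR_API_KEY"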

Non-Authenticated users

Since Bugzilla receives many unauthenticated calls that simply scan for results, the team decided to cap unauthenticated results at 20 bugs; the bug list API will not respect the limit parameter in this case. If you authenticate your query, you can set the limit as described above.

Note: Depending on the number of fields and data requested, your query can time out.

The post Changes to Bugzilla queries appeared first on Fedora Community Blog.

The syslog-ng Insider 2021-09: 3.34; OpenBSD; OpenSearch; http() destination;

Posted by Peter Czanik on September 17, 2021 11:04 AM

Dear syslog-ng users,

This is the 94th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news. Topics include:

  • Version 3.34.1 of syslog-ng available
  • Syslog-ng updated in OpenBSD ports
  • OpenSearch and syslog-ng
  • Creating a new http()-based syslog-ng destination: Seq

It is available at: https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2021-09-3-34-openbsd-opensearch-http-destination

Open source game achievements

Posted by Fedora Magazine on September 17, 2021 08:00 AM

Learn how Gamerzilla brings an achievement system to open source games and enables all developers to implement achievements separate from the game platform.

Some open source games rival the quality of commercial games. While it is hard to match the quality of triple-A games, open source games compete effectively against indie games. But gamer expectations change over time. Early games included only a high score; achievements expanded over time to promote replay. For example, you may have completed a level but you didn’t find all the secrets or collect all the coins. The Xbox 360 introduced the first multi-game online achievement system. Since that introduction, many game platforms added an achievement system.

Open source games are largely left out of the achievement systems. You can publish an open source game on Steam, but it costs money, and Steam focuses on working with companies, not the free software community. Additionally, this locks players into a non-free platform.

Commercial game developers are not well served either, since some players enjoy achievements and refuse to purchase from other stores due to the inability to share their accomplishments. This lock-in gives the power to the platform holder. Each platform has a different system, forcing the developer to implement support and testing multiple times. Smaller platforms are likely to be skipped entirely. Furthermore, the platform holder has access to the achievement data of all companies using their system, which could be used for competitive advantage.

Architecture of Gamerzilla

Gamerzilla is an open source game achievement system which attempts to correct this situation. The design considered both open source and commercial games. You can run your own Gamerzilla server, use one provided by a game store, or use one run by a distribution or another group. Where you buy the game doesn’t matter. The achievement data uploads to your Gamerzilla server.

Game achievements require two things: a game and a Gamerzilla server. As game collections grow, however, that setup has a disadvantage. Each game needs to have credentials to upload to the Gamerzilla server. Many gamers turn to game launchers due to their large number of games and ability to synchronize with one or more stores. By adding Gamerzilla support to the launcher, the individual games no longer need to know your credentials. Session results will relay from the game launcher to the Gamerzilla server.

At one time, freegamedev.net provided the Hubzilla social networking system. We created an addon allowing us to jump start Gamerzilla development. Unfortunately server upgrades broke the service so freegamedev.net stopped offering it.

For Gamerzilla servers, two implementations exist. Maintaining Hubzilla is a complex task, so we developed a standalone Gamerzilla service using .Net and React. The API used by games remains the same so it doesn’t matter which implementation you connect to.

Game launcher development and support often lag. To facilitate adding support, we created libgamerzilla. The library handles all the interaction between the game launcher, games, and the Gamerzilla server. Right now only GameHub has an implementation with Gamerzilla support, and merging it into the project is pending. On Fedora Linux, the libgamerzilla-server package serves as a temporary solution. It does not launch games but listens for achievements and relays them to your server.

Game support continues growing. As with game launchers, developers use libgamerzilla to handle the Gamerzilla integration. The library, written in C, is in use from a variety of languages like Python and Nim. Games which already have an achievement system typically take only a few days to add support. For other games, collecting all the information to award the achievements occupies the bulk of the implementation time.

Setting up a server

The easiest server to set up is the Hubzilla addon. That, however, requires a working Hubzilla site, which is not the simplest thing to set up. The new .Net and React server can be set up relatively easily on Fedora Linux, although there are a lot of steps. The readme details all the steps. The long set of steps is, in part, due to the lack of a built release. This means you need to build the .Net and the React code. Once built, the React code is served directly by Apache. A new service runs the .Net piece. Apache proxies all requests for the Gamerzilla API to the new service.

With the setup steps done, Gamerzilla runs but there are no users. There needs to be an easy way to create an administrator and register new users. Unfortunately this piece does not exist yet. At this time, users must be entered directly using the sqlite3 command line tool. The instructions are in the readme. Users can be publicly visible or not. The approval flag allows new users to not use the system immediately, but web registration still needs to be implemented. The user piece is designed with replacement in mind. It would not be hard to replace backend/Service/UserService.cs to integrate with an existing site. Gaming web sites could use this to offer Gamerzilla achievements to their users.

Currently the backend uses a sqlite database. No performance testing has been done. We expect that larger installations may need to modify the system to use a more robust database system.

Testing the system

There is no game launcher easily available at the moment. If you install libgamerzilla-server, you will have the gamerzillaserver command available from the command line. The first time you run it, you enter your URL and login information. Subsequent executions will simply read the information from the configuration file. There is currently no way to correct a mistake except deleting the file at .local/share/gamerzillaserver/server.cfg and running gamerzillaserver again.
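
In other words, if you mistype something on the first run, the reset currently looks like this:

$ rm ~/.local/share/gamerzillaserver/server.cfg
$ gamerzillaserver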

Most games have no built releases with Gamerzilla support. Pinball Disc Room on itch.io does have support built into the Linux version. The web version has no achievements. There are only two achievements in the game, one for surviving for ten seconds and the other for unlocking and using the tunnel. With a little practice you can get an achievement. You need to check your Gamerzilla server, as the game provides no visual notification of the achievement.

Currently no game packaged in Fedora Linux supports Gamerzilla. SuperTuxKart merged support but is still awaiting a new release. Seahorse Adventures and Shippy 1984 added achievements, but new releases are not packaged yet. Some games with support we maintain independently, as the developers ignore pull requests or other attempts to contact them.

Future work

Gamerzilla needs more games. A variety of games currently support the system. An addition occurs nearly every month. If you have a game you like, ask the developer to support Gamerzilla. If you are making a game and need help adding support, please let us know.

Server development proceeds at a slow pace and we hope to have a functional registration system soon. After that we may setup a permanent hosting site. Right now you can see our test server. Some people expressed concern with the .Net backend. The API is not very complex and could be rewritten in Python fairly easily.

The largest unknown remains game launchers. GameHub wants a generic achievement interface. We could try to work with them to get that implemented. Adding support to the itch.io app could increase interest in the system. Another possibility is to do away with the game launcher entirely. Perhaps adding something like the gamerzillaserver to GNOME might be possible. You would then configure your URL and login information on a settings page. Any game launched could then record achievements.

Cool happenings in Fedora Workstation land

Posted by Christian F.K. Schaller on September 16, 2021 12:58 PM

Been some time since my last update, so I felt it was time to flex my blog writing muscles again and provide some updates on some of the things we are working on in Fedora in preparation for Fedora Workstation 35. This is not meant to be a comprehensive what's new article about Fedora Workstation 35, more of a listing of some of the things we are doing as part of the Red Hat desktop team.

NVidia support for Wayland
One thing we have spent a lot of effort on for a long time now is getting full support for the NVidia binary driver under Wayland. It has been a recurring topic in our bi-weekly calls with the NVidia engineering team ever since we started looking at moving to Wayland. There has been basic binary driver support for some time, meaning you could run a native Wayland session on top of the binary driver, but the critical missing piece was that you could not get accelerated graphics when running applications through XWayland, our X.org compatibility layer. That basically meant that any application requiring 3D support which wasn't a native Wayland application yet wouldn't work. So over the last months we have had a great collaboration with NVidia around closing this gap, with them working closely with us on fixing issues in their driver while we have been fixing bugs and missing pieces in the rest of the stack. We have been reporting and discussing issues back and forth, allowing a very quick turnaround on issues as we find them, which of course all resulted in the NVidia 470.42.01 driver with XWayland support. I am sure we will find new corner cases that need to be resolved in the coming months, but I am equally sure we will be able to resolve them quickly thanks to the close collaboration we have now established with NVidia. And I know some people will wonder why we spent so much time working with NVidia around their binary driver, but the reality is that NVidia is the market leader, especially in the professional Linux workstation space, and there are a lot of people who would either end up not using Linux or using Linux with X without it, including a lot of Red Hat customers and Fedora users. And that is what I and my team are here for at the end of the day: to make sure Red Hat customers are able to get their job done using their Linux systems.

Lightweight kiosk mode
One of the wonderful things about open source is the constant flow of code and innovation between all the different parts of the ecosystem. For instance, one thing we on the RHEL side have often been asked about over the last few years is a lightweight and simple-to-use solution for people wanting to run single application setups, like information boards, ATM machines, cash registers, information kiosks and so on. For many use cases people felt that running a full GNOME 3 desktop underneath their application was either too resource hungry or created a risk that people would accidentally end up in the desktop session. At the same time, from our viewpoint as a development team we didn't want a completely separate stack for this use case, as that would just increase our maintenance burden by forcing us to do a lot of things twice. So to solve this problem Ray Strode spent some time writing what we call GNOME Kiosk mode, which makes setting up a simple session running a single application easy, without running things like GNOME Shell, Tracker, Evolution and so on. This gives you a window manager with full support for the latest technologies such as compositing, libinput and Wayland, but coming in at about 18MB, which is about 71MB less than a minimal GNOME 3 desktop session. You can read more about the new Kiosk mode and how to use it in this great blog post from our savvy Edge Computing Product Manager Ben Breard. The kiosk mode session described in Ben's article about RHEL will be available with Fedora Workstation 35.

High-definition mouse wheel support
A major part of what we do is making sure that Red Hat Enterprise Linux customers and Fedora users get hardware support on par with what you find on other operating systems. We try our best to work with our hardware partners, like Lenovo, to ensure that such hardware support arrives at the same time as those features are enabled on other systems, but some things end up taking longer for various reasons. Support for high-definition mouse wheels was one of those. Peter Hutterer, our resident input expert, put together a great blog post explaining the history and status of high-definition mouse wheel support. As Peter points out in his blog post, the feature is not yet fully supported under Wayland, but we hope to close that gap in time for Fedora Workstation 35.

Mouse with HiRes scroll wheel

PipeWire
I feel I can't do one of these posts without talking about the latest developments in PipeWire, our unified audio and video server. Wim Taymans keeps working with the rapidly growing PipeWire community to fix issues as they are reported and add new features to PipeWire. Most recently Wim's focus has been on implementing S/PDIF passthrough over both S/PDIF and HDMI connections. This will allow us to send undecoded data over such connections, which is critical for working well with surround sound systems and soundbars. The PipeWire community has also been working hard on further improving the Bluetooth support, with battery status support for the headset profile using Apple extensions. aptX-LL and FastStream codec support was also added. And of course a huge amount of bug fixes; it turns out that when you replace two different sound servers that have been around for close to two decades there are a lot of corner cases to cover :). Make sure to check out the two latest release notes for 0.3.35 and for 0.3.36 for details.

EasyEffects is a great example of a cool new application built with PipeWire

Privacy screen
Another feature that we have been working on as a result of our Lenovo partnership is privacy screen support. For those not familiar with this technology, it basically allows you to reduce the readability of your screen when viewed from the side, so that if you are using your laptop at a coffee shop, for instance, a person sitting close by will have a much harder time trying to read what is on your screen. Hans de Goede has been shepherding the kernel side of this forward, working with Marco Trevisan from Canonical on the userspace part of it (which also makes this a nice example of cross-company collaboration), allowing you to turn this feature on or off. This feature is not likely to fully land in time for Fedora Workstation 35, though, so we are looking at whether we will bring it in as an update to Fedora Workstation 35 or whether it will be a Fedora Workstation 36 feature.

Penny

Zink inside the penny


As most of you know, the future of 3D graphics on Linux is the Vulkan API from the Khronos Group. This doesn't mean that OpenGL is going away anytime soon though, as there is a large host of applications out there using this API, and for certain types of 3D graphics development developers might still choose to use OpenGL over Vulkan. Of course for us that creates a bit of a challenge, because maintaining two 3D graphics interfaces is a lot of work, even with the great help and contributions from the hardware makers themselves. So we have been eyeing the Zink project for a while, which aims at re-implementing OpenGL on top of Vulkan, as a potential candidate for solving our long term need to support the OpenGL API without drowning us in work while doing so. The big advantage of Zink is that it allows us to support one shared OpenGL implementation across all hardware and then focus our HW support efforts on the Vulkan drivers. As part of this effort Adam Jackson has been working on a project called Penny.

Zink implements OpenGL in terms of Vulkan, as far as the drawing itself is concerned, but presenting that drawing to the rest of the system is currently system-specific (GLX). For hardware that already has a Mesa driver, we use GBM. On NVIDIA’s Vulkan (and probably any other binary stacks on Linux, and probably also like WSL or macOS + MoltenVK) we download the image from the GPU back to the CPU and then use the same software upload/display path as llvmpipe, which as you can imagine is Not Fast.

Penny aims to extend Zink by replacing both of those paths, and instead using the various Vulkan WSI extensions to manage presentation. Even for the GBM case this should enable higher performance since zink will have more information about the rendering pipeline (multisampling in particular is poorly handled atm). Future window system integration work can focus on Vulkan, with EGL and GLX getting features “for free” once they’re enabled in Vulkan.

3rd party software cleanup
Over time we have been working on adding more and more 3rd party software for easy consumption in Fedora Workstation. The problem we discovered, though, was that because this was done over time, with changing requirements and expectations, the functionality was not behaving in a very intuitive way, and there were also new questions that needed to be answered. So Allan Day and Owen Taylor spent some time this cycle reviewing all the bits and pieces of this functionality and worked to clean it up. The goal is that when you enable third-party repositories in Fedora Workstation 35 it behaves in a much more predictable and understandable way and also includes a lot of applications from Flathub. Yes, that is correct: you should be able to install a lot of applications from Flathub in Fedora Workstation 35 without having to first visit the Flathub website to enable it; instead they will show up once you turn the knob for general 3rd party application support.

Power profiles
Another item we spent quite a bit of time on for Fedora Workstation 35 is making sure we integrate the Power Profiles work that Bastien Nocera has been working on as part of our collaboration with Lenovo. Power Profiles is basically a feature that allows your system to behave in a smarter way when it comes to power consumption and thus prolongs your battery life. So, for instance, when we notice you are getting low on battery we can offer to put you into a strong power saving mode to prolong how long you can use the system until you can recharge. There is a more in-depth explanation of Power Profiles in the official README.
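
For those who want to poke at it from a terminal, power-profiles-daemon also ships a small command line tool. A quick sketch, assuming it is installed:

$ powerprofilesctl list            # show the available profiles and which one is active
$ powerprofilesctl set power-saver # switch to the power saving profile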

Wayland
I usually also end up talking about Wayland in these posts, but I expect to be doing that less going forward, as we have now covered all the major gaps we saw between Wayland and X.org. Jonas Ådahl got the headless support merged, which was one of our big missing pieces, and as mentioned above Olivier Fourdan, Jonas and others worked with NVidia on getting the binary driver with XWayland support working with GNOME Shell. Of course, this being software, we are never truly done; there will be new issues discovered, random bugs that need to be fixed, and of course new features that need to be implemented. We already have our next big team focus in place, HDR support, which will need work from the graphics drivers, up through Mesa, into the window manager and the GUI toolkits, and in the applications themselves. We have been investigating and trying out some things for a while already, but we are now ready to make this a main focus for the team. In fact, we will soon be posting a new job listing for a full-time engineer to work on HDR vertically through the stack, so keep an eye out for that if you are interested in working on this. The job will be open to candidates who wish to work remotely, so as long as Red Hat has a business presence in the country you live in, we should be able to offer you the job if you are the right candidate for us. Update: the job listing is now online for our HDR engineer.

BTW, if you want to see future updates and keep on top of other happenings from Fedora and Red Hat in the desktop space, make sure to follow me on twitter.

Anaconda accessibility improvements

Posted by rhinstaller on September 16, 2021 12:37 PM
ALVA braille terminal, CC-BY-SA via Wikipedia.

Good news: we have started working on the accessibility of Anaconda! For a start, braille terminals now somewhat work in text mode.

Current state

On the Workstation images, accessibility was already at the same level a finished system would offer. Workstation media run a full Gnome session, with Orca available. The installer does not have to do anything. However, for the Server images the situation is different. The environment is heavily reduced: no sound, no Gnome, no Orca. That also means no accessibility. Let’s change that!

The latest Fedora 35 beta nightly builds now have the brltty screen reader on Server images. Thus far, brltty is enabled only for the console, which requires Anaconda to be started in text mode. There is also no means to configure the brltty session, so autodetection must work for your braille terminal device.

Anaconda text mode, with systemctl showing the brltty service as Active

If you have a braille terminal, you can already try this out!

  1. Grab an “Everything boot” or “Server boot” image from the Fedora nightly compose finder page.
  2. Prepare a flash drive or DVD to boot from. Start the computer with the flash drive inserted.
  3. In the machine’s settings, choose to boot this time from the flash drive.
  4. Press e to edit the boot options. Unfortunately the boot menu itself is not read by brltty, so you will have to guess the timing.
  5. Type space and then inst.text to start Anaconda in text mode
  6. Confirm with Enter or Ctrl+X (the former for BIOS, latter for UEFI) and wait for Anaconda to start.

Once Anaconda starts, you should be able to read the screen using your device… or not. Tell us!
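
If you want to double-check that the screen reader service itself is running (as in the screenshot above), you can do so from a shell in the installation environment, for example another virtual console or tmux window depending on the image:

$ systemctl status brltty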

Admittedly, entering the text mode is near impossible for a sighted person, but treat this as only the first step.

Future plans

In the future, hopefully the Xorg driver for brltty could be added, so that the GUI becomes somewhat accessible this way, too. That is definitely the preferred option: the text mode is heavily simplified in some areas, and getting there through the “silent” boot menu is not easy.

It is not clear yet if there needs to be some way to configure brltty. The method of doing so would involve boot options: Either directly by adding a new option – perhaps inst.brltty – or indirectly by adding a kickstart command which would require using the inst.ks boot option. However, given the inaccessibility of the boot menu, it is doubtful if this would help anything. Statistics on how many daily used braille terminals need special brltty configuration would be useful.

In the long run, it would be probably best if sound and Orca could be present on the Server images, too. That comes with its own set of problems, though. Right now the combined size of all the missing parts is a very tough proposition. The server images have been traditionally small, lean and trimmed down to fit a CD. While that is no longer strictly true, keeping the images small is still a goal.

Ultimately, neither the Anaconda developers nor Fedora maintainers and relengs are the target audience for this. If you have a stake in this matter, we would love to hear from you! You can find the Anaconda mailing list at anaconda-devel-list@redhat.com. Or just comment on this blog post.

Last but not least, let’s not forget Jaroslav Škarvada and Vojtěch Polášek who helped with this effort. Thank you!

Untitled Post

Posted by Zach Oglesby on September 15, 2021 06:33 PM

I should really uninstall Slack from my phone when I go on vacation.

How to check for update info and changelogs with rpm-ostree db

Posted by Fedora Magazine on September 15, 2021 08:00 AM

This article will teach you how to check for updates, check the changed packages, and read the changelogs with rpm-ostree db and its subcommands.

The commands will be demoed on a Fedora Silverblue installation and should work on any OS that uses rpm-ostree.

Introduction

Let’s say you are interested in immutable systems. Using a base system that is read-only while you build your use cases on top of container technology sounds very attractive, and it persuades you to select a distro that uses rpm-ostree.

You now find yourself on Fedora Silverblue (or another similar distro) and you want to check for updates. But you hit a problem. While you can find the updated packages on Fedora Silverblue with GNOME Software, you can’t actually read their changelogs. You also can’t use dnf updateinfo to read them on the command line, since there’s no DNF on the host system.

So, what should you do? Well, rpm-ostree has subcommands that can help in this situation.

Checking for updates

The first step is to check for updates. Simply run rpm-ostree upgrade --check:

$ rpm-ostree upgrade --check
...
AvailableUpdate:
Version: 34.20210905.0 (2021-09-05T20:59:47Z)
Commit: d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
GPGSignature: Valid signature by 8C5BA6990BDB26E19F2A1A801161AE6945719A39
SecAdvisories: 1 moderate
Diff: 4 upgraded

Notice that while it doesn’t list the updated packages in the output, it does show the Commit for the update: d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4. This will be useful later.

The next thing you need to do is find the Commit for the currently running deployment. Run rpm-ostree status to get the BaseCommit of the current deployment:

$ rpm-ostree status
State: idle
Deployments:
● fedora:fedora/34/x86_64/silverblue
                   Version: 34.20210904.0 (2021-09-04T19:16:37Z)
                BaseCommit: e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e
              GPGSignature: Valid signature by 8C5BA6990BDB26E19F2A1A801161AE6945719A39
       RemovedBasePackages: ...
           LayeredPackages: ...
...

For this example BaseCommit is e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e.

Now you can find the diff of the two commits with rpm-ostree db diff [commit1] [commit2]. In this command, commit1 is the BaseCommit of the current deployment and commit2 is the Commit from the upgrade check.

$ rpm-ostree db diff e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
ostree diff commit from: e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e
ostree diff commit to:   d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
Upgraded:
  soundtouch 2.1.1-6.fc34 -> 2.1.2-1.fc34

The diff output shows that soundtouch was updated and indicates the version numbers. View the changelogs by adding --changelogs to the previous command:

$ rpm-ostree db diff e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4 --changelogs
ostree diff commit from: e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e
ostree diff commit to:   d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
Upgraded:
  soundtouch 2.1.1-6.fc34.x86_64 -> 2.1.2-1.fc34.x86_64
    * dom ago 29 2021 Uwe Klotz <uwe.klotz@gmail.com> - 2.1.2-1
    - Update to new upstream version 2.1.2
      Bump version to 2.1.2 to correct incorrect version info in configure.ac

    * sex jul 23 2021 Fedora Release Engineering <releng@fedoraproject.org> - 2.1.1-7
    - Rebuilt for https://fedoraproject.org/wiki/Fedora_35_Mass_Rebuild

This output shows the commit notes as well as the version numbers.
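
To tie the steps together, the whole check can also be scripted. The following is a minimal sketch (not from the article) that extracts the two commits from the text output shown above with awk and then runs the diff; the BaseCommit: and Commit: field names are taken from that output, and the script assumes an update is actually available:

#!/usr/bin/env bash
set -euo pipefail

# Commit of the currently booted deployment
from=$(rpm-ostree status | awk '/BaseCommit:/ {print $2; exit}')
# Commit of the available update (exits non-zero if there is none)
to=$(rpm-ostree upgrade --check | awk '/Commit:/ {print $2; exit}')

# Show the changed packages together with their changelogs
rpm-ostree db diff "$from" "$to" --changelogs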

Conclusion

Using rpm-ostree db, you now have functionality equivalent to dnf check-update and dnf updateinfo.

This will come in handy if you want to inspect detailed info about the updates you install.

Cockpit 253

Posted by Cockpit Project on September 15, 2021 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 253 and cockpit-machines 252:

SELinux: Dismiss multiple alerts

SELinux alerts can now be selected and dismissed in bulk, which is simpler than having to expand and dismiss them individually. Having a clean alert state is especially helpful when building policies, as newer alerts become more obvious.

screenshot of dismiss multiple alerts

Machines: Add support for renaming VMs

The Machines page can now rename virtual machines.

screenshot of add support for renaming VMs

Try it out

Cockpit 253 and cockpit-machines 252 are available now:

The experience of writing a book

Posted by Pablo Iranzo Gómez on September 14, 2021 11:00 PM

I wanted to write about my experience (before I forget about it), and as some colleagues asked about it… here we go…

As published in the blog entry RHEL8 Administration book, some colleagues and I wrote a book on RHEL8 administration, which can be bought here.

Many years ago I started a book about Linux, but every time a new paragraph was added, a lot of new ‘TO-DO’ items were appended as the information grew… and as it was a ‘solo’ project and I had other stuff to work on, it was parked.

Late last year (2020), Miguel approached me asking if I was interested in helping him with his book. He had started it, but the schedule was a bit tight; not impossible, but having to work on the book at night, once the kids are sleeping and you might be tired from work… it was not the best. So after some thinking, I told him I was willing to help with the task, which automatically doubled the available time for each chapter.

Not all chapters were equal, I must admit; some took me more time to ‘start’. But I think it was a good experience, I learned a lot, and I think it will help others in the future.

Many times I end up spending time digging into issues, trying to find the right answer, and once it is found, it seems pretty obvious that it should have been that way from the beginning. The thing is… to realize that, you need to spend the time digging, testing, etc. And even though I try to publish some of those ‘tricks’ on the blog, I tend to think they are not helpful, so most of the time I end up not adding them.

With the book, while working on the chapters, I had time to revisit things that were obvious in my head but not for others who are in the process of learning, and even to refresh my knowledge of projects I hadn’t touched for a while.

For example, when I started working in consulting, I was doing a lot of Anaconda work for automating installations, with custom scripts to detect the hardware and write to serial ports to show data on systems without monitors. But since I moved to more ‘professional’ setups, my experience switched more to post-installation configuration using Ansible playbooks to get things done.

Let’s get back to the book itself…

After we divided the chapters, the work was more bearable and compatible with the time we had available, so the focus was on working on the chapters. So, since January, I was enrolled in this effort :-)

As Miguel had an outline of the topics to cover for each chapter, it was easy to use the freedom we had to cover them and the examples, always keeping in sight the focus on the tasks a system administrator would perform at the RHCSA level. One of the metrics we had was the expected page count, but it was flexible: some chapters were bigger than expected and some others smaller, depending on the topic, and well… we finally exceeded the expected page count by about 100 pages.

On the tech side, I’m used to Markdown and have used it in several roles in my career, and lately I was using Asciidoctor. I’m not a big expert in Asciidoctor, but it has many features that I felt would be useful for the book. However, to my surprise, the tool of choice was a word processor with some custom styling. I understand that using a regular word processor makes it easier for other writers, and having version control was also useful, but it still felt a bit strange to me, as I was used to code reviews, which are easy to perform on text files, and to doing cross-references between different documents. Even when working with SuSE on the manuals, the choice was LaTeX, which was harder but allowed rich features in the final rendering.

On the other side, the Packt team provided good guidance for starting:

  • Styling guide with some example documents demonstrating each style usage
  • Shared drive for uploading the files and work on reviews
  • And a lot of support for the questions that arose

And, most invaluable of all: guidance from several members of the team about the styles to use, tense, writing style, etc.

The process was quite fast: once each chapter was submitted, within a few days or a week there were some reviews, and later a technical reviewer did the same, providing a feedback form on the chapter and some other things to fix or improve.

In the end, going chapter after chapter, it went by quickly… my biggest pain points were the chapters where I felt there was nothing else to add; writing just to increase the page count was not something we had in mind, and of course the Packt team supported us in that decision.

I think the hardest chapters were the two with exercises… the book has two knowledge-check chapters, and once the first was done, thinking of a second without repeating much of the material in the first one was not easy.

June arrived, all the chapters were delivered, and some reviews were still ongoing on the last chapters, so the book was almost finished and ready for the last steps.

The last stone in the road meant little effective work for us, as it was mostly adjusting some of the wording and paperwork, but once we got clearance in September (close to three months later), we were able to move into the final stage.

Looking back, it has been a good experience, one that shook up the way I was writing before (writing Knowledge Base articles requires a different styling and phrasing), going more direct and engaging more with the reader. Very positive overall.

Finally, when everything was done, another member of the Packt team started over with the book, doing quality assurance and checking the content, clarity, etc., like a second technical reviewer, before handing it over to the marketing team to work on the promotion of the book on social media.

Will I do this again? Of course!

Everybody Struggles

Posted by Fedora Community Blog on September 14, 2021 05:17 PM

Everybody struggles in life every now and then. Struggling is a part of life. It doesn’t matter what other people think or say; your personal struggle depends on your own perspective. We’re all pretty lazy when it comes down to it. If you’re struggling, it’s because you need to do more than you’re doing currently. But doing more of the wrong type of activity will only make you suffer more.

This is the third week of my Outreachy internship. When I first contributed to my project I was very nervous and didn’t even have the confidence that I could make it, and being a beginner made me even more nervous. After making 5-6 contributions to Improve Fedora QA dashboard, I tested positive for COVID. I was completely shattered and lost any hope of clearing Outreachy, but fortunately those 5-6 contributions of mine made a significant difference and gave a good impression of my development skill set, and finally my struggle paid off. The contribution round proved to be fruitful for me. I gained lots of confidence and learned how to google a bit 🙂

Then, during the internship, I had my first video call meeting with my mentors. It was a pleasurable experience to meet them other than by email or chat. I completed my first task successfully with proper guidance from my mentors, but it was quite excruciating when I moved to my second task. I had to make a whole page from scratch, and initially I was literally afraid of whether I could do it or not; many doubts were roaming around my head.

But eventually I completed it, with a lot of struggle, and after making the whole page from scratch we had a discussion that we should give the issue a try using another way. So I built the same page twice, but with different approaches. I had to google lots of stuff to learn how to implement their approach. But let me put one thing here: there are no shortcuts in life. You have to put in effort every day.

I was quite mad at first, thinking why I had to do the same page from scratch, but later I realized that I can now at least not call myself a beginner. I feel confident now, and I gained so much, which is worthy enough. Apart from my struggle, what’s most important is that I learned a lot.


The post Everybody Struggles appeared first on Fedora Community Blog.

Unlocking the bootloader and disabling dm-verity on Android-X86 devices

Posted by Hans de Goede on September 14, 2021 04:12 PM
For the hw-enablement for Bay- and Cherry-Trail devices which I do as a side project, sometimes it is useful to play with the Android which comes pre-installed on some of these devices.

Sometimes the Android-X86 boot-loader (kernelflinger) is locked and the standard "Developer-Options" -> "Enable OEM Unlock" -> "Run 'fastboot oem unlock'" sequence does not work (e.g. I got the unlock yes/no dialog and could move between yes and no, but I could not actually confirm the choice).

Luckily there is an alternative: kernelflinger checks an "OEMLock" EFI variable to see if the device is locked or not. As with some of my previous adventures changing hidden BIOS settings, this EFI variable is hidden from the OS as soon as the OS calls ExitBootServices, but we can use the same modified grub to change it. After booting from a USB stick with the relevant grub binary installed as "EFI/BOOT/BOOTX64.EFI" or "BOOTIA32.EFI", entering the following command on the grub cmdline will unlock the bootloader:

setup_var_cv OEMLock 0 1 1

Disabling dm-verity support is pretty easy on these devices because they can just boot a regular Linux distro from a USB drive. Note that booting a regular Linux distro may cause the Android "system" partition to get auto-mounted, after which dm-verity checks will fail! Once we have a regular Linux distro running, step 1 is to find out which partition is the android_boot partition. To do this, as root run:

blkid /dev/mmcblk?p#

Replace the ? with the mmcblk number of the internal eMMC and try 1 to n for #, until one of the partitions is reported as having 'PARTLABEL="android_boot"'. Usually "mmcblk?p3" is the one you want, so you could try that first.

Now make an image of the partition by running e.g.:

dd if=/dev/mmcblk1p3" of=android_boot.img

And then copy the "android_boot.img" file to another computer. On this computer extract the file and then the initrd like this:

abootimg -x android_boot.img
mkdir initrd
cd initrd
zcat ../initrd.img | cpio -i


Now edit the fstab file and remove "verify" from the line for the system partition (a hypothetical before/after example follows the commands below). After this, update android_boot.img like this:

find . | cpio -o -c -R 0.0 | gzip -9 > ../initrd.img
cd ..
abootimg -u android_boot.img -r initrd.img
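
For reference, the fstab line edited above might look something like the following; this is a hypothetical example, and the actual device path and flags differ per image:

Before:
/dev/block/by-name/system  /system  ext4  ro  wait,verify
After:
/dev/block/by-name/system  /system  ext4  ro  wait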


The easiest way to test the new image is using fastboot. Boot the tablet into Android, connect it to the PC, and then run:

adb reboot bootloader
fastboot boot android_boot.img


Then, from an "adb shell", run "cat /fstab" and verify that the "verify" option is gone now. After this you can (optionally) dd the new android_boot.img back to the android_boot partition to make the change permanent.

Note: if Android is not booting, you can force the bootloader to enter fastboot mode on the next boot by downloading this file and then, under regular Linux, running the following command as root:

cat LoaderEntryOneShot > /sys/firmware/efi/efivars/LoaderEntryOneShot-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f

What I learned from Russian students: logging is important

Posted by Peter Czanik on September 14, 2021 11:17 AM

When I published my blog about openSUSE a couple of weeks ago, most questions I received in private were about the Russian students I mentioned. In that blog I quickly described how my interest in information security started, about 25 years ago. This blog gives you a bit of historical background and a few more details.

Historical background

It was 1995. I was studying at a university, but I was already running one of the faculty’s servers. It was a Linux box, and I also helped to run a FreeBSD server hosting the faculty web site. It was just three years after the Soviet army finally left Hungary. Our university had many students from Russia. While Hungarian students could attend the university for free, Russian students had to pay for their studies. As they were paying a lot, they could do anything; nobody punished their activities. And they did a lot of things, as they felt that they could still do anything in the ‘colonies’.

It was 1995: there was no Internet yet in the student dormitories. There was no Gmail or any similar provider yet. Not even teachers received e-mail addresses automatically. Even if some people had computers at home, there was no Internet access from home yet. Students could access servers from computer labs at the university. The Russian students had their own computer lab, where nobody else was allowed to enter.

It was 1995, the fifth consecutive year that funding was taken away from higher education. This meant that faculties started to charge other faculties for their services. Russian students belonged to another faculty, so they could not get a user name on our servers.

Infosec is overrated

By that time, even though I was running a couple of servers, I was just the same as the vast majority of users even today. I mean, I thought that information security was overrated and that ease of use and comfort were a lot more important. It did not help either that most of the commonly used protocols were not encrypted, like telnet, ftp, rsh and others. Even these protocols were often difficult to use from Windows machines. I was learning Linux and FreeBSD, and I was enabling all kinds of services. Using rsh between the two faculty UNIX servers was fun.

Logging is important

I checked the log files of the servers I managed occasionally, but mostly only to check whether the hard drives were showing any signs of failure. While browsing the logs for hard drive errors, I came across some suspicious login messages: logins from previously unseen IP addresses. I knew that the addresses were from campus, so I asked around. It turned out that they belonged to the Russian students’ laboratory. And talking to the user, it turned out that he was unaware that his account was also being used by someone else.

The exact order of events is kind of blurry; it was a quarter of a century ago. I started to check log messages not just for hard drive problems but also for security-related events. I could see more and more logins from the Russian students’ laboratory. It was a cat-and-mouse game: I was trying to keep unauthorized users out of the system, and they kept coming back and started to do nasty things. Along the way I learned a lot about security:

  • Network sniffing: most of the university had a BNC network and was using hubs instead of switches. Combine these with non-encrypted protocols…
  • Keyboard loggers
  • Black market. Access for students of our faculty was free, they just had to ask for it. Sometimes minutes after they received access, there was a login from the Russian lab. Accounts on my servers had a good price…
  • Denial of Service: they tried all kinds of DoS attacks, like fork bombs, too many logins, etc.
  • Stepping stone for further attacks, which earned me some not-so-kind e-mails asking for an explanation

Turning on a firewall could have been an easy way out, but seeing the IP addresses of the Russian lab in the system logs was the perfect indicator of a compromised account. Such an account got quickly disabled, either for life (see black market) or until a password change. In the second case I tried to investigate how the password was stolen, and of course gave a quick lesson in security awareness. Showing my log messages, I tried to ask for some help to stop the Russian students, but as I was just a first-year student and the Russian students were paying: nobody cared.

Next steps

After so many years I do not recall any more how I got the hint, but it was suggested that I visit the Russian students’ computer lab. I was not supposed to enter there, but as they were messing with my servers, I did not care. The door was open, I walked in and looked around. The /etc/passwd file of my Linux box was printed out on the wall. Even though the passwords in it were encrypted, it still contained them. As also described in my openSUSE blog, this was the final push towards information security.

FreeBSD already kept passwords separate from the user-readable passwd file, so I knew the concept. I looked around and found that a Linux distribution called Jurix had shadow passwords. It was a brand new thing in the Linux world at that time. I quickly migrated my Linux server to Jurix and did all kinds of hardening along the way. I removed all non-essential services, like rsh. Even if most users kept using telnet and other insecure services, I started to use SSH, which had just been released.

When the Russian students realized that they could not get into my servers easily any more, they even tried to bribe me for access – with a counterfeit gaming CD for Windows :-)

Epilogue

As you can see, I ended up on the defender side. I did lots of security hardening and built systems that ran securely even years after I abandoned them. Logging still plays an important role in my life: I work with syslog-ng. The Russian students were a major PITA at that time, but I learned a lot about security while trying to keep them out of the servers I managed.

opensource.com: What was your first programming language?

Posted by Peter Czanik on September 13, 2021 12:45 PM

A couple of weeks ago the editors of https://opensource.com/ sent a question to contributors: What was your first programming language? Thinking about the question brought back some nice memories of the beginnings. You can read my answer below:


What was your first programming language?

My first ever programming language was BASIC, in the early eighties. One of my relatives bought a C64 for their kids to get started with learning computers. They only used it for gaming, and I was also invited. But they also had a book about BASIC, and I was curious and gave it a try. I wrote some short code; I did not even know how to save it, but it was exciting to see that the computer did what I told it to do. This means that I was not paid to learn it, and it was not my choice; it was the language available to me. Obviously, when I got my first computer a few years later, an XT-compatible box, I first wrote some code in GW-BASIC, the dialect of BASIC available with DOS.

What happened next?

The first programming language I really chose was Pascal. I asked around, checked some books, and it seemed to be a good compromise between features and difficulty. First it was Turbo Pascal, and I coded all kinds of simple games and graphics in it. I loved Pascal, so in my university years I even used it (well, FreePascal and Lazarus) for measurement automation and for modeling how pollution spreads in groundwater.


You can read the rest of the answers at https://opensource.com/article/21/8/first-programming-language

The syslog-ng insider 2021-07: Alerting; CentOS alternatives; MongoDB;

Posted by Peter Czanik on September 13, 2021 12:36 PM

Better late than never: I just put the July syslog-ng newsletter online. Topics include:

  • Sending alerts to Discord and others from syslog-ng using Apprise: blocks and Python templates
  • Rocky Linux, AlmaLinux, CentOS & syslog-ng
  • MongoDB support improved in syslog-ng 3.32

It is available at https://www.syslog-ng.com/community/b/blog/posts/insider-2021-07-alerting-centos-alternatives-mongodb

Next Open NeuroFedora meeting: 13 September 1300 UTC

Posted by The NeuroFedora Blog on September 13, 2021 10:53 AM
Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 13 September at 1300 UTC in #fedora-neuro on IRC (Libera.chat). The meeting is a public meeting, open for everyone to attend.

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 today'
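
For example, in the US Eastern timezone this prints something like the following (the exact format depends on your locale):

Mon Sep 13 09:00:00 AM EDT 2021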

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

Stuart D Gathman: How do you Fedora?

Posted by Fedora Magazine on September 13, 2021 08:00 AM

We recently interviewed Fedora user Stuart D. Gathman on how he uses Fedora Linux. This is part of a series on Fedora Magazine where we profile users and how they use Fedora Linux to get things done. If you are interested in being interviewed for a further installment in this series, you can contact us on the feedback form.

Who are you and what do you do?

For 35 years, Stuart worked as a system programmer for a small company, where his projects included database servers, device drivers, protocol stacks, expert systems, accounting systems, aged AR/AP reports, and EDI. Currently, he does hourly consulting work for small businesses.

Stuart’s childhood heroes were his dad and George Müller. His favorite movies are “The Gods Must Be Crazy” and “The Mission”. He grew up in a pacifist denomination, so he feels “The Mission” is very relevant to him. He loves oven-roasted vegetables.

Composing and performing music, mesh networking, and refurbishing discarded computers to run Fedora Linux are some of his spare-time interests, as well as history, especially ancient Western and 19th-century English/American history.

“Love/charity, Hope, Faith, Virtue, and knowledge” are the five qualities someone should possess, according to Stuart.

Fedora Community

Stuart’s first Linux was Red Hat Linux 3 in 1996. ”At first, I wasn’t sure how free it was going to work out but I took the plunge with Fedora 8 and was very pleased with the quality and stability. It was so nice to have more recent applications already packaged that required a lot of effort on my part to package for Red Hat Linux. One doesn’t need to be a programmer to contribute. There are other skills that go into making a great Linux distro. The most important skill that can be learned is how to submit a useful bug report. I honestly love the way Fedora Linux is organized, as is, and I keep hoping my Pinebook (ARM) onboard wifi and mic will start working with the next kernel release”, says Stuart.

The one person who influenced him to contribute to Fedora Linux was Caleb James DeLisle. Stuart had been building local RHL, RHEL, and Fedora Linux packages for personal and work repositories since 1996, but hadn’t taken the plunge to become an official Fedora packager. In 2015, he began researching how to decentralize network connections, and on Mar 22, 2016, his Fedora Cjdns package was officially approved! After that, he added more packages supporting decentralized communication, like OpenAS2, and more are on the way.

Some of the skills Stuart uses in his work are building software from source, RPM packaging (similar to programming), understanding and following Fedora Linux packaging guidelines, and learning unfamiliar programming languages sufficiently to build their applications from the source.

Stuart’s suggestions to newbies who want to become involved in the Fedora project are: “First, learn to file bug reports then, extend Fedora for your own use, write cheat sheets, create a diagram to help you and others understand an application or utility, add a cool photo to your desktop backgrounds, make your own sound effects, develop “best practices” for using applications and utilities for a particular job or project. Write about it for Fedora Magazine. Outline how you would explain the benefits of Fedora to various audiences”. His biggest concern is that Fedora Linux might fall victim to politics unrelated to the project goals.

What hardware do you use?

Stuart is currently running a second-hand Dell Optiplex 790 desktop, a Raspberry Pi 3, and a Pinebook (ARM) small-form-factor notebook with an all-day battery, the latter two bought new. He also runs a Dell Poweredge T310 VM host running Fedora Linux in virtual machines, a Dell Poweredge SC440, a Dell Inspiron 1440, and a Thinkpad T61 (all of which were being discarded), as well as a refurbished Dell Latitude 3570 laptop. The Inspiron 1440 is his favorite as it is so comfortable to use, but its Core 2 Duo does not support hardware virtualization.

What software do you use?

Stuart is currently running Fedora Linux 33 and Fedora Linux 34. He wrote an article for Fedora Magazine about using LVM for system upgrades, which allows easy booting between versions for testing new releases.

Stuart relies on a suite of applications in his work:

  • VYM for quickly building outlines
  • Nheko for messaging
  • Glabels for business cards
  • Gkrellm

Contribute at Fedora Linux 35 GNOME 41 Test Day

Posted by Fedora Community Blog on September 13, 2021 08:00 AM
Test gnome test week

Wednesday, 2021-09-09 is the Fedora 35 GNOME Test Day! As part of the GNOME 41 changes in Fedora Linux 35, we need your help to test that everything runs smoothly!

Why Gnome Test Day?

We try to make sure that all the GNOME features are performing as they should. The test day is there to see whether things are working well enough and to catch any remaining issues. It’s also pretty easy to join in: all you’ll need is Fedora Linux 35 (which you can grab from the wiki page).

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Libera IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Contribute at Fedora Linux 35 GNOME 41 Test Day appeared first on Fedora Community Blog.

Episode 288 – Linux Kernel compiler warnings considered dangerous

Posted by Josh Bressers on September 13, 2021 12:01 AM

Josh and Kurt talk about some happenings in the Linux kernel. There are some new rules around how to submit patches that go against how GitHub works. They’re also turning all compiler warnings into errors. It’s really interesting to understand what these steps mean today, and what they could mean in the future.

https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_288_Linux_Kernel_compiler_warnings_considered_dangerous.mp3

Show Notes

“[Book] Red Hat Enterprise Linux 8 Administration”

Posted by Pablo Iranzo Gómez on September 10, 2021 11:00 PM

After some time working on it (about six months for the main work and some more time for the reviews) with my colleagues Miguel and Scott, we’ve finally made it, thanks to the support of our families and Packt, as well as several members of Red Hat teams who gave the clearance to get it out!

The book targets users who want to learn the skills to administer Red Hat Enterprise Linux or compatible systems. It is a hands-on guide to administration and can be used as a reference thanks to the real-life examples provided throughout the text.

It also features two chapters dedicated to exercises for checking the knowledge acquired in the book.

Red Hat Enterprise Linux 8 Administration cover

Available at Amazon: Red Hat Enterprise Linux 8 Administration

Hope you like it!

Friday’s Fedora Facts: 2021-36

Posted by Fedora Community Blog on September 10, 2021 06:36 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference       Location            Date       CfP
Ohio Linux Fest  Columbus, OH, US    early Dec  closes 1 Oct
DevConf.CZ       Brno, CZ & virtual  28–29 Jan  closes 24 Oct
SCaLE            Pasadena, CA, US    5–8 Mar    closes 30 Nov

Help wanted

Upcoming test days

Prioritized Bugs

Upcoming meetings

Releases

Release            open bugs
F33                5208
F34                4715
F35 (pre-release)  1114
F36 (rawhide)      5980

Fedora Linux 35

Schedule

  • 2021-09-21 — Beta release target #1
  • 2021-10-05 — Final freeze begins
  • 2021-10-19 — Early Final target date
  • 2021-10-22 — Final target date #1

For the full schedule, see the schedule website.

Changes

See the ChangeSet page or Bugzilla for information on approved Changes.

BZ Status  Count
ASSIGNED   1
MODIFIED   2
ON_QA      41

Blockers

Bug ID   Component         Bug Status  Blocker Status
1989726  mesa              NEW         Accepted (Beta)
1999321  freeipa           NEW         Accepted (Beta)
1999180  bcm283x-firmware  ASSIGNED    Accepted (Beta)
1996998  gdm               NEW         Accepted (Beta)
2002609  PackageKit        NEW         Accepted (Beta)
1997310  selinux-policy    ASSIGNED    Accepted (Beta)

Fedora Linux 36

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal  Type  Status

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-36 appeared first on Fedora Community Blog.

MAKE MORE with Inkscape – Ink/Stitch

Posted by Fedora Magazine on September 10, 2021 08:00 AM

Inkscape, the most used and loved tool of Fedora’s Design Team, is not just a program for doing nice vector graphics. With vector graphics (in our case SVG) a lot more can be done. Many programs can import this format. Also, Inkscape can do a lot more than just graphics. The first article of this series showed how to produce GCode with Inkscape. This article will examine another Inkscape extension – Ink/Stitch. Ink/Stitch is an extension for designing embroidery with Inkscape.

DIY Embroidery

In the last few years the do-it-yourself or maker scene has experienced a boom. You could say it all began with the inexpensive option of 3D printing, followed by similarly affordable CNC machines and laser cutters/engravers. The prices for more traditional machines, such as embroidery machines, have also fallen during recent years. Home embroidery machines are now available for 500 US dollars.

If you don’t want to or can’t buy one yourself, the nearest MakerSpace often has one. Even the prices for commercial single-head embroidery machines are down to 5,000 US dollars. They are an investment that can pay off quickly.

Software for Embroidery Design

Some of the home machines include their own software for designing embroidery. But most, if not all, of these applications are Windows-only. Also, the most used manufacturer-independent software in this area – Embird – is only available for Windows. But you could run it in Wine.

Another solution for Linux – Embroidermodder – is not really developed anymore, despite having had a fundraising campaign.

Today, only one solution is left – Ink/Stitch.

The logo of the Ink/Stitch project

Open Source and Embroidery Design

Ink/Stitch started out using libembroidery; today pyembroidery is used. The manufacturers can’t really be blamed: given the prices of these machines and the number of Linux users, it is hardly worthwhile for them to develop applications for Linux.

The Embroidery File Format Problem


There is a problem with the proliferation of file formats for embroidery machines, especially since manufacturers tend to cook up their own file formats for their machines. In some cases, even a single manufacturer may use several different file formats.

  • .10o – Toyota embroidery machines
  • .100 – Toyota embroidery machines
  • .CSD – Poem, Huskygram, and Singer EU embroidery home sewing machines.
  • .DSB – Barudan embroidery machines
  • .JEF – MemoryCraft 10000 machines.
  • .SEW – MemoryCraft 5700, 8000, and 9000 machines.
  • .PES – Brother and Babylock embroidery home sewing machines.
  • .PEC – Brother and Babylock embroidery home sewing machines.
  • .HUS – Husqvarna/Viking embroidery home sewing machines.
  • .PCS – Pfaff embroidery home sewing machines.
  • .VIP – old Pfaff format also used by Husqvarna machines.
  • .VP3 – newer Pfaff embroidery home sewing machines.
  • .DST – Tajima commercial embroidery sewing machines.
  • .EXP – Melco commercial embroidery sewing machines.
  • .XXX – Compucon, Singer embroidery home sewing machines.
  • .ZSK – ZSK machines on the American market

This is just a small selection of the file formats that are available for embroidery. You can find a more complete list here. If you are interested in deeper knowledge about these file formats, see here for more information.

File Formats of Ink/Stitch

Ink/Stitch can currently read the following file formats: 100, 10o, BRO, DAT, DSB, DST, DSZ, EMD, EXP, EXY, FXY, GT, INB, JEF, JPX, KSM, MAX, MIT, NEW, PCD, PCM, PCQ, PCS, PEC, PES, PHB, PHC, SEW, SHV, STC, STX, TAP, TBF, U01, VP3, XXX, ZXY and also GCode as TXT file.

For the more important task of writing/saving your work, Ink/Stitch supports far fewer formats: DST, EXP, JEF, PEC, PES, U01, VP3 and of course SVG, CSV, and GCode as TXT.

Besides the problem of all these file formats, there are other problems that a potential stitch program has to overcome.

Working with the different kinds of stitches is one difficulty. The integration of tools for drawing and lettering is another. But why invent such a thing from scratch? Why not take an existing vector program and just add the functions for embroidery to it? That was the idea behind the Ink/Stitch project over three years ago.

Install Ink/Stitch

Ink/Stitch is an extension for Inkscape. Inkscape’s new functionality for downloading and installing extensions is still experimental, and you will not find Ink/Stitch among the extensions offered there. You must download the extension manually. After it is downloaded, unzip the package into your directory for Inkscape extensions. The default location is ~/.config/inkscape/extensions (or /usr/share/inkscape/extensions for system-wide availability). If you have changed the defaults, you may need to check Inkscape’s settings to find the location of the extensions directory.
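
On the command line, that could look something like this (a minimal sketch; the file name of the downloaded ZIP is just an example):

$ mkdir -p ~/.config/inkscape/extensions
$ unzip ~/Downloads/inkstitch-v2.0.0-linux.zip -d ~/.config/inkscape/extensions/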

Customization – Install Add-ons for Ink/Stitch

The Ink/Stitch extension provides a function called Install Add-Ons for Inkscape, which you should run first.

The execution of this function – Extensions > Ink/Stitch > Thread Color Management > Install thread color palettes for Inkscape – will take a while.

Do not become nervous: there is no progress bar or similar indicator to watch.

This function will install 70 color palettes of various yarn manufacturers and a symbol library for Ink/Stitch.

Inkscape with the Swatches dialog open, showing the Madeira Rayon color palette

If you use the download from GitHub (version 2.0.0), the ZIP file contains the color palette files. You only need to unpack them into the right directory (~/.config/inkscape/palettes/). If you need a hoop template, you can download one and save it to ~/.config/inkscape/templates.

The next time you start Inkscape, you will find it under File > New From Template.

Lettering with Ink/Stitch

By far the easiest and most widely used way to get an embroidery design is the Lettering function of Ink/Stitch. It is located under Extensions > Ink/Stitch > Lettering. Lettering for embroidery is not simple. What you expect are so-called satin-stitched letters, and special font settings are needed for this.

Inkscape with a “Chopin” glyph for satin stitching defined for the Lettering function

You can convert paths to satin stitching, but this is more work-intensive than using the Lettering function. Thanks to the work of an active community, the May 2021 release of Ink/Stitch 2.0 brought more predefined fonts for this. An English tutorial on how to create such fonts can be found here.

Version 2.0 also brings functions (Extensions > Ink/Stitch > Font Management) to make managing these kinds of fonts easier. There are also functions for creating them, but you will need knowledge of font design with Inkscape to do so. First you create an entire SVG font. It is then fed through a JSON script which converts the SVG font into the type of files that Ink/Stitch’s font management function works with.

On the left, the Lettering dialog; on the right, the preview of these settings

The function will open a dialogue window where you just have to put in your text, choose the size and font, and then it will render a preview.

Embroider Areas/Path-Objects

The easiest thing to do with Ink/Stitch is to embroider areas or paths. Just draw your path. If you use shapes, you have to convert them first and then run Extensions > Ink/Stitch > Fill Tools > Break Apart Fill Objects…

This breaks apart the path into its different parts. You have to use this function; the Path > Break Apart function of Inkscape won’t work for this.

Next, you can run Ink/Stitch’s built-in simulator: Extensions > Ink/Stitch > Visualise and Export > Simulator/Realistic Preview.

The new Fedora logo as Stitch Plan Preview

Be careful with the simulator: it takes a lot of system resources and it will take a while to start. You may find it easier to use the function Extensions > Ink/Stitch > Visualise and Export > Stitch Plan Preview. The latter renders the threading of the embroidery outside of the document.

Nicubunu’s Fedora hat icon as embroidery. The angles for the stitches of the head part and the brim are different so that it looks more realistic. The outline is done in satin stitching.

Simple Satin and Satin Embroidery

Ink/Stitch will convert each stroke with a continuous line (no dashes) to what it calls Zig-Zag or Simple Satin. Stitches are created along the path using the stroke width you have specified. This will work as long as there aren’t too many curves on the path.

Parameter settings dialog and, on the right, the Fedora logo shape embroidered as a Zig-Zag line

This is simple, but it is by far not the best way. It is better to use the Satin Tools for this. The functions for satin embroidery can be found under Extensions > Satin Tools. The most important is the conversion function, which converts paths to satin strokes.

Fedora logo shape as Satin Line embroidery

You can also reverse the stitch direction using Extensions > Satin Tools > Flip Satin Column Rails. This underlines the 3D effect satin embroidery gets, especially when you do puff embroidery. For machines that have this capability, you can also set the markings for trims of jump stitches. To visualize these trims, Ink/Stitch uses the symbols that were installed from its own symbol library.

The Ink/Stitch Stitch Library

What is called the stitch library is simply the kinds of stitches that Ink/Stitch can create. The Fill Stitch and Zig-Zag/Satin Stitch have already been introduced. But there are more.

  • Running Stitches: These are used for outline designs. The running stitch produces a series of small stitches following a line or curve. Each dashed line will be converted into a running stitch. The size of the dashes does not matter.
A running stitch – each dashed line will be converted into one
  • Bean Stitches: These can also be used for outline designs or to add details to a design. The bean stitch is a repetition of running stitches back and forth. This results in thicker threading.
Bean stitches – creating a thicker line
  • Manual Stitch: In this mode, Ink/Stitch will use each node of a path as a needle penetration point; exactly as they are placed.
In manual mode, each node will be a needle penetration point
  • E-Stitch: The main use for e-stitch is as a simple but strong cover stitch for appliqué items. It is often used for baby clothes because their skin tends to be more sensitive.
E-Stitch – a soft but still strong connection, often used for appliqué on baby clothes

Embroidery Thread List

Some embroidery machines (especially those designed for commercial use) allow different threads to be fitted in advance according to what will be needed for the design. These machines will automatically switch to the right thread when needed. Some file formats for embroidery support this feature. But some do not. Ink/Stitch can apply custom thread lists to an embroidery design.

If you want to work on an existing design, you can import a thread list via Extensions > Ink/Stitch > Import Threadlist. Thread lists can also be exported, by using Save As and choosing one of the file formats that is exported as a *.zip. You can also print them: Extensions > Ink/Stitch > Visualise and Export > Print PDF.

Conclusion

Writing software for embroidery design is not easy. Many functions are needed and diverse (sometimes closed-source) file formats make the task difficult. Ink/Stitch has managed to create a useful tool with many functions. It enables the user to get started with basic embroidery design. Some things could be done a little better. But it is definitely a good tool as-is and I expect that it will become better over time. Machine embroidery can be an interesting hobby and with Ink/Stitch the Fedora Linux user can begin designing breathtaking things.

Call for Projects and Mentors for Outreachy December-March Cohort

Posted by Fedora Community Blog on September 10, 2021 08:00 AM

Fedora will be participating in the upcoming round of Outreachy (December 2021-March 2022) and we are looking for more projects and mentors!

Being a community of diverse people from various backgrounds and different walks of life, the Fedora Project has been participating as a mentoring organization for Outreachy internships for years. The Outreachy program is instrumental in providing a rich experience in working with free and open-source software. Fedora is a proud participant.

If you have a project idea for the upcoming round of Outreachy, open a ticket in the mentored projects repository. Even if you don’t have a project idea, you can volunteer to be a mentor for a project. As a mentor, you will guide interns through the completion of the project. We are also looking for general mentors for the facilitation of proper communication of feedback and evaluation with the interns working on the selected projects.

Please submit your project ideas and mentorship availability ASAP. The deadline for submitting projects to Outreachy is September 29th (updated from September 23rd). The Mentored Projects Coordinators will review your ideas and help you prepare your project proposal to be submitted to Outreachy.

Mentoring can be a fulfilling pursuit. It is beneficial for you, the intern and applicants, the Fedora Project, and the overall open source ecosystem. Join us in fostering the growth of our community and the love of open source!

The post Call for Projects and Mentors for Outreachy December-March Cohort appeared first on Fedora Community Blog.

PHP version 7.4.24RC1 and 8.0.11RC1

Posted by Remi Collet on September 10, 2021 05:21 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests, and also as base packages.

RPM of PHP version 8.0.11RC1 are available as SCL in remi-test repository and as base packages in the remi-php80-test repository for Fedora 33-34 and Enterprise Linux.

RPM of PHP version 7.4.24RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 33-34 or remi-php74-test repository for Enterprise Linux.

PHP version 7.3 is now in security mode only, so no more RC will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 8.0 as Software Collection:

yum --enablerepo=remi-test install php80

Parallel installation of version 7.4 as Software Collection:

yum --enablerepo=remi-test install php74

Update of system version 8.0:

yum --enablerepo=remi-php80,remi-php80-test update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf --enablerepo=remi-modular-test update php\*

Update of system version 7.4:

yum --enablerepo=remi-php74,remi-php74-test update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf --enablerepo=remi-modular-test update php\*
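
Either way, a quick sanity check afterwards confirms which version is actually in use (these check commands are a sketch, not from the original post):

php --version                         # system / base packages
scl enable php80 -- php --version     # Software Collection (php80)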

Notice: version 8.0.9RC1 is also in Fedora rawhide for QA.

The oci8 extension now uses Oracle Client version 21.3.

EL-8 packages are built using RHEL-8.4.

EL-7 packages are built using RHEL-7.9.

The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Version 8.1.0RC1 is also available.

Software Collections (php74, php80)

Base packages (php)

[F35] Take part in a busy test week: kernel 5.14, GNOME 41, audio, and internationalization

Posted by Charles-Antoine Couret on September 09, 2021 10:57 PM

This week in early September is dedicated to several tests: kernel 5.14, GNOME 41, audio, and internationalization. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to surface as many problems as possible.

It also provides a list of specific tests to perform. All you have to do is follow them, compare your result with the expected result, and report it.

What does this test week consist of?

We are close to the release of the Fedora 35 Beta. Many new features are well advanced in their development and need to be made reliable before the final version, which will be released at the end of October.

The tests for the kernel are:

For GNOME 41 they consist of:

  • Detection of the Fedora upgrade by GNOME Software;
  • Locking and unlocking the screen;
  • Correct operation of the Web browser, Maps, Music, Disks, and the Terminal;
  • Logging in and out and switching users;
  • Sound, in particular detecting when earphones or headsets are plugged in or unplugged;
  • Overall desktop operation: the Activities view, settings, extensions;
  • The ability to launch graphical applications from the menu.

For audio, this covers:

  • Checking that PipeWire is indeed used by default;
  • Applications that use audio, with ALSA or JACK as the backend;
  • Sound configuration in the GNOME control center and in pavucontrol;
  • Management of audio devices, including over Bluetooth.

Finally, for internationalization it covers:

  • Correct operation of ibus for keyboard input handling;
  • Font customization;
  • Automatic installation of language packs for installed software according to the system language;
  • Working default translations of applications;
  • The new language-pack dependencies that pull in the necessary fonts and input methods.

How can you take part?

The schedule for this week is as follows:

Visit these pages on the day, follow the instructions, and report your results there.

If you need help while running the tests, do not hesitate to drop by IRC to get a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Libera server.

If you find a bug, it needs to be reported on Bugzilla. If you do not know how, feel free to consult the corresponding documentation.

Also, even though a specific week is dedicated to these tests, it is perfectly possible to run them a few days later! The results will still be broadly relevant.

Figuring out the container runtime you are in

Posted by Roland Wolters on September 09, 2021 10:17 PM

Containers are meant to keep processes contained. But there are ways to gather information about the host – like the actual execution environment you are running in.

Containers are pretty good at keeping everyone and everything inside their boundaries – thanks to SELinux, namespaces and so on. But they are not perfect. Thanks to a recent Azure security flaw I was made aware of a nice trick via the /proc/ file system to figure out what container runtime the container is running in.

The idea is that the started container inherits some /proc/ entries – among them the entry for /proc/self/. If we are able to load a malicious container, we can use this knowledge to execute the container host’s binary and thereby get information about the runtime.

As an example, let’s take a Fedora container image with podman (and all of its library dependencies) installed. We can run it and check the version of crun inside:

❯ podman run --rm podman:v3.2.0 crun --version
crun version 0.20.1
commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL

Note that I take an older version here to better highlight the difference between host and container binary!

If we now change the execution to /proc/self/exe, which points to the host binary, the result shows a different version:

❯ podman run --rm podman:v3.2.0 /proc/self/exe --version
crun version 1.0
commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL

This is actually the version string of the binary on the host system:

❯ /usr/bin/crun --version
crun version 1.0
commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL

With this nice little feature we now have insight into the version used – and as shown in the detailed write-up of the Azure security problem mentioned above, this insight can be crucial for identifying and using the right CVEs to start lateral movement.

Of course this requires us to be able to load containers of our choice, and to have the right libraries inside the container. This example is simplified because I knew about the target system and there was a container available with everything I needed. For a more realistic attack container image, check out whoc.

I’m looking for YOUR stories from Fedora history

Posted by Fedora Community Blog on September 09, 2021 06:27 PM

Hey everyone! In a couple of weeks, I’m going to be giving a talk at Open Source Summit called “35 Fedora Releases in 30 minutes“.

This is basically going to be a whirlwind tour of our history. I was there for a lot of it, but not all, and certainly not from all perspectives. In preparation, I’d like to get more of your input. If you’re interested in sharing what you remember about Fedora Core 3 (Heidelberg), or Fedora Linux 8 (Werewolf!), or even F23 or F27 or whatever, or anywhere in between, I’d love to hear from you.

If you’ve got something interesting to share, please contact me. Reply to this post, send me a message to @mattdm:fedora.im on matrix, send email, tweet at me, etc. We can set up a video call, or you can just send ideas, or whatever.

The post I’m looking for YOUR stories from Fedora history appeared first on Fedora Community Blog.

Contribute at the Fedora Linux 35 Test Week for Kernel 5.14

Posted by Fedora Community Blog on September 09, 2021 08:00 AM
Fedora Linux 35 Kernel 5.14

The kernel team is working on final integration for kernel 5.14. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Sunday, Sept 12, 2021 through Sunday, Sept 19, 2021. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what to test and how to test it. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We also have a document that provides all of the steps in written form.
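
If you want to run the regression suite locally, here is a minimal sketch; the repository URL and script name are assumptions based on the Fedora kernel-tests project, so defer to the wiki page if they differ:

# Clone the Fedora kernel regression tests and run the default set
git clone https://pagure.io/kernel-tests.git
cd kernel-tests
# some of the tests need root privileges; see the repository README for configuration
sudo ./runtests.sh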

Happy testing, and we hope to see you on test day.

The post Contribute at the Fedora Linux 35 Test Week for Kernel 5.14 appeared first on Fedora Community Blog.

Autofirma en Linux

Posted by Pablo Iranzo Gómez on September 08, 2021 11:00 PM

Recently, I was using a computer that I do not use very often, but I needed to put a signature on a PDF (and the recipient not only wanted to see the ‘digital’ signature, but also to have a ‘signed’ image).

Autofirma, downloadable from the electronic signature portal, fortunately has a version for Fedora in RPM format that can perform this function.

Unfortunately, even though I installed it and some additional dependencies were pulled in, the program did not start as it should.

Finally, when I went back to using my usual machine, I saw that a dependency was missing: Java was installed, but apparently it also needed the java-11-openjdk package in addition to the already installed java-1.8.0-openjdk.
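
A minimal sketch of the fix, assuming the package names from the standard Fedora repositories:

# install the additional JDK that Autofirma apparently needs,
# alongside the already present java-1.8.0-openjdk
sudo dnf install java-11-openjdk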

I hope you find this note if it fails for you!

QCoro 0.2.0 Release Announcement

Posted by Daniel Vrátil on September 08, 2021 02:00 PM

Just about a month after the first official release of QCoro, a library that provides C++ coroutine support for Qt, here’s 0.2.0 with some big changes. While the API is backwards compatible, users updating from 0.1.0 will have to adjust their #include statements when including QCoro headers.

QCoro 0.2.0 brings the following changes:

Library modularity

The code has been reorganized into three modules (and thus three standalone libraries): QCoroCore, QCoroDBus and QCoroNetwork. QCoroCore contains the elementary QCoro tools (QCoro::Task, the qCoro() wrapper etc.) and coroutine support for some QtCore types. The QCoroDBus module contains coroutine support for types from the QtDBus module, and equally the QCoroNetwork module contains coroutine support for types from the QtNetwork module. The latter two modules are optional; the library can be built without them. It also means that an application that only uses, say, QtNetwork and has no DBus dependency will no longer get QtDBus pulled in through QCoro, as long as it only links against libQCoroCore and libQCoroNetwork. The reorganization will also allow for future support of additional Qt modules.

Headers clean up

The include headers in QCoro were a bit of a mess, and in 0.2.0 they all got a unified form. All public header files now start with qcoro (e.g. qcorotimer.h, qcoronetworkreply.h etc.), and QCoro also provides CamelCase headers now. Thus users should simply do #include <QCoroTimer> if they want coroutine support for QTimer.

The reorganization of headers makes QCoro 0.2.0 incompatible with previous versions, and any users of QCoro will have to update their #include statements. I’m sorry about this extra hassle, but this brings much-needed sanity into the header organization and naming scheme.

Docs update

The documentation has been updated to reflect the reorganization as well as some internal changes. It should be easier to understand now and will hopefully make it easier for users to get started with QCoro.

Internal API cleanup and code de-duplication

Historically, certain types that can be directly co_awaited with QCoro, for instance QTimer, had their coroutine support implemented differently than types that have multiple asynchronous operations and thus have coroutine-friendly wrapper classes (like QIODevice and its QCoroIODevice wrapper). In 0.2.0 I have unified the code so that even the coroutine support for simple types like QTimer is implemented through wrapper classes (so there’s a QCoroTimer now).


Download

You can download QCoro 0.2.0 here or check the latest sources on QCoro GitHub.

More About QCoro

If you are interested in learning more about QCoro, go read the documentation, look at the first release announcement, which contains a nice explanation and example, or watch the recording of my talk about C++20 coroutines and QCoro from this year’s Akademy.

Apps for daily needs part 5: video editors

Posted by Fedora Magazine on September 08, 2021 08:00 AM

Video editing has become a popular activity. People need video editors for various reasons, such as work, education, or just a hobby. There are also now many platforms for sharing video on the internet. Almost all social media and chat messengers provide features for sharing videos. This article will introduce some of the open source video editors that you can use on Fedora Linux. You may need to install the software mentioned. If you are unfamiliar with how to add software packages in Fedora Linux, see my earlier article Things to do after installing Fedora 34 Workstation. Here is a list of a few apps for daily needs in the video editors category.
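
If you need to install any of the editors covered below, here is a minimal sketch; it assumes Kdenlive and Pitivi are packaged in the Fedora repositories and Shotcut is available on Flathub:

# Kdenlive and Pitivi from the Fedora repositories
sudo dnf install kdenlive pitivi
# Shotcut from Flathub (enable the Flathub remote first if needed)
flatpak install flathub org.shotcut.Shotcut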

Kdenlive

When anyone asks about an open source video editor on Linux, the answer that often comes up is Kdenlive. It is a very popular video editor among open source users. This is because its features are complete for general purposes and are easy to use by someone who is not a professional.

Kdenlive supports multi-track, so you can combine audio, video, images, and text from multiple sources. This application also supports various video and audio formats without having to convert them first. In addition, Kdenlive provides a wide variety of effects and transitions to support your creativity in producing cool videos. Some of the features that Kdenlive provides are titler for creating 2D titles, audio and video scopes, proxy editing, timeline preview, keyframeable effects, and many more.

[Image: Apps for daily needs: video editor: kdenlive]

More information is available at this link: https://kdenlive.org/en/


Shotcut

Shotcut has more or less the same features as Kdenlive. This application is a general-purpose video editor. It has a fairly simple interface, but with complete features to meet the various needs of your video editing work.

Shotcut has a complete set of features for a video editor, ranging from simple editing to high-level capabilities. It also supports various video, audio, and image formats. You don’t need to worry about your work history, because this application has unlimited undo and redo. Shotcut also provides a variety of video and audio effects features, so you have freedom to be creative in producing your video works. Some of the features offered are audio filters, audio mixing, cross fade audio and video dissolve transition, tone generator, speed change, video compositing, 3 way color wheels, track compositing/blending mode, video filters, etc.

[Image: Apps for daily needs: video editor: shotcut]

More information is available at this link: https://shotcut.org/


Pitivi

Pitivi will be the right choice if you want a video editor that has an intuitive and clean user interface. You will feel comfortable with how it looks and will have no trouble finding the features you need. This application is classified as very easy to learn, especially if you need an application for simple editing needs. However, Pitivi still offers a variety of features, like trimming & cutting, sound mixing, keyframeable audio effects, audio waveforms, volume keyframe curves, video transitions, etc.

[Image: Apps for daily needs: video editor: pitivi]

More information is available at this link: https://www.pitivi.org/


Cinelerra

Cinelerra is a video editor that has been in development for a long time. There are tons of features for your video work such as built-in frame render, various video effects, unlimited layers, 8K support, multi-camera support, video/audio sync, render farm, motion graphics, live preview, etc. This application may not be suitable for those who are just learning. I think it will take you a while to get used to the interface, especially if you are already familiar with other popular video editor applications. But Cinelerra will still be an interesting choice as your video editor.

[Image: Apps for daily needs: video editor: cinelerra]

More information is available at this link: http://cinelerra.org/


Conclusion

This article presented four video editor apps for your daily needs that are available on Fedora Linux. Actually there are many other video editors that you can use in Fedora Linux. You can also use Olive (Fedora Linux repo), OpenShot (rpmfusion-free), Flowblade (rpmfusion-free) and many more. Each video editor has its own advantages. Some are better at correcting color, while others are better at a variety of transitions and effects. Some are better when it comes to how easy it is to add text. Choose the application that suits your needs. Hopefully this article can help you to choose the right video editor. If you have experience in using these applications, please share your experience in the comments.

Working with Raspberry PI4 systems

Posted by Stephen Smoogen on September 07, 2021 02:43 PM

While my current work is aimed at ARM-64 hardware, many of the boards are not Server Ready hardware and thus do not have things like UEFI to boot, ACPI to query the hardware stack, or various other things which are added later as firmware updates. They also tend to come as ‘developer kit boards’ costing US$6000.00 or more, which is hard to justify having at home. {Sorry kid, no college this semester… Dad bought himself a board that the customer may dump next week.}

In looking for proxy systems, my team has been focusing first on the classic ARM board for small projects: the Raspberry Pi. The Raspberry Pi 4 with 4 GB of RAM works out as a proxy for all kinds of ‘low-end’ systems where you may need to play with a small GPU and try to make it work with a Server Ready operating system like CentOS Stream.

Getting the hardware


There are several places to get the hardware. While I used to get things from Adafruit, they did not have an IoT kit set up for the 4 series, so I ended up going with CanaKit from Amazon just so I could get a bundle of parts together. Going from past experience with MMC cards burning out after a couple of days, I bought three 32 GB cards to put the OS on. So far the cards have lasted longer than I expected, but that just means they will burn out any day now.

When getting a Raspberry Pi, I highly recommend making sure you get the correct power supply, a USB-to-serial connector for the GPIO, and, if you are using an external drive, a separately powered disk drive. I have found that while the Raspberry Pi 4 uses a larger power supply than the 1, 2, or 3 series, attaching a non-powered USB drive can be problematic on boot (the SSD one I had would just leave me with the rainbow picture of doom).

Setting up the hardware


For the Raspberry Pi 4, if you are using it to compile or build things, make sure you have correctly sized heat sinks for the CPU and possibly a fan (and maybe a replacement fan for when the first one dies). Then attach a serial cable to pins 6 (ground), 8 (TXD), and 10 (RXD). Make sure you do not attach anything to pins 1, 2, or 3, or you could be looking for a new Pi or computer. The serial console is useful for when you attempt to boot a new kernel config and the HDMI connector is no longer functional afterwards.

On another computer attach the USB connector and you can use the screen or minicom commands to see output from the system on boot. On my test system, I was able to capture the following:

$ screen -c /dev/null -L /dev/ttyUSB0 115200
recover4.elf not found (6)
recovery.elf not found (6)
Read start4x.elf bytes 2983784 hnd 0x000013b1 hash '3f7b34a64191a848'
Read fixup4x.dat bytes 8453 hnd 0x00000d3b hash '59e66162bed1b815'
0x00c03112 0x00000000 0x000003ff
MEM GPU: 32 ARM: 992 TOTAL: 1024
Starting start4x.elf @ 0xfec00200 partition 0

MESS:00:00:05.434998:0: arasan: arasan_emmc_open

Initial setup


Like any hardware setup, it is important to make sure the shipped hardware has up-to-date firmware and configs. For this I took one of the MMC cards and burned the Raspberry Pi OS (Raspbian) image with recommended software onto it. Once this booted up, the tools did a firmware upgrade on the system from whatever had been on the box when it was stored in a depot. This OS is a 32-bit operating system, but it is maintained by the ‘manufacturer’, so it is a good test for ‘did it break in shipment’.
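
A minimal sketch of that update step on Raspberry Pi OS, assuming the stock tooling is present on the image:

# bring the packages on the card up to date
sudo apt update && sudo apt full-upgrade
# apply any pending Pi 4 bootloader EEPROM update
sudo rpi-eeprom-update -a
sudo reboot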

{Pro-tip: Once you have finished updating, shut down the system, take this card out, and put it in a box/bag for later use. At some point things are going to go badly in an install and you won’t know if it’s you, your hardware, or something else. Having a known bootable backup that is supposed to work is good.}

After this it is time to focus on getting ‘bootable’ systems using the base OSes we want to target:

  1. Fedora Linux
  2. CentOS Stream

Fedora 34 Initial Install


Fedora 34 came out as I started working on the project, so I decided to aim for that as an initial OS. The ARM work that the Fedora team is doing is aimed primarily at 64-bit and Server Ready hardware. As such, they do try to make the Raspberry Pi 4 work, but it is listed as possibly problematic. That said, the following worked mostly fine:

  1. Download the raw workstation image
  2. On an existing Fedora 33 system, install arm-image-installer via sudo dnf install arm-image-installer
  3. Insert an mmc into the computer using the appropriate adaptor, and find the disk name.
  4. GNOME will probably be overly helpful and have mounted partitions on the card for you. You will need to unmount them. df -a | grep mmc ; sudo umount /dev/mmcblk0p1; sudo umount /dev/...
  5. write the image to the mmc disk with image-installer:
fedora-arm-image-installer --image=./Fedora-Server-34-1.2.aarch64.raw.xz --media=/dev/mmcblk0 --addkey=a_key.pub --resizefs --showboot --target=rpi4 --addconsole=ttyAMA0,115200
  6. Move the mmc card over to the powered-off Raspberry Pi 4, and prepare to boot up the hardware.
  7. On my Fedora system I started a screen session to watch the fireworks: screen /dev/ttyUSB0 115200
  8. Power on the Raspberry Pi and watch what happens. If you are lucky, the system will eventually boot into Fedora 34 graphical mode. If you aren’t, then it may stay in rainbow mode (like when I found that my SSD drive pulled too much power on boot).
  9. Log into the system and play around a bit. This is a good time to do any updates and such (a minimal sketch follows below). Getting an idea of how ‘fast’/‘slow’ the system is with the defaults is good here too.
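
For the update step in the last item, a minimal sketch using the standard Fedora update flow:

# bring the freshly installed system up to date
sudo dnf upgrade --refresh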

Get serial working


At this point I wanted to make sure I could log into Fedora from the serial port. In order to do this, you need to edit the grub configs, which is done in /etc/default/grub, and then rebuild the config. I moved the grub timeout to 10 seconds to give myself a chance to choose different options, removed rhgb and quiet, and added a console line.

$ sudo -i
# vi /etc/default/grub
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyAMA0,115200 "
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
# grub2-mkconfig -o /etc/grub2-efi.cfg

A test reboot is good here to make sure that when you boot you get past the following:

EFI stub: Booting Linux Kernel...
EFI stub: Using DTB from configuration table
EFI stub: Exiting boot services and installing virtual address map...

Fedora 34 with TianoCore


As stated before, the Raspberry Pi systems are not System Ready and do not have a built-in EFI or ACPI system for a kernel to boot from. Instead, the default kernel usually boots with uboot mapping devices via a device tree so that hardware can be discovered. I am going to say that is about all I really know about the subject. There have been long threads in the Fedora ARM lists over the years going over this versus ACPI, and I think it is better for an expert like Jon Masters or Peter Robinson to explain why ACPI is preferred in Fedora Linux and Red Hat Enterprise Linux (RHEL) rather than a novice like myself.

For the Raspberry Pi 4, the current method to implement a UEFI interface is a port of TianoCore by the Pi Firmware Task Force. TianoCore is an open source implementation and extension of the Intel Extensible Firmware Interface, which was written to replace the 16-bit BIOS used in personal computers since the 1980s. A further extension was the Open Virtual Machine Firmware, which I believe was then used by the Pi Firmware Task Force for their version.

Easier than I thought


In reading various blogs and documentation on the status of the EFI support, I was prepared for a system that would only work via the serial console or might not have networking or other utilities. Instead, I found the process to be fairly ‘painless’ and I only ended up with a non-booting system twice. The general steps were the following:

[ssmoogen@fedora ~]$ mkdir RPi4
[ssmoogen@fedora ~]$ cd RPi4/
[ssmoogen@fedora RPi4]$ wget https://github.com/pftf/RPi4/releases/download/v1.27/RPi4_UEFI_Firmware_v1.27.zip
--2021-06-16 17:36:31-- https://github.com/pftf/RPi4/releases/download/v1.27/RPi4_UEFI_Firmware_v1.27.zip
Resolving github.com (github.com)... 140.82.114.3
Connecting to github.com (github.com)|140.82.114.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
...
RPi4_UEFI_Firmware_v1.27.zip 100%[=====================================================================================================>] 2.92M 14.4MB/s in 0.2s

2021-06-16 17:36:32 (14.4 MB/s) - ‘RPi4_UEFI_Firmware_v1.27.zip’ saved [3064085/3064085]


$ unzip RPi4_UEFI_Firmware_v1.27.zip
Archive: RPi4_UEFI_Firmware_v1.27.zip
inflating: RPI_EFI.fd
inflating: bcm2711-rpi-4-b.dtb
inflating: bcm2711-rpi-400.dtb
inflating: bcm2711-rpi-cm4.dtb
inflating: config.txt
inflating: fixup4.dat
inflating: start4.elf
creating: overlays/
inflating: overlays/miniuart-bt.dtbo
inflating: Readme.md
creating: firmware/
inflating: firmware/Readme.txt
creating: firmware/brcm/
inflating: firmware/brcm/brcmfmac43455-sdio.clm_blob
inflating: firmware/brcm/brcmfmac43455-sdio.bin
inflating: firmware/brcm/brcmfmac43455-sdio.Raspberry
inflating: firmware/brcm/brcmfmac43455-sdio.txt
inflating: firmware/LICENCE.txt

At this point, if you haven’t already, read the documentation. Our main task will be to set up the Raspberry Pi EFI boot partition so that it has the needed data in it. I had seen that this came with dtb files, which I figured needed to be replaced. However, as seen below, all of these are already in Fedora 34 and are of a similar timeframe. The only files we really need to work with are the RPI_EFI.fd and config.txt files.

[ssmoogen@fedora RPi4]$ sudo -i
[sudo] password for ssmoogen:
[root@fedora ~]# cd /boot/efi/
[root@fedora efi]# cp config.txt config-orig.txt # always make a backup!
[root@fedora efi]# rpm -qf /boot/efi/bcm2711-rpi-4-b.dtb
bcm2711-firmware-20210430-1.1a46874.fc34.aarch64
[root@fedora efi]# rpm -qf /boot/efi/overlays/miniuart-bt.dtbo
bcm283x-overlays-20210430-1.1a46874.fc34.aarch64
[root@fedora efi]# cp ~ssmoogen/RPi4/RPI_EFI.fd /boot/efi/

At this point, it is time to replace the config.txt that came with the system with one which can be used for booting the UEFI program. This is where I caused my system to go into rainbow mode a couple of times. In the end, I put in the following file:


#boot in 64-bit mode
arm_64bit=1
# boot into the RPI_EFI firmware
armstub=RPI_EFI.fd
# turn on serial and keep it on in 2ndstage
enable_uart=1
uart_2ndstage=1
bootcode_delay=1
# using this from upstream UEFI config.txt
device_tree_address=0x1f0000
device_tree_end=0x200000
# set up the miniuart and the vc4
dtoverlay=miniuart-bt,vc4-fkms-v3d
disable_commandline_tags=1
disable_overscan=1
enable_gic=1
# set up the GPU ram and HDMI
gpu_mem=128
hdmi_ignore_cec_init=1
max_framebuffers=2
start_x=1

A couple of the commands in it are from a side trip on getting accelerated graphics working in CentOS Stream on Pi. I decided to document it for later as this is getting long. At this point I was able to get a booting system:

recover4.elf not found (6)
recovery.elf not found (6)
Read start4x.elf bytes 2983816 hnd 0x000032fe hash '210478ae179a91d0'
Read fixup4x.dat bytes 8451 hnd 0x00002c88 hash '7716af32619f0771'
0x00c03112 0x00000000 0x000003ff
MEM GPU: 256 ARM: 768 TOTAL: 1024
Starting start4x.elf @ 0xfec00200 partition 0

MESS:00:00:05.550101:0: arasan: arasan_emmc_open
MESS:00:00:05.698627:0: brfs: File read: /mfs/sd/config.txt
MESS:00:00:05.701548:0: brfs: File read: 342 bytes
MESS:00:00:05.821820:0: HDMI1:EDID error reading EDID block 0 attempt 0
MESS:00:00:05.831336:0: HDMI1:EDID error reading EDID block 0 attempt 1
MESS:00:00:05.840843:0: HDMI1:EDID error reading EDID block 0 attempt 2
MESS:00:00:05.850357:0: HDMI1:EDID error reading EDID block 0 attempt 3
MESS:00:00:05.859867:0: HDMI1:EDID error reading EDID block 0 attempt 4
MESS:00:00:05.869382:0: HDMI1:EDID error reading EDID block 0 attempt 5
MESS:00:00:05.878890:0: HDMI1:EDID error reading EDID block 0 attempt 6
MESS:00:00:05.888404:0: HDMI1:EDID error reading EDID block 0 attempt 7
MESS:00:00:05.897914:0: HDMI1:EDID error reading EDID block 0 attempt 8
MESS:00:00:05.907428:0: HDMI1:EDID error reading EDID block 0 attempt 9
MESS:00:00:05.911926:0: HDMI1:EDID giving up on reading EDID block 0
MESS:00:00:05.918043:0: brfs: File read: /mfs/sd/config.txt
MESS:00:00:06.995664:0: gpioman: gpioman_get_pin_num: pin DISPLAY_DSI_PORT not defined
MESS:00:00:07.002961:0: *** Restart logging
MESS:00:00:07.004382:0: brfs: File read: 342 bytes
MESS:00:00:07.072608:0: hdmi: HDMI1:EDID error reading EDID block 0 attempt 0
...
MESS:00:00:07.226856:0: dtb_file 'bcm2711-rpi-4-b.dtb'
MESS:00:00:07.235551:0: dtb_file 'bcm2711-rpi-4-b.dtb'
MESS:00:00:07.248011:0: brfs: File read: /mfs/sd/bcm2711-rpi-4-b.dtb
MESS:00:00:07.251301:0: Loading 'bcm2711-rpi-4-b.dtb' to 0x1f0000 size 0xc042
MESS:00:00:07.289344:0: brfs: File read: 49218 bytes
MESS:00:00:07.465024:0: brfs: File read: /mfs/sd/config.txt
MESS:00:00:07.467674:0: brfs: File read: 342 bytes
MESS:00:00:07.488759:0: brfs: File read: /mfs/sd/overlays/vc4-fkms-v3d.dtbo
MESS:00:00:07.535268:0: Loaded overlay 'vc4-fkms-v3d'
MESS:00:00:07.537248:0: dtparam: cma-256=true
MESS:00:00:07.541611:0: dtparam: miniuart-bt=true
MESS:00:00:07.552794:0: Unknown dtparam 'miniuart-bt' - ignored
MESS:00:00:07.654662:0: brfs: File read: 1446 bytes
MESS:00:00:07.658863:0: Failed to open command line file 'cmdline.txt'
MESS:00:00:08.953730:0: brfs: File read: /mfs/sd/RPI_EFI.fd
MESS:00:00:08.956193:0: Loading 'RPI_EFI.fd' to 0x0 size 0x1f0000
MESS:00:00:08.962019:0: No compatible kernel found
MESS:00:00:08.966520:0: Device tree loaded to 0x1f0000 (size 0xc622)
MESS:00:00:08.974158:0: uart: Set PL011 baud rate to 103448.300000 Hz
MESS:00:00:08.981676:0: uart: Baud rate change done...
MESS:00:00:08.983696:0:NOTICE: BL31: v2.3():v2.3
NOTICE: BL31: Built : 10:40:51, Apr 21 2020
UEFI firmware (version UEFI Firmware v1.27 built at 11:17:17 on May 25 2021)
3hESC (setup), F1 (shell), ENTER (boot)

On the HDMI monitor, you get a nice Raspberry Pi logo and a timeout to choose if you want to go into setup, shell, or boot. If you hit ESC, you will get what looks like a fairly standard BIOS/EFI screen asking if you want to change various settings. At this point it is a good idea to make a change to allow for full memory usage (or you will be limited to 3GB and a slower system). Following the upstream README, go to Device Manager → Raspberry Pi Configuration → Advanced Configuration. At this point you select ‘Limit RAM to 3GB’ and disable it. Save the settings and escape up to the top menu. Choose the boot manager and you should be given the choices of the following operating systems:

Fedora
UEFI Shell
SD/MMC on Arasan SDHCI
UEFI PXEv4 (MAC:??)
UEFI PXEv6 (MAC:??)
UEFI HTTPv4 (MAC:??)
UEFI HTTPv6 (MAC:??)


Choose Fedora and after a short pause, you will get the grub menu. You shouldn’t need to change any defaults, but it is good to have the chance in case you do. Once the kernel has been selected, the system will begin booting and the scary black screen occurs. This is a point where, for some seconds, nothing seems to be happening, and a couple of times I was ready to power off and go back to other configs. Then you will see something similar to the following start scrolling:

[    0.000000] Linux version 5.12.10-300.fc34.aarch64 (mockbuild@buildvm-a64-03.iad2.fedoraproject.org) (gcc (GCC) 11.1.1 20210531 (Red Hat 11.1.1-3), GNU ld version 2.35.1-41.fc34) #1 SMP T
hu Jun 10 13:49:00 UTC 2021
[ 0.000000] efi: EFI v2.70 by https://github.com/pftf/RPi4
[ 0.000000] efi: ACPI 2.0=0x30720018 SMBIOS=0x33e00000 SMBIOS 3.0=0x33de0000 MEMATTR=0x321c2418 RNG=0x33fdb798 MEMRESERVE=0x30375118
[ 0.000000] efi: seeding entropy pool
[ 0.000000] ACPI: Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 0x0000000030720018 000024 (v02 RPIFDN)
...


My belief is that during the pause the kernel and initial ramdisk are being gunzipped in memory for use. Reading from the MMC is slow, and uncompressing the files is slow. A future project may be to see if there is a sizable speedup from just doing this on the filesystem beforehand. In any case, the system will boot and be usable as a workstation.

End of File

This post has gotten on in size, and there were several other side tasks I worked on while doing it. Those will need to be separate posts in the near future.

GSoC report: syslog-ng MacOS support

Posted by Peter Czanik on September 07, 2021 01:08 PM

For the past couple of months, Yash Mathne has been working on testing syslog-ng on MacOS as a GSoC (Google Summer of Code) student. He worked both on x86 and on the freshly released ARM hardware. And we have some good news here to share: while there is still room for improvement, most of syslog-ng works perfectly well on MacOS.

Read my blog for some historical background and the GSoC report: https://www.syslog-ng.com/community/b/blog/posts/gsoc-report-syslog-ng-macos-support

rpminspect-1.6-released

Posted by David Cantrell on September 07, 2021 12:53 PM

rpminspect 1.6 is now available. This release includes a lot of fine tuning and bug fixing for the various tests across multiple Fedora, CentOS, and RHEL releases. The GitHub Actions testing has expanded to cover many more distributions.

The main feature present in the 1.6 release is the handling of what I call the Product Security workflow. The idea is that any finding that says the Product Security team needs to investigate the change should not be something a developer can automatically waive. For example, a package adding a setuid root executable that the product does not already know about. The workflow for this is that the developer adds the new file to the appropriate fileinfo/ file and sends a pull request containing that data to the rpminspect-data project. The Product Security team would then review that change and approve it or not. If it’s approved, the change would be merged and the rpminspect-data package updated and rebuilt.
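
A minimal sketch of that workflow for Fedora; the repository URL and the exact fileinfo/ file name are assumptions, so check the rpminspect-data project you are actually targeting:

# clone the data project, add the new entry, and open a pull request
git clone https://github.com/rpminspect/rpminspect-data-fedora.git
cd rpminspect-data-fedora
git checkout -b add-new-setuid-entry
$EDITOR fileinfo/fedora-rawhide    # hypothetical file name; add the new file's mode, owner, group, and path
git commit -am "Add new setuid executable for Product Security review"
# push the branch and open a pull request for the Product Security team to review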

There are instances where some security findings should be reported slightly differently or even ignored. To handle that on a case-by-case basis, librpminspect supports the security/ rules files that allow you to specify a different reporting level for these findings. The match is performed by path glob(7) specification, package name, and package version. For the package name and version you can specify * to indicate any.

Work on 1.7 has begun. Please file issues and feature requests on the GitHub project page: https://github.com/rpminspect/rpminspect.

General release and build process changes:

  • Expand determine-os.sh to detect Crux Linux and Alt Linux
  • Add %find_lang to the package spec file

Changes to the GitHub Actions CI scripts and files:

  • Support older libraries on CentOS 7
  • Define OPENSSL_VERSION to 0 if it’s undefined
  • Initial set of changes for Alpine Linux
  • Python black formatting fixes
  • Install epel-release in pre.sh on centos7
  • Pass CRYPTOGRAPHY_DONT_BUILD_RUST=1 to pip on centos7
  • Add AlmaLinux 8 to the GitHub Actions extra-ci job
  • Restore previous centos7 pip and setuptools behavior
  • Add almalinux handling to extra-ci.yml
  • s/dnf/yum/g in pre.sh for centos7
  • Add an i386 job to extra-ci.yml
  • Try using i386 command for the i386 jobs
  • Install util-linux for /usr/bin/i386
  • Just handle i386 build in the qemu job
  • Fix 32-bit builds and add Fedora i686 CI targets
  • OS_SUBDIR clean up in the Makefile
  • Fixed typo in the Makefile
  • Need libffi-devel even for i686 builds
  • Set 32-bit build flags in ci.yml
  • Fix installing libffi-devel on i686 jobs
  • Add Rocky Linux 8 to the Extra CI job collection
  • Fix name of Rocky Linux Docker image
  • Add missing Amazon Linux CI files
  • Amazon Linux lacks glibc-devel.i686, so disable some tests
  • Python flake8 and black formatting fixes
  • Install tar before git task on amzn
  • Correct Amazon Linux name in extra-ci.yml
  • Add Mageia Linux to Extra CI
  • Must use dnf on Mageia Linux, not yum
  • clamav-dd -> clamav-db on Mageia Linux
  • Ignore top level docs in CI jobs
  • Add Alt Linux coverage to Extra CI
  • Remove /usr/lib/rpm/shell.req on Alt Linux
  • Remove forced RPMTAG_VENDOR value on Alt Linux
  • Install gcovr via pip for Alt Linux
  • Add Oracle Linux 8 to the extra-ci collection
  • Fix syntax error in utils/determine-os.sh
  • Output “oraclelinux” instead of ${ID} for Oracle Linux
  • Disable fedora:rawhide in ci.yml
  • Support Alt Linux p10 (platform 10)
  • ShellCheck fixes for determine-os.sh
  • Run extra-ci jobs for changes to utils/ files

rpminspect(1) changes:

  • Handle SIGWINCH in the download progress bar display
  • Use sigaction() instead of signal() in rpminspect(1)
  • Discontinue realpath() call on argv[0]
  • Honor -T and -E correctly even with security-focused checks
  • Fix the progress bar display problems for -f -v mode.

Documentation changes:

  • Update the AUTHORS.md file
  • Drop mention of LibreSSL from README.md
  • Add readthedocs rst source files
  • Update translation template
  • Link to generic.yaml from configuration.rst
  • Mention readthedocs.io in README.md
  • Update list of CI Linux distributions in README.md

General bug fix in the library or frontend program:

  • Only look at RPMTAG_SOURCE entries for removed sources
  • Remove unnecessary empty list check in inspect_upstream.c
  • Ignore noarch packages in the arch inspection
  • Drop all HEADER_* defines, switch to NAME_* (#397)
  • desktop: demote as INFO a missing Exec w/ TryExec (#395)
  • Add types block support for the config file (#404)
  • Check return code of yaml_scan_parser()
  • Use a simpler and correct regexp for the disttag inspection (#412)
  • Translate some additional warning messages
  • Use cap_compare() when comparing file capabilities (#410)
  • Skip .spec files in the types inspection
  • Description and Summary changes are reported as INFO
  • Fall back on full license name if there are no abbrevs
  • Follow-on to the license inspection changes for fedora_name
  • Ignore “complex” spec file macros in get_macros()
  • Fix a lot of xmlrpc-c memory leaks in builds.c
  • Support the legacy libcurl API in librpminspect
  • Fix SIGSEGV caused by misplaced xmlrpc_DECREF() call
  • Adjust where the ‘good’ bool is set in the emptyrpm loop
  • Slight code reformatting in inspect_disttag.c
  • Read file capabilities from the RPM header
  • Size origin_matches at 3 rather than 1.
  • Non well-formed XML fails xml, but invalid is info
  • Handle DT_RUNPATH and DT_RPATH owned directories correctly
  • Convert the security rules from a hash table to a list
  • Ensure the annocheck inspection behaves for build comparisons

librpminspect feature or significant change:

  • Add list_contains() to librpminspect
  • Skip source packages in emptyrpm
  • desktop: factor check for Exec
  • desktop: factor check for Icon
  • desktop: reset severity/waiverauth before add_result()
  • Add debugging output to the YAML config parsing code
  • Support relative ignore globs (#404)
  • Report new patches at the INFO level only
  • Allow directories owned by the build in runpath
  • Always run inspections with possible security results
  • Define security_t and secrule_t in librpminspect
  • Add strshorten() to strfuncs.c
  • Add libcurl download progress bars for rpminspect -v
  • Document the escape sequences used for the progress bar
  • Replace get_header_value() with get_rpm_header_value()
  • Remove get_cap() function and fix test_ownership.py
  • Add security rule reading code to librpminspect
  • Removed RESULT_WAIVED from severity_t
  • Improve remedy reporting to tell users what data file to edit
  • Improve permission and ownership reporting strings w/ fileinfo
  • Properly override config file blocks in subsequent reads
  • Move match_path() and ignore_path() to paths.c
  • Set OPENSSL_API_COMPAT in lib/checksums.c
  • Add product security workflow functions and test cases
  • Product security workflow enhancement for SECRULE_SECURITYPATH
  • Product security workflow handling for SECRULE_MODES
  • Product security workflow handling for SECRULE_CAPS
  • Product security workflow handling for SECRULE_SETUID
  • Product security workflow handling for SECRULE_WORLDWRITABLE
  • Product security workflow handling for SECRULE_EXECSTACK
  • Product security workflow handling for inspect_elf.c
  • Do not warn if RPM header is missing a tag
  • Suppress debug output for the config file parsing by default

New inspections or inspection changes (not bug fixes):

  • Output new diagnostics section in rpminspect report (#280)

Test suite commits:

  • Python black fixes for baseclass.py
  • Update unit tests for removal of RESULT_WAIVED
  • Modify the ownership tests for old rpm releases and non-root
  • Increase test suite timeout to 900
  • Support really old versions of RPM (4.0.4) and Alt Linux

See https://github.com/rpminspect/rpminspect/releases/tag/v1.6 for more information.

Where to get these new releases?

Fedora, EPEL 7, and EPEL 8 users can get new builds from the testing updates collection. If you install from the testing update, please consider giving it a thumbs up in Bodhi. Without that, it takes a minimum of two weeks for it to appear in the stable repo.
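
A minimal sketch of pulling the build from updates-testing on Fedora; the exact package set is an assumption:

# install or upgrade rpminspect and the Fedora data files from updates-testing
sudo dnf --enablerepo=updates-testing install rpminspect rpminspect-data-fedora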

What is coming in sudo 1.9.8?

Posted by Peter Czanik on September 07, 2021 09:28 AM

Sudo development is at version 1.9.8 beta 3. There are two major new features: sudo can intercept sub-commands and log sub-commands. In this quick teaser I introduce you to log_subcmds. I hope it is interesting enough for you to test it out and provide feedback.

So, what is log_subcmds good for? There are many UNIX tools that can spawn external applications. You only see vi in the logs, but can you be sure without session recording that your admin only edits what he is supposed to? With log_subcmds you can see all the commands started from an application run through sudo. Or you can see all the commands started from a shell, even without session recording.
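
As a minimal sketch, assuming a sudo 1.9.8 build, the feature is a single sudoers default (edit with visudo):

# /etc/sudoers fragment: log the sub-commands spawned by commands run through sudo
Defaults log_subcmds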

You can read the rest of my blog at https://blog.sudo.ws/posts/2021/08/what-is-coming-in-sudo-1.9.8/

Episode 287 – Is GitHub’s Copilot the new Clippy?

Posted by Josh Bressers on September 06, 2021 12:01 AM

Josh and Kurt talk about GitHub Copilot. What can we learn from a report claiming 40% of code generated by Copilot has security vulnerabilities? Is this the future or just some sort of strange new thing that will be gone as fast as it came?

https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_287_Is_GitHubs_Copilot_the_new_Clippy.mp3

Show Notes