Fedora People

Composable tools for disk images

Posted by Richard W.M. Jones on June 30, 2022 02:17 PM

Over the past 3 or 4 years, my colleagues and I at Red Hat have been making a set of composable command line tools for handling virtual machine disk images. These let you copy, create, manipulate, display and modify disk images using simple tools that can be connected together in pipelines, while at the same time working very efficiently. It’s all based around the very efficient Network Block Device (NBD) protocol and NBD URI specification.

A basic and very old tool is qemu-img:

$ qemu-img create -f qcow2 disk.qcow2 1G

which creates an empty disk image in qcow2 format. Suppose you want to write into this image. We can compose a few programs:

$ touch disk.raw
$ nbdfuse disk.raw [ qemu-nbd -f qcow2 disk.qcow2 ] &

This serves the qcow2 file up over NBD (qemu-nbd) and then exposes that as a local file using FUSE (nbdfuse). Of interest here, nbdfuse runs and manages qemu-nbd as a subprocess, cleaning it up when the FUSE file is unmounted. We can partition the file using regular tools:

$ gdisk disk.raw
Command (? for help): n
Partition number (1-128, default 1): 
First sector (34-2097118, default = 2048) or {+-}size{KMGTP}: 
Last sector (2048-2097118, default = 2097118) or {+-}size{KMGTP}: 
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'
Command (? for help): p
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2097118   1023.0 MiB  8300  Linux filesystem
Command (? for help): w
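
Throughout this, disk.raw behaves like an ordinary raw file of the disk's virtual size, so regular tools can sanity-check it (a quick aside, not part of the original walkthrough):

$ stat -c %s disk.raw    # should report 1073741824, i.e. the 1G virtual size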

Let’s fill that partition with some files using guestfish and unmount it:

$ guestfish -a disk.raw run : \
  mkfs ext2 /dev/sda1 : mount /dev/sda1 / : \
  copy-in ~/libnbd /
$ fusermount3 -u disk.raw
[1]+  Done    nbdfuse disk.raw [ qemu-nbd -f qcow2 disk.qcow2 ]

Now the original qcow2 file is no longer empty but populated with a partition, a filesystem and some files. We can see the space used by examining it with virt-df:

$ virt-df -a disk.qcow2 -h
Filesystem                Size   Used  Available  Use%
disk.qcow2:/dev/sda1     1006M    52M       903M    6%

Now let’s see the first sector. You can’t just “cat” a qcow2 file because it’s a complicated format understood only by qemu. I can assemble qemu-nbd, nbdcopy and hexdump into a pipeline, where qemu-nbd converts the qcow2 format to raw blocks, and nbdcopy copies those out to a pipe:

$ nbdcopy -- [ qemu-nbd -r -f qcow2 disk.qcow2 ] - | \
  hexdump -C -n 512
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000001c0  02 00 ee 8a 08 82 01 00  00 00 ff ff 1f 00 00 00  |................|
000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa  |..............U.|
00000200

How about, instead of a local file, we start with a disk image hosted on a web server, and compressed? We can do that too. Let’s start by querying the size, composing nbdkit’s curl plugin, xz filter and nbdinfo. nbdkit’s --run option composes nbdkit with an external program, connecting them together over an NBD URI ($uri).

$ web=http://mirror.bytemark.co.uk/fedora/linux/development/rawhide/Cloud/x86_64/images/Fedora-Cloud-Base-Rawhide-20220127.n.0.x86_64.raw.xz
$ nbdkit curl --filter=xz $web --run 'nbdinfo $uri'
protocol: newstyle-fixed without TLS
export="":
	export-size: 5368709120 (5G)
	content: DOS/MBR boot sector, extended partition table (last)
	uri: nbd://localhost:10809/
...

Notice it prints the uncompressed (raw) size. Fedora already provides a qcow2 equivalent, but we can also make our own by composing nbdkit, curl, xz, nbdcopy and qemu-nbd:

$ qemu-img create -f qcow2 cloud.qcow2 5368709120 -o preallocation=metadata
$ nbdkit curl --filter=xz $web \
    --run 'nbdcopy -p -- $uri [ qemu-nbd -f qcow2 cloud.qcow2 ]'

Why would you do that instead of downloading and uncompressing? In this case it wouldn’t matter much, but in the general case the disk image might be enormous (terabytes) and you might not have enough local disk space for it. Assembling tools into pipelines means you don’t need to keep an intermediate local copy at any point.
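
For comparison, here is a rough sketch of the conventional route, which needs local disk space for both the compressed download and the uncompressed raw image before the qcow2 can be written (same $web URL as above; the intermediate filenames are illustrative):

$ curl -o Fedora-Cloud-Base.raw.xz $web
$ xz -d Fedora-Cloud-Base.raw.xz
$ qemu-img convert -f raw -O qcow2 Fedora-Cloud-Base.raw cloud.qcow2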

We can find out what we’ve got in our new image using various tools:

$ qemu-img info cloud.qcow2 
image: cloud.qcow2
file format: qcow2
virtual size: 5 GiB (5368709120 bytes)
disk size: 951 MiB
$ virt-df -a cloud.qcow2  -h
Filesystem              Size       Used  Available  Use%
cloud.qcow2:/dev/sda2   458M        50M       379M   12%
cloud.qcow2:/dev/sda3   100M       9.8M        90M   10%
cloud.qcow2:/dev/sda5   4.4G       311M       3.6G    7%
cloud.qcow2:btrfsvol:/dev/sda5/root
                        4.4G       311M       3.6G    7%
cloud.qcow2:btrfsvol:/dev/sda5/home
                        4.4G       311M       3.6G    7%
cloud.qcow2:btrfsvol:/dev/sda5/root/var/lib/portables
                        4.4G       311M       3.6G    7%
$ virt-cat -a cloud.qcow2 /etc/redhat-release
Fedora release 36 (Rawhide)

If we wanted to play with the guest in a sandbox, we could stand up an in-memory NBD server populated with the cloud image and connect it to qemu using standard NBD URIs:

$ nbdkit memory 10G
$ qemu-img convert cloud.qcow2 nbd://localhost 
$ virt-customize --format=raw -a nbd://localhost \
    --root-password password:123456 
$ qemu-system-x86_64 -machine accel=kvm \
    -cpu host -m 2048 -serial stdio \
    -drive file=nbd://localhost,if=virtio 
...
fedora login: root
Password: 123456

# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0     11:0    1 1024M  0 rom  
zram0  251:0    0  1.9G  0 disk [SWAP]
vda    252:0    0   10G  0 disk 
├─vda1 252:1    0    1M  0 part 
├─vda2 252:2    0  500M  0 part /boot
├─vda3 252:3    0  100M  0 part /boot/efi
├─vda4 252:4    0    4M  0 part 
└─vda5 252:5    0  4.4G  0 part /home
                                /

We can even find out what changed between the in-memory copy and the pristine qcow2 version (quite a lot as it happens):

$ virt-diff --format=raw -a nbd://localhost --format=qcow2 -A cloud.qcow2 
- d 0755       2518 /etc
+ d 0755       2502 /etc
# changed: st_size
- - 0644        208 /etc/.updated
- d 0750        108 /etc/audit
+ d 0750         86 /etc/audit
# changed: st_size
- - 0640         84 /etc/audit/audit.rules
- d 0755         36 /etc/issue.d
+ d 0755          0 /etc/issue.d
# changed: st_size
... for several pages ...
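
When you are done experimenting, the whole sandbox can be discarded by stopping the in-memory NBD server; the pristine cloud.qcow2 was never modified (a cleanup step, not in the original post):

$ pkill -f 'nbdkit memory'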

In conclusion, we’ve got a couple of ways to serve disk content over NBD, a set of composable tools for copying, creating, displaying and modifying disk content either from local files or over NBD, and a way to pipe disk data between processes and systems.

We use this in virt-v2v which can suck VMs out of VMware to KVM systems, efficiently, in parallel, and without using local disk space for even the largest guest.

Nebula on Fedora

Posted by Fabio Alessandro Locati on June 30, 2022 12:00 AM
In the last year, I moved more and more data and services to hardware that I can directly control. A direct consequence of this is that I started to run more hardware at my house. This change has been very positive, but it is suboptimal when not at home. All services I run are secure and could be shared directly on the web, but I prefer a more cautious approach. For this reason, I decided to create a VPN.

11th Birthday of the Fedora Fans Website

Posted by Fedora fans on June 29, 2022 03:11 PM

Greetings, dear friends,

I hope you are all healthy and well. Another year of the Fedora Fans website’s activity has passed, and we are honored to announce its 11th birthday.

We hope that over these 11 years of activity we have made a contribution, however small, to the free software community. I personally thank all the friends who have helped us along this path.

Stay healthy and stay Fedora!

hos7ein

The post 11th Birthday of the Fedora Fans Website first appeared on Fedora Fans (طرفداران فدورا).

Fedora Pride Social hour 2022

Posted by Fedora Community Blog on June 29, 2022 10:00 AM

Hey everyone! Happy Pride month!

This week’s Fedora Social is going to celebrate the end of Pride month as well as all our LGBTQ+ community members. Anyone is welcome! The more the merrier.

Thu, June 30, 2022 – 14:00-15:00 UTC

Join us to chat, play games, and just hang out together!

We’ll meet in the Element chat room and (usually) launch a Jitsi video call.

Note that this week is the “Early” time slot — we alternate with a “Late” one to better accommodate Fedoreans from different time zones. We’re so sorry if you can’t make this Fedora social but hope to see you at the next one!

The post Fedora Pride Social hour 2022 appeared first on Fedora Community Blog.

DIY Embroidery with Inkscape and Ink/Stitch

Posted by Fedora Magazine on June 29, 2022 08:00 AM

Introduction

Embroidered shirts are great custom gifts and can also be a great way to show your love for open source. This tutorial will demonstrate how to design your own custom embroidered polo shirt using Inkscape and Ink/Stitch. Polo shirts are often used for embroidery because they do not tear as easily as t-shirts when pierced by embroidery needles, though with care t-shirts can also be embroidered. This tutorial is a follow-on article to Make More with Inkscape and Ink/Stitch and provides complete steps to create your design.

Logo on Front of Shirt

Pictures with only a few colors work well for embroidery. Let us use a public domain black and white SVG image of Tux created by Ryan Lerch and Garret LeSage.

Black and white image of Tux

Download this public domain image, tux-bw.svg, to your computer, and import it into your document as an editable SVG image using File>Import...

Image of Tux with text to be embroidered

Use a Transparent Background

It is helpful to have a checkerboard background to distinguish background and foreground colors. Click File>Document Properties… and then check the box to enable a checkerboard background.

Dialog box to enable checkerboard document background

Then close the document properties dialog box. You can now distinguish between colors used on Tux and the background color.

Tux can be distinguished from the document background

Use a Single Color For Tux

Type s to use the Select and Transform objects tool, and click on the image of Tux to select it. Then click on Object>Fill and Stroke, in the menu. Type n to use the Edit paths by Nodes tool and click on a white portion of Tux. Within the Fill and Stroke pane change the fill to No paint to make this portion of Tux transparent.

Tux in one color

This leaves the black area to be embroidered.

Enable Embroidering of Tux

Now convert the image for embroidery. Type s to use the Select and Transform objects tool and click on the image of Tux to select it again. Choose Extensions>Ink/Stitch>Fill Tools>Break Apart Fill Objects … In the resulting pop up, choose Complex, click Apply, and wait for the operation to complete.

Dialog to Break Apart Fill Objects

For further explanation of this operation, see the Ink/Stitch documentation.

Resize Document

Now resize the area to be embroidered. A good size is about 2.75 inches by 2.75 inches. Press s to use the Select and Transform objects tool, and select Tux, hold down the shift key, and also select any text area. Then choose Object>Transform …, click on Scale in the dialogue box, change the measurements to inches, check the Scale proportionally box and choose a width of 2.75 inches, and click Apply.

Resized drawing

Before saving the design, reduce the document area to just fit the image. Press s to use the Select and Transform objects tool, then select Tux.

Objects selected to determine resize area

Choose File>Document Properties… then choose Resize to content: or press Ctrl+Shift+R

Dialog to resize page

The document is resized.

Resized document

Save Your Design

You now need to convert your file to an embroidery file. A very portable format is DST (Tajima Embroidery Format), which unfortunately does not carry color information, so you will need to indicate color information for the embroidery separately. First save your design as an Inkscape SVG file so that you retain a format that you can easily edit again. Choose File>Save As, then select the Inkscape SVG format, enter a name for your file, for example AnotherAwesomeFedoraLinuxUserFront.svg, and save your design. Then choose File>Save As and select the DST file format and save your design. Generating this file requires calculating stitch locations, which may take a few seconds. You can preview the DST file in Inkscape, but another very useful tool is vpype-embroidery.

Install vpype-embroidery on the command line using a Python virtual environment via the following commands:

virtualenv test-vpype
source test-vpype/bin/activate
pip install matplotlib
pip install vpype-embroidery
pip install vpype[all]

Preview your DST file (in this case named AnotherAwesomeFedoraLinuxUserFront.dst; replace this with your filename if it differs) using this command:

vpype eread AnotherAwesomeFedoraLinuxUserFront.dst show
Preview of design created by vpype-embroidery

Check the dimensions of your design. If you need to resize it, resize the SVG design file before exporting it as a DST file. Resizing the DST file is not recommended since it contains stitch placement information; regenerate this placement information from the resized SVG file to obtain a high quality embroidered result.
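
If you just want the numbers rather than a preview window, vpype can also print geometry statistics, including the bounds of the design (a small sketch; this assumes the stat command available in current vpype releases):

vpype eread AnotherAwesomeFedoraLinuxUserFront.dst stat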

Text on the Back of the Shirt

Now create a message to put on the back of your polo shirt. Create a new Inkscape document using File>New. Then choose Extensions>Ink/Stitch>Lettering.

Choose a font, for example Geneva Simple Sans created by Daniel K. Schneider in Geneva. If you want to resize your text, do so at this point using the scale section of the dialog box since resizing it once it is in Inkscape will distort the resulting embroidered pattern. Add your text,

Another Awesome 
Fedora Linux User
Lettering creation dialog box

A preview will appear; click on Quit.

Preview image of text to be embroidered

Then click on Apply and Quit in the lettering creation dialog box. Your text should appear in your Inkscape document.

Resulting text in Inkscape document

Create a checkered background and resize the document to content by opening up the document properties dialog box File>Document Properties…

Document properties dialog box

Your document should now be a little larger than your text.

Text in resized document

Clean Up Stitches

Many commercial embroidery machines support jump instructions which can save human time in finishing the embroidered garment. Examine the text preview image. A single continuous thread sews all the letters. Stitches joining the letters are typically removed. These stitches can either be cut by hand after the embroidery is done, or they can be cut by the embroidery machine if it supports jump instructions. Ink/Stitch can add these jump instructions.

Add jump instructions by selecting View>Zoom>Zoom Page to enlarge the view of the drawing. Press s to choose the Select and transform objects tool. Choose Extensions>Ink/Stitch>Commands>Attach Commands to Selected Objects. A dialog box should appear; check just the Trim thread after sewing this object option.

Attach commands dialog

Then click in the drawing area and select the first letter of the text

Select first letter of the text

Then click Apply, and some cut symbols should appear above the letter.

Scissor symbols above first letter

Repeat this process for all letters.

Separately embroidered letters

Now save your design, as before, in both SVG and DST formats. Check the likely quality of the embroidered text by previewing your DST file (in this case named AnotherAwesomeFedoraLinuxUserBack.dst – replace this with the filename you chose), using

vpype eread AnotherAwesomeFedoraLinuxUserBack.dst show
Preview of text to be embroidered created by vpype-embroidery

Check the dimensions of your design. If you need to resize it, resize the SVG design file before exporting it as a DST file.

Create a Mockup

To show the approximate placement of your design on the polo shirt, create a mockup. You can then send this to an embroidery company with your DST file. The Fedora Design Team has a wiki page with examples of mockups. An example mockup made using Kolourpaint is below.

Mockup image of polo shirt with design

You can also use an appropriately licensed drawing of a polo shirt, for example from Wikimedia Commons.

Example Shirt

Pictures of a finished embroidered polo shirt are below

Front of embroidered shirt. Back of embroidered shirt. Closeup of embroidered Tux. Closeup of embroidered text.

Further Information

A three color image of Tux is also available, but a single color is the easiest way to achieve good embroidered results. This shaded, multi-color image would need to be adapted before it could be used for embroidery. Additional tutorial information is available on the Ink/Stitch website.

Some companies can do the embroidery for you once given a DST file. Search the internet for machine embroidery services close to you, or for a hackerspace with an embroidery machine you can use.

This article has benefited from many helpful suggestions from Michael Njuguna of Marvel Ark and Brian Lee of Embroidery Your Way.

WebExtension Support in Epiphany

Posted by Patrick Griffis on June 29, 2022 04:00 AM

I’m excited to help bring WebExtensions to Epiphany (GNOME Web) thanks to investment from my employer Igalia. In this post, I’ll go over a summary of how extensions work and give details on what Epiphany supports.

Web browsers have supported extensions in some form for decades. They allow the creation of features that would otherwise be part of a browser but can be authored and experimented with more easily. They’ve helped develop and popularize ideas like ad blocking, password management, and reader modes. Sometimes, as with very popular features like these, browsers themselves then begin trying to apply the lessons upstream.

Toward universal support

For most of this history, web extensions have used incompatible browser-specific APIs. This began to change in 2015 with Firefox adopting an API similar to Chrome’s. In 2020, Safari also followed suit. We now have the foundations of an ecosystem-wide solution.

“The foundations of” is an important thing to understand: There are still plenty of existing extensions built with browser-specific APIs and this doesn’t magically make them all portable. It does, however, provide a way towards making portable extensions. In some cases, existing extensions might just need some porting. In other cases, they may utilize features that aren’t entirely universal yet (or, may never be).

Bringing Extensions to Epiphany

With version 43.alpha Epiphany users can begin to take advantage of some of the same powerful and portable extensions described above. Note that there are quite a few APIs that power this and with this release we’ve covered a meaningful segment of them but not all (details below). Over time our API coverage and interoperability will continue to grow.

What WebExtensions can do: Technical Details

At a high level, WebExtensions allow a private privileged web page to run in the browser. This is an invisible Background Page that has access to a browser JavaScript API. This API, given permission, can interact with browser tabs, cookies, downloads, bookmarks, and more.

Along with the invisible background page, it gives a few options to show a UI to the user. One such method is a Browser Action which is shown as a button in the browser’s toolbar that can popup an HTML view for the user to interact with. Another is an Options Page dedicated to configuring the extension.

Lastly, an extension can inject JavaScript directly into any website it has permissions to via Content Scripts. These scripts are given full access to the DOM of any web page they run in. However, content scripts don’t have access to the majority of the browser API; they do, along with the above pages, have the ability to send and receive custom JSON messages to all pages within an extension.

Example usage

For a real-world example, I use Bitwarden as my password manager, so I’ll give a simplified description of how it roughly functions. Firstly, there is a Background Page that does account management for your user. It has a Popup that the user can trigger to interface with your account, passwords, and options. Finally, it also injects Content Scripts into every website you open.

The Content Script can detect all input fields and then wait for a message to autofill information into them. The Popup can request the details of the active tab and, upon you selecting an account, send a message to the Content Script to fill this information. This flow does function in Epiphany now but there are still some issues to iron out for Bitwarden.

Epiphany’s current support

Epiphany 43.alpha supports the basic structure described above. We are currently modeling our behavior after Firefox’s ManifestV2 API which includes compatibility with Chrome extensions where possible. Supporting ManifestV3 is planned alongside V2 in the future.

As of today, we support the majority of:

  • alarms - Scheduling of events to trigger at specific dates or times.
  • cookies - Management and querying of browser cookies.
  • downloads - Ability to start and manage downloads.
  • menus - Creation of context menu items.
  • notifications - Ability to show desktop notifications.
  • storage - Storage of extension private settings.
  • tabs - Control and monitoring of browser tabs, including creating, closing, etc.
  • windows - Control and monitoring of browser windows.

A notable missing API is webRequest which is commonly used by blocking extensions such as uBlock Origin or Privacy Badger. I would like to implement this API at some point however it requires WebKitGTK improvements.

For specific API details please see Epiphany’s documentation.

What this means today is that users of Epiphany can write powerful extensions using a well-documented and commonly used format and API. What this does not mean is that most extensions for other browsers will just work out of the box, at least not yet. Cross-browser extensions are possible but they will have to only require the subset of APIs and behaviors Epiphany currently supports.

How to install extensions

This support is still considered experimental so do understand this may lead to crashes or other unwanted behavior. Also please report issues you find to Epiphany rather than to extensions.

You can install the development release and test it like so:

flatpak remote-add --if-not-exists gnome-nightly https://nightly.gnome.org/gnome-nightly.flatpakrepo
flatpak install gnome-nightly org.gnome.Epiphany.Devel
flatpak run --command=gsettings org.gnome.Epiphany.Devel set org.gnome.Epiphany.web:/org/gnome/epiphany/web/ enable-webextensions true

You will now see Extensions in Epiphany’s menu and if you run it from the terminal it will print out any message logged by extensions for debugging. You can download extensions most easily from Mozilla’s website.
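
For example, a minimal way to launch the development build from a terminal so that extension log messages appear on stdout (assuming the install and gsettings steps above have been done):

flatpak run org.gnome.Epiphany.Devel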

Untitled Post

Posted by Zach Oglesby on June 28, 2022 06:03 PM

Who doesn’t throw plates at the wall in a dining room and clear tables. I know I did that all the time when I was in the military. 🫣

Using fwupdmgr to update NVME firmware

Posted by Peter Robinson on June 28, 2022 05:38 PM

The fabulous fwupdmgr provides the ability to easily update firmware that is published to the Linux Vendor Firmware Service (LVFS), but it can also be used to apply updates that aren’t necessarily in LVFS. One type of firmware it supports updating is NVME firmware, on basically any NVME device, because the standard specifies a standardised mechanism for updating the firmware on all NVME devices.

I needed to update the NVME firmware in an aarch64 device to see if it fixed an issue I was seeing. The update tools Crucial supports for the P2 were of course x86 only. The ISO download actually contained a little Linux OS in an initrd. The advice from Richard, the fwupd technical lead, was to “Look for a ~4mb high entropy blob”, so I mounted the iso, extracted the initrd, and then used fwupdtool to apply the new firmware.

Find the NVME and check the firmware version:

$ cat /sys/class/nvme/nvme0/firmware_rev 
P2CR010 
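
Before extracting anything, it is worth confirming that fwupd can see the drive at all; listing devices is harmless (a quick check, output omitted here):

$ fwupdmgr get-devices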

So once I’d downloaded the update file I did the following to extract and update the firmware. Note I did this all as root; you can do most of it as non-root.

# unzip iso_p2cr012.zip
# mount -o loop iso_p2cr012.iso /mnt/
# mkdir ~/tmp
# cp /mnt/boot/corepure64.gz tmp/
# cd tmp
# gunzip corepure64.gz
# cpio -iv < corepure64
# fwupdtool install-blob opt/firmware/P2CR012/1.bin
Loading…                 [-                                      ]
Loading…                 [-                                      ]
Choose a device:
0.	Cancel
1.	71b677ca0f1bc2c5b804fa1d59e52064ce589293 (CT250P2SSD8)
2.	2270d251f7c1dc37a29a2aa720a566aa0fa0ecde (spi1.0)
1
Waiting…                 [************************************** ] Less than one minute remaining…
An update requires a reboot to complete. Restart now? [y|N]: y

And away it goes, a reboot later and did it work?

$ cat /sys/class/nvme/nvme0/firmware_rev 
P2CR012

YES!!

Outreachy Interns introduction – May to August 2022

Posted by Fedora Community Blog on June 28, 2022 08:00 AM

Last month, Outreachy announced the interns selected for the May 2022 to August 2022 round, and we have three interns with us. This blog introduces them to the community. If you see them around, please welcome them and share some virtual cookies.

Outreachy is a paid, remote internship program that helps traditionally underrepresented people in tech make their first contributions to Free and Open Source Software (FOSS) communities. The Fedora Project is participating in this round of Outreachy as a mentoring organization. We asked our Outreachy interns to tell us some things about themselves! Here they are, in their own words.

I’m Nikita Tripathi and I like to amalgamate my Artistic Outlook into my Designs. I am currently a sophomore pursuing BTech in Mechanical Engineering at Indian Institute of Technology, Roorkee. Creating artworks using different mediums, playing musical instruments (like Sitar, Flute, Xylophone & Keyboard) and sometimes reading thrillers are some of my favourite forms of escapism.

While exploring what interests me, I tumbled upon design during my freshman year and came to learn about Fedora through my seniors who had applied for Outreachy at the Fedora Project. Contributing to the Fedora Badges Design Project is my first brush at Open Source, at being an intern and at working with people from different time zones. Calling the experience so far awesome, would be an understatement. My mentors Marie Nordin and Smera Goel, have been very supportive and I am grateful for the opportunity. I look forward to learning and growing more with the team.

Nikita Tripathi | FAS Id: nekonya3

Hi, I’m Rupanshi! I’m pursuing Computer Science and Engineering at the Thapar Institute of Engineering and Technology, Patiala. I’ve dabbled in quite a few fields, including but not limited to Backend Development, DevOps, CI/CD, and Server Management and am currently exploring the field of Data Science. In my spare time, I enjoy reading books and learning new recipes (though I absolutely suck at cooking :p)

My journey, so far, with the Fedora project has been one hell of a learning experience. I’m working on the project alongside some of the nicest and most supportive mentors. There’s a very popular quote about how it is outside your comfort zone that the magic happens, and I have only recently started to realise the truth behind it. While contributing to the Zezere project under Fedora-IoT, I’ve come to love a lot of things outside my comfort zone. It has reignited the spark of learning something new.

Needless to say, I’m glad I applied, and I’m looking forward to learning a lot of new things.

Rupanshi Jain | FAS Id: rupanshijain

Hi, I’m Anushka. An undergrad at IIT Roorkee, India, living an engineer’s life in a designer’s shoe. I’m a self-taught product designer based in India. Recently, I’ve been a product design intern at Swiggy and Razorpay. I’m a fan of good copy and visual storytelling. I paint sometimes or create mood boards of paintings I’d love to recreate! I try to go for a run most days. I also recreate recipes I find on Instagram when I get bored of eating everyday meals. I dream about going on a world tour someday:) 

For the longest time, I’ve watched peers at college get into Google Summer of Code (GSoC) and code their summers away. I felt that design doesn’t belong in open-source! So, finding out about Outreachy was life-changing. My major motivation is learning and advocating for open-source design. I had a fantastic time last month when I collaborated with the docs and design team and learnt to use an open-source tool Penpot.

I look forward to meeting new teams and people (remotely and in-person someday). I’m super excited for the journey ahead:)

Anushka Jain | FAS Id: likeanushkkaa

Best wishes for their internship period and beyond!

We wish them all a successful journey as Outreachy interns and look forward to hearing about their experiences and project updates as we go.

The post Outreachy Interns introduction – May to August 2022 appeared first on Fedora Community Blog.

rpminspect-1.10 released

Posted by David Cantrell on June 27, 2022 07:02 PM

rpminspect 1.10 is now available. The last release was in March of 2022. This release is definitely the largest so far. Nearly 200 individual pull requests and 147 reported issues have been fixed.

The main focus of this release has been stabilization across many packages. We have been running continual tests against all current builds in CentOS Stream 9 to keep finding and fixing bugs. As a result, this release contains a large number of stabilization and reporting improvements.

Work on 1.11 has begun. Please file issues and feature requests on the GitHub project page: https://github.com/rpminspect/rpminspect.

General release and build process changes:

  • Check the results of meson’s run_command()

Config file or data/ file changes:

  • Clarify the ignore block in comments

Changes to the GitHub Actions CI scripts and files:

  • Enable Fedora rawhide again for x86_64 and i686
  • Do not use a specific actions/checkout version for alpinelinux
  • Use actions/checkout@v3 in alpinelinux.yml
  • On alpinelinux, run git config to define the safe directory
  • Run git config command on all GitHub Actions jobs
  • Make sure sh:git is installed for the fedora GHA jobs
  • Update the Slackware Linux GHA job
  • Build clamav with -D ENABLE_JSON_SHARED=ON on Slackware
  • opensuse does not use yum
  • Ensure manual install of rc on OpenSUSE Leap works
  • Install automake and automake for opensuse-leap job
  • Add bison and html2text to opensuse-leap reqs.txt list

rpminspect(1) changes:

  • Add missing format string to errx() calls
  • For fetch-only, do not override the argv counter in the loop
  • Match products with dist tags containing periods
  • Careful cleanup with rmtree() on exit
  • Honor the -s / --suppress option on json, xunit, and summary modes
  • Use errx() for RI_PROGRAM_ERROR conditions in rpminspect(1)
  • Do not assume before_product and after_product exist
  • Improve product release detection for build comparisons
  • Restore product release matching for single build analysis
  • Handle build comparisons where product release is half known
  • Fix handling of the -s and -t command line options

Documentation changes:

  • Large set of Doxygen comments in header files
  • Add Doxygen comments for include/readelf.h
  • More Doxygen comment headers in include/

General bug fix in the library or frontend program:

  • Normalize the KABI path and do not warn on access(3) failures
  • Do not try to mmap() zero length files in read_file()
  • Do not use warn() if read_file() returns NULL in get_patch_stats()
  • Handle NULL result->msg in output_xunit()
  • Reset the tmp pointer on realloc() in strxmlescape()
  • On stat() failure in read_file_bytes(), just return NULL
  • Use CURLINFO_CONTENT_LENGTH_DOWNLOAD on older libcurl releases
  • Correct the reporting of kmod parameter differences
  • Do not report fallthrough changes as VERIFY in changedfiles
  • Stop resetting the patch_ignore_list when reading config files
  • Honor all per-inspection ignore lists; match path prefix
  • Remove temporary files in the changelog inspection
  • Carefully filter debug packages in gather_deprules_by_type()
  • Double free removed in match_fileinfo_mode()
  • read_file_bytes() must be restricted to S_ISREG() files
  • In rpmdeps, do not report new explicit Requires as VERIFY
  • Do not incorrectly report security-related files as new
  • Correctly handle addedfiles edge cases
  • Security path checking only applies to comparisons in addedfiles
  • Missing free() calls in the new list_remove() function
  • Use a long rather than int64_t for the patch number
  • Correct RPM dependency rule peering
  • Prevent double free() in the patches inspection
  • Correct handling of kmidiff(1) exit codes
  • Correctly check for forbidden directories in RPM payloads
  • Handle PatchN: lines in spec files with no space after :
  • Address some additional Patch and %patch line reading issues
  • strtrim() and strsplit() memory management fixes
  • Handle more auto deps in the kernel package correctly
  • Make sure INFO results in metadata do not fail rpminspect
  • Relax the types inspection a bit
  • Try FNM_LEADING_DIR matches when patterns end in wildcard
  • Correctly pick up the use of %autopatch or %autosetup
  • strcmp() -> !strcmp() in the patches inspection
  • Memory management fix for the changelog inspection
  • Remove temporary files in the changelog inspection
  • Do not fail runpath when comparing kernel builds
  • free before_output and after_output after using them
  • Do not fail dsodeps if ELF type is not ET_DYN
  • Tie the annocheck inspection result to reporting severity
  • Only report forbidden path additions as VERIFY in addedfiles
  • In removedfiles report VERIFY and BAD for security paths
  • Account for leading executables in Exec= (e.g., env VAR=VAL)
  • Output unified diff correctly in delta_out()
  • Simplify severity reporting in the changedfiles inspection
  • Add missing free(tmp) calls in the desktop inspection
  • Minimize total_width initialization for download progress bar
  • Fail if we cannot read RPMs before downloading
  • Adjust reporting severity in the permissions inspection
  • Prevent repetitive results reporting in types
  • Correct rpmdeps inspection reporting levels
  • Correct results reporting for the permissions inspection
  • Correct results reporting for the types inspection
  • Correct results reporting for the filesize inspection
  • Allow NULL inputs to strprefix() and strsuffix()
  • Get per-inspection ignore list working in upstream
  • Support per-file allowed lists for the badfuncs inspection
  • Use allowed_arch() in the arch and subpackages inspections
  • Remove unnecessary warning from failed chdir() call
  • Process per-inspection ignore blocks first in init.c

librpminspect feature or significant change:

  • Drop dependency on the external diffstat command
  • Remove init_elf_data() function
  • Verify enough local disk space exists before downloading
  • Check for enough disk space before unpacking RPMs
  • Add strexitcode() and RI_INSUFFICIENT_SPACE exit code
  • Display insufficient space messages in human readable sizes
  • Doxygen comment work but also add and use missing remedy strings
  • Update to uthash 2.3.0
  • Drop the file count and line count checks in patches
  • Default the filesize inspection size_threshold to INFO
  • Rename init_rpmpeer() and free_rpmpeer() functions
  • Restrict the annocheck and lto inspections to ELF files
  • Simplify the librpm initialization call
  • Make the rpmdeps handle expected config() autodeps correctly
  • Adjust how the rpmdeps inspections trims ISA substrings
  • Add list_remove() function to librpminspect
  • Expand the patches inspection to verify patches are applied
  • Change how debuginfo dirs are matched for files
  • Add strtrim() function to librpminspect
  • In strsplit(), skip empty string tokens
  • Replace rpmDefineMacro usage with rpmPushMacro
  • In diagnostics, display download and unpack space reqs
  • Make the kmod inspection report changes as INFO only
  • Remove unnecessary archive_read_open_filename() warning
  • Always output diagnostics results even if -s specified
  • Move ./rpminspect.yaml reading to init_rpminspect()
  • Support optional product release configuration files
  • Allow local rpminspect.yaml files to extend annocheck options
  • Use REG_EXTENDED in match_product()
  • In match_path(), honor common syntax of /path/to/dir/*
  • Add ints to the BLOCK_ enum in init.c

Test suite commits:

  • Adjust the addedfiles tests to handle new default size threshold
  • Disable all MultipleProvidersCompareRPMs test cases
  • Fix the MultipleProvidersCompareRPMs test cases
  • Correct the %autopatch and %autosetup test cases
  • Skip %autopatch and %autosetup tests on systems without lua
  • Update the test_addedfiles.py test cases
  • Verify automatic ELF Requires handle subpackage changes
  • Support optional rpminspect.yaml overrides per test
  • Use .update() rather than |= to merge dicts
  • export QA_RPATHS from the top level Makefile

See https://github.com/rpminspect/rpminspect/releases/tag/v1.10 for more information.

Where to get this new release?

Fedora, EPEL 7, and EPEL 8 users can get new builds from the testing updates collection. If you install from the testing update, please consider giving it a thumbs up in Bodhi. Without that, it takes a minimum of two weeks for it to appear in the stable repo.
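
For example, on Fedora the build can be pulled from updates-testing ahead of the stable push (a sketch; the package names rpminspect and rpminspect-data-fedora are assumed here):

$ sudo dnf --enablerepo=updates-testing install rpminspect rpminspect-data-fedora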

Untitled Post

Posted by Zach Oglesby on June 27, 2022 06:58 PM

I am starting to worry that if the Padres make it to the World Series this year it’s going to be a flashback to 1998 with a behemoth of a Yankees team.

Abstract wallpapers in Blender using geometry nodes

Posted by Máirín Duffy on June 27, 2022 04:36 PM

One project I am working on in hopes of having something ready in Fedora 37 final is a new set of installed-by-default but not set as default extra wallpapers for Fedora. These wallpapers would have light & dark mode versions. This is something that Allan Day and I have been planning, and we decided to start out with a set of 6 abstract wallpapers ideally built in a tool such as Blender so that we could easily generate and tweak & refine the light and dark versions in a way that photography (at least within our current resources) does not allow.

I set up a GitLab project for the effort, and that is here: https://gitlab.com/fedora/design/extra-default-wallpapers

Coming up with a theme

My initial thinking on this project is that the wallpapers should have some kind of Fedora-specific theme or narrative driving them, but one that is not tied to any specific release. After thinking a bit on this, I decided the best way forward was to just base the wallpapers on the Fedora Four F’s: freedom, friends, features, first. Conveniently, each of these has a “color code” as well as an icon to represent it, which could be used as seeds of inspiration for each wallpaper and/or in selecting which abstract concepts would be best suited to represent Fedora:

Screenshot of the linked documentation page describing each of the Fedora Four F's

I wrote this idea up on Fedora Discussions – each wallpaper will have a base color or highlight (depending on the color, some are quite a bit too bright for a wallpaper base color, lol) coordinating with one of Fedora’s brand palette colors: freedom blue, features orange, friends magenta, first green, as well as the Fedora purple that is used to signify Fedora events, and a neutral grey that is in the Fedora brand palette:

6 colored squares: dark gray, fedora blue, features orange, first green, events purple, friends magenta

 

Building a dynamic abstract structure in Blender using Geometry Nodes

So concepts are great but also useless if you can’t actually produce anything! 🙂 I decided I should get started in Blender. While I’ve taken a bit of Blender training in recent months, with the excitement of the Blender 3.x series coming out, I’m not quite adept with Blender. Creating abstract structures in it for wallpaper felt overwhelming. I had planned to watch jimmac’s streams (mentioned in the README – I had tracked the links down after Allan mentioned them) but I guess Twitch expires older records or something so by the time I’d carved out a block of time to work on this, they’d expired.

I went to YouTube and, despite it being created for Blender 2.8, found a nice abstract wallpaper tutorial by Bad Normals that taught some of the basics of working with geometry nodes in Blender, which ended up serving as the basis of my work thus far:

Embedded video: https://www.youtube.com/embed/WCogqNh2AUw

I had to adapt some of the instructions to Blender 3.x… there’s some hints in the comments, other things I had to figure out on my own. (You can see how I ultimately ended up configuring things in my posted *.blend files.)

This is a shot of the model this all created – Tweaking it can make the different “blades” of the model change size and shape and twist and turn in different ways which gives totally different vibes to the entire piece:

Screenshot of the Blender interface showing flattened rings that grow from small at the bottom to large at the top and repeat in rhythmic ways up the screen

The entire thing – this is in part how geometry node generated models work – is created from a single ring which is then just essentially cloned then scaled, turned, twisted, and re-positioned along a pattern to build up this large structure:

Screenshot of Blender's interface showing a single flattened ring model in the center of the screen

Playing with the model and coming up with visuals

What I ended up with after working through the tutorial was a model that, in a sense, is really a program or machine of sorts that can generate different abstract structures based on tweaking various variables / configuration of both the root object (see panel in the upper right in screenshot below) and the individual nodes that generate the copies of the root object (see individual node blocks in the node diagram at the bottom of the screenshot below.)

Screenshot of the Blender interface showing the model at the top and the various geometry nodes at the bottom, little boxes linked together with ilnes that each have little configuration / variables in them that will tweak the overall structure

This single model basically generated 11 different wallpaper designs that you might not be able to tell all came from the same basic model.

The earliest ones I came up with I would call the “Flower” series:

 

I played a lot with depth of field on these 🙂 After a while though I started really pulling the model apart and modifying it; this is some of the different visuals I came up with (you can see the whole set in GitLab):

Flight set:

Petals set:

Mermaid Set:

There’s a bunch more in the repo that you can view here: https://gitlab.com/fedora/design/extra-default-wallpapers/-/tree/main/Wallpapers

 

Feedback & next steps

Note that up until this point, I haven’t been too focused on color and the palette I developed, but rather focusing on building the model system and poking around with it to get different types of output and trying to relate that output to some concepts (e.g. coming up with names for different output sets 🙂 ).  The “Flight” series I think can relate pretty well to the “Freedom” Four F’s concept so I’ll likely be iterating those along that path, for example.

I would love your feedback on these (and the others in the repo), but note that the colors / lighting / etc. are all rough and not very thought-through in this round; it’s feedback on the shapes and composition that would be most helpful!

My next steps would be to see which of the sets best map to each of the Fedora concepts/themes, and start iterating those based on the Fedora concept, changing the coloring, lighting, etc. to fit the concept.

Generally: I know I have missed the beta packaging deadline so you might not see these in beta, but I am hoping to get a solid set of six into Fedora 37 soon, and perhaps host a test day to get feedback that could then drive more iterations and refinements. So keep your eyes peeled for that, and in the meantime, let me know what you think of what I’ve come up with so far. 🙂 I’ve posted all the *.blend files too so feel free to have a play if you’d like!

Mejor Sitio Web 2022

Posted by Kiwi TCMS on June 27, 2022 02:55 PM

Kiwi TCMS is happy to announce that we have been awarded a "Best Website 2022" award by Reviewbox.es, scoring 36/40 on their evaluation. The review criteria can be found at https://www.reviewbox.es/los-mejores-sitios-web/.

Dear Kiwitcms team,

Congratulations!

You have achieved what many others wish for:

You scored 36/40 points in our market research and therefore qualified (a minimum of 30 out of 40 points is required) for our Best Website 2022 award.

Research results and criteria

URL: Kiwitcms-Team

Points: 36/40

Our team is happy to accept this award, which comes exactly 2 years after we became an OpenAwards winner.

Thank you and Happy testing!


If you like what we're doing and how Kiwi TCMS supports various communities please help us!

See you in GUADEC!

Posted by Felipe Borges on June 27, 2022 09:40 AM

Hey there!

After two virtual conferences, GUADEC is finally getting back to its physical form. And there couldn’t be a better place for us to meet again than Mexico! If you haven’t registered yet, hurry up!

Looking forward to seeing you all!

Digitally signing PDF documents in Linux: with hardware token & Okular

Posted by Rajeesh K Nambiar on June 27, 2022 09:34 AM

We are living in 2022. And it is now possible to digitally sign a PDF document using libre software. This is a love letter to libre software projects, and also a manual.

For a long time, one of the challenges in using libre software in ‘enterprise’ environments or working with Government documents has been that one will eventually be forced to use proprietary software that isn’t even available for a libre platform like GNU/Linux. A notorious use-case is digitally signing PDF documents.

Recently, Poppler (the free software library for rendering PDF; used by Evince and Okular) and Okular in particular have gained a lot of improvements in displaying digital signatures and in actually signing a PDF document digitally (see this, this, this, this, this and this). When the main developer Albert asked for feedback on what important functionality the community would like to see incorporated as part of this effort, I asked if it would be possible to use hardware tokens for digital signatures. It turns out that poppler uses NSS (Network Security Services, a Mozilla project) for managing the certificates, and if the token is enrolled in the NSS database, Okular should be able to just use it.

This blog post written a couple of years ago about using hardware tokens in GNU/Linux is still actively referred to by many users. Trying to make the hardware token work with Okular gave me some more insights. With all the other prerequisites (token driver installation etc.) in place, follow these steps to get everything working nicely.

Howto

  1. There are 2 options to manage the NSS database: (i) manually set up $HOME/.pki/nssdb, or (ii) use the one automatically created by Firefox if you already use it. Assuming the latter, the nssdb would be located in the default profile directory $HOME/.mozilla/firefox/<random.dirname>/ (check for the existence of the file pkcs11.txt in that directory to be sure). If the token itself still needs to be enrolled in the database, see the modutil sketch after this list.
  2. Open Okular and go to Settings > Configure backend > PDF and choose/set the correct certificate database path, if not already set by default.

Fig. 1: Okular PDF certificate database configuration.

  3. Start the smart card service (usually auto-started, so you won’t normally have to do this): either pcsc_wd.service (for WatchData keys) or pcscd.service.
  4. Plug in the hardware token.
  5. Open a PDF in Okular. Add a digital signature using the menu Tools > Digitally Sign.
  6. This should prompt for the hardware token password.

Fig. 2: Digital token password prompt when adding digital sign in the PDF document.

  7. Click & drag a square area where you need to place the signature and choose the certificate. Note that, since Poppler 22.03, it is also possible to insert the signature in a designated field.

Fig. 3: Add digital signature by drawing a rectangle.

  8. The signature will be placed in a new PDF file (with suffix -signed) and it will open automatically.

Fig. 4: Digitally signed document.

  9. You can also see the details of the hardware token in the PDF backend settings.

Fig. 5: Signature present in hardware token visible on the PDF backend settings.
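
If the token is not yet visible in the chosen certificate database, it usually has to be enrolled as a PKCS#11 module first. A minimal sketch using NSS’s modutil, assuming the user database from option (i) and a vendor-provided PKCS#11 library (the module name and library path below are placeholders; use whatever your token’s driver ships):

$ modutil -dbdir sql:$HOME/.pki/nssdb -add "Hardware token" -libfile /usr/lib64/libwdpkcs.so
$ modutil -dbdir sql:$HOME/.pki/nssdb -list

If you use Firefox’s database instead, point -dbdir at that profile directory, or enroll the token from Firefox’s own security device settings.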

Thanks to the free software projects & developers who made this possible.

Accessibility in Fedora Workstation

Posted by Fedora Magazine on June 27, 2022 08:00 AM

The first concerted effort to support accessibility under Linux was undertaken by Sun Microsystems when they decided to use GNOME for Solaris. Sun put together a team focused on building the pieces to make GNOME 2 fully accessible and worked with hardware makers to make sure things like Braille devices worked well. I even heard claims that GNOME and Linux had the best accessibility of any operating system for a while due to this effort. As Sun started struggling and got acquired by Oracle, this accessibility effort eventually trailed off, with the community trying to pick up the slack afterwards. Engineers from Igalia, especially, were quite active for a while trying to keep the accessibility support working well.

But over the years we definitely lost a bit of focus on this, and we know that various parts of GNOME 3, for instance, aren’t great in terms of accessibility. At Red Hat we have put a lot of focus over the last few years on being mindful about diversity and inclusion when hiring, trying to ensure that we don’t accidentally pre-select against underrepresented groups based on, for instance, gender or ethnicity. But one area we realized we hadn’t given so much focus recently was the technologies that allow people with various disabilities to make use of our software. Thus I am very happy to announce that Red Hat has just hired Lukas Tyrychtr, who is a blind software engineer, to lead our effort in making sure Red Hat Enterprise Linux and Fedora Workstation have excellent accessibility support!

Anyone who has ever worked for a large company knows that getting funding for new initiatives is often hard and can take a lot of time, but I want to highlight how I was extremely positively surprised at how quick and easy it was to get support for hiring Lukas to work on accessibility. When Jiri Eischmann and I sent the request to my manager, Stef Walter, he agreed to champion it the same day, and when we then sent it up to Mike McGrath, who is the Vice President of Linux Engineering, he immediately responded that he would bring this to Tim Cramer, who is our Senior Vice President of Software Engineering. Within a few days we had the go-ahead to hire Lukas. The fact that everyone just instantly agreed that accessibility is important and something we as a company should do made me incredibly proud to be a Red Hatter.

What we hope to get from this is not only a better experience for our users, but also to allow even more talented engineers like Lukas to work on Linux and open source software at Red Hat. I thought it would be a good idea here to do a quick interview with Lukas Tyrychtr about the state of accessibility under Linux and what his focus will be.

Christian: Hi Lukas, first of all welcome as a full time engineer to the team! Can you tell us a little about yourself?

Lukas: Hi, Christian. For sure. I am a completely blind person who can see some light, but that’s basically it. I started to be interested in computers around 2009 or so, around my 15th or 16th birthday. First, because of circumstances, I started tinkering with Windows, but Linux came shortly after, mainly because of some pretty good friends. Then, after four years the university came and the Linux knowledge paid off, because going through all the theoretical and practical Linux courses there was pretty straightforward (yes, there was no GUI involved, so it was pretty okay, including some custom kernel configuration tinkering). During that time, I was contacted by Red Hat associates whether I’d be willing to help with some accessibility related presentation at our faculty, and that’s how the collaboration began. And, yes, the hire is its current end, but that’s actually, I hope, only the beginning of a long and productive journey.

Christian: So as a blind person you have first hand experience with the state of accessibility support under Linux. What can you tell us about what works and what doesn’t work?

Lukas: Generally, things are in pretty good shape. Braille support on text-only consoles basically just always works (except for some SELinux related issues which cropped up). Having speech there is somewhat more challenging, the needed kernel module (Speakup for the curious among the readers) is not included by all distributions, unfortunately it is not included by Fedora, for example, but Arch Linux has it. When we look at the desktop state of affairs, there is basically only a single screen reader (an application which reads the screen content), called Orca, which might not be the best position in terms of competition, but on the other hand, stealing Orca developers would not be good either. Generally, the desktop is usable, at least with GTK, Qt and major web browsers and all recent Electron based applications. Yes, accessibility support receives much less testing than I would like, so for example, a segmentation fault with a running screen reader can still unfortunately slip through a GTK release. But, generally, the foundation works well enough. Having more and naturally sounding voices for speech synthesis might help attract more blind users, but convincing all the players is no easy work. And then there’s the issue of developer awareness. Yes, everything is in some guidelines like the GNOME ones, however I saw much more often than I’d like to for example a button without any accessibility labels, so I’d like to help all the developers to fix their apps so accessibility regressions don’t get to the users, but this will have to improve slowly, I guess.

Christian: So you mention Orca, are there other applications being widely used providing accessibility?

Lukas: Honestly, only a few. There’s Speakup – a kernel module which can read text consoles using speech synthesis, i.e. a screen reader for these. However, without something like Espeakup (an Espeak to Speakup bridge) it is basically useless, as by default it only supports hardware synthesizers, and that kind of hardware is basically a thing of the past; I have never seen one myself. Then there’s BRLTTY. This piece of software provides braille output for text consoles and an API for applications which want to output braille, so the drivers can be implemented only once. And that’s basically it, except for some efforts to create an Orca alternative in Rust, but that’s a really long way off. Of course, utilities for other accessibility needs exist as well, but I don’t know much about these.

Christian: What is your current focus for things you want to work on both yourself and with the larger team to address?

Lukas: For now, my focus is to go through the applications which were ported to GTK 4 as a part of the GNOME development cycle and ensure that they work well. It includes adding a lot of missing labels, but in some cases, it will involve bigger changes, for example, GNOME Calendar seems to need much more work. During all that, educating developers should not be forgotten either. With these things out of the way, making sure that no regressions slip to the applications should be addressed by extending the quality assurance and automated continuous integration checks, but that’s a more distant goal.

Christian: Thank you so much for talking with us Lukas, if there are other people interested in helping out with accessibility in Fedora Workstation what is the best place to reach you?

Lukas: Actually, for now the easiest way to reach me is by email at ltyrycht@redhat.com. I’d be happy to talk to anyone wanting to help with making Workstation great for accessibility.

Episode 329 – Signing (What is it good for)

Posted by Josh Bressers on June 27, 2022 12:01 AM

Josh and Kurt talk about what the actual purpose of signing artifacts is. This is one of those spaces where the chain of custody for signing content is a lot more complicated than it sometimes seems to be. Is delivering software over https just as good as using a detached signature? How did we end up here, what do we think the future looks like? This episode will have something for everyone to complain about!

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_329_Signing_What_is_it_good_for.mp3

Show Notes

Migrating Rails cookies to the new JSON serializer

Posted by Josef Strzibny on June 26, 2022 12:00 AM

How to move from Marshal to the new Rails 7 default JSON serializer.

I was recently upgrading Phrase to Rails 7. Big upgrades like that are usually done with the most minimal changes, and this one was no exception. However, every major and minor version of Rails brings some new defaults that can accumulate over time, leaving you with some debt to pay.

Today I want to talk about the JSON serializer for cookies that became the default starting with Rails 7.0.

Marshal vs JSON

So what’s up with the new JSON serializer? The previous cookie serializer conveniently used Marshal to convert any Ruby object into a byte stream and allowed us to store pretty much anything as long as it fit into 4096 bytes (the browser’s limit). This same flexibility is also why marshaling is not considered secure, and it is the reason behind the Rails 7 change to save cookies as JSON.

The main problem with this change is that your current users have a bunch of cookies already saved in their browsers, so if you changed the serializer without changing your cookies, you could find yourself with a broken application.

The plan

The plan to change the cookies to JSON consists of:

  • Upgrading to Rails 7 while keeping the old Marshal serializer
  • Making a list of cookies and their data types
  • Converting data types when reading the cookies
  • Moving to the :hybrid serializer
  • Changing the way we save cookies
  • Moving from :hybrid to :json

Upgrading to Rails 7

Rails has us covered when it comes to upgrading to version 7. After running rails app:update we’ll get a new initializer with all new Rails 7 defaults in one place. We are interested in the following part:

# config/initializers/new_framework_defaults_7_0.rb
...
# If you're upgrading and haven't set `cookies_serializer` previously, your cookie serializer
# was `:marshal`. Convert all cookies to JSON, using the `:hybrid` formatter.
#
# If you're confident all your cookies are JSON formatted, you can switch to the `:json` formatter.
#
# Continue to use `:marshal` for backward compatibility with old cookies.
#
# If you have configured the serializer elsewhere, you can remove this.
#
# See https://guides.rubyonrails.org/action_controller_overview.html#cookies for more information.
# Rails.application.config.action_dispatch.cookies_serializer = :hybrid

Because we don’t know what would happen to our code getting JSON cookies, we need to start with the old default. We set our serializer to :marshal and continue with the upgrade:

Rails.application.config.action_dispatch.cookies_serializer = :marshal

This allows us to upgrade Rails and safely revert the upgrade if needed. Once on Rails 7, we can attempt to fix our cookies.

Cookies list

Since every cookie can be saved and read differently, we need to identify all cookies (including signed and session cookies) in the application. This can start with a simple search for cookies and session, but let’s not forget that our dependencies might use cookies too. You should also check the cookie storage in your browser’s dev tools. Make a list and group the cookies by the data type you are saving. Your marketing and legal departments might also be interested in this.

The list can look like the following:


# Cookies

## Strings

- cookies[:language]

## Integers

- session[:oauth_access_token_id]

Converting data types

The main problem when upgrading to JSON is that cookies will end up with different data types. So we should prepare a list of the data types we found and ensure we understand how the value will change.

If you are unsure what will happen, set a cookie in your controller and read it back using the new default:

# in config
Rails.application.config.action_dispatch.cookies_serializer = :json

# in a controller
cookies[:test] = { key: "value" }
puts cookies[:test].inspect

I’ll note some main data types and how we have to change the code reading the values from these cookies.

  • Strings

    Strings in Marshal will be strings in JSON. Change is not required.

  • Symbols

    Symbols won’t stay symbols but will be converted to strings. The fix is to call to_sym on the value from the cookie.

  • Booleans

    Booleans won’t stay booleans but will be returned as strings. The fix is to cast the returned values using ActiveModel::Type::Boolean.new.cast.

  • Integers

    Integers won’t stay integers but will be returned as strings. The fix is to call .to_i on returned values.

  • Hashes

    Hashes will stay hashes, but symbol keys will end up as strings. The fix is to convert such a cookie to a HashWithIndifferentAccess. Note that hashes can contain values that might need changes as well.

  • Ruby objects

    The last category is all other Ruby objects, which won’t stay the same. For example, an ActiveSupport::Duration would be returned as a string in seconds.

In the end, things will be JSON, so if you need a more complicated structure, convert it to JSON, and then you can pass it to the cookie.
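
To make the conversions above concrete, here is a rough sketch of what reading the values can look like in a controller after the switch (the language and oauth_access_token_id cookies are the ones from the list earlier; the other names are made up for illustration):

# strings stay strings
locale = cookies[:language]

# symbols come back as strings
sort_order = cookies[:sort_order]&.to_sym

# booleans come back as strings
dark_mode = ActiveModel::Type::Boolean.new.cast(cookies[:dark_mode])

# integers come back as strings
token_id = session[:oauth_access_token_id].to_i

# hashes keep their values but lose symbol keys
prefs = (session[:prefs] || {}).with_indifferent_access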

Moving forward

Once we update how we read our cookies, we can switch to the :hybrid mode and deploy our change.

Rails.application.config.action_dispatch.cookies_serializer = :hybrid

The main work is done. Give yourself a break!

Saving cookies

To properly finish up, we should go back to every cookie and make sure we are already saving it with the expected data type. This will improve clarity. So instead of storing an integer, we can call .to_s and pass a string explicitly.
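
For example (a small sketch; access_token is a hypothetical object standing in for whatever produces the value):

# store the id explicitly as a string so reads never need a cast
session[:oauth_access_token_id] = access_token.id.to_s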

Leaving hybrid mode

After a reasonable time in production, we can now remove the :hybrid option from the configuration (you can delete the whole setting). You have made it!

XPath for libvirt external snapshot path

Posted by Adam Young on June 24, 2022 07:20 PM

The following xmllint XPath query will pull out the name of the backing file for a VM named fedora-server-36 and an external snapshot named fedora-server-36-post-install:

virsh snapshot-dumpxml fedora-server-36 fedora-server-36-post-install | xmllint --xpath "string(//domainsnapshot/disks/disk[@snapshot='external']/source/@file)" -

The string function extracts the attribute value.

This value can be used in the process of using or deleting the snapshot.
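
For example, you can capture the value in a shell variable and inspect the backing file before touching the snapshot (a small sketch using the same VM and snapshot names as above):

backing=$(virsh snapshot-dumpxml fedora-server-36 fedora-server-36-post-install | \
  xmllint --xpath "string(//domainsnapshot/disks/disk[@snapshot='external']/source/@file)" -)
qemu-img info "$backing"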

Friday’s Fedora Facts: 2022-25

Posted by Fedora Community Blog on June 24, 2022 04:15 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

Announcements

CfPs

<figure class="wp-block-table">
Conference | Location | Date | CfP
PyCon SK | Bratislava, SK | 9–11 Sep | closes 30 Jun
SREcon22 EMEA | Amsterdam, NL | 25–27 Oct | closes 30 Jun
Write the Docs Prague | virtual | 11–13 Sep | closes 30 Jun
React India | Goa, IN & virtual | 22–24 Sep | closes 30 Jun
NodeConf EU | Kilkenny, IE & virtual | 3–5 Oct | closes 6 Jul
Nest With Fedora | virtual | 4–6 Aug | closes 8 Jul
CentOS Dojo at DevConf US | Boston, MA, US | 17 Aug | closes 22 Jul
</figure>

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

<figure class="wp-block-table">
Bug ID | Component | Status
2079833 | cmake | NEW
</figure>

Meetings & events

Fedora Hatches

Hatches are local, in-person events to augment Nest With Fedora. Here are the upcoming Hatch events.

<figure class="wp-block-table">
Date | Location
24–25 Jun | Vigo, ES
25 Jun | Stockholm, SE
6–8 Jul | Pune, IN
13–14 Jul | Cork, IE
21 Jul | Rochester, NY, US
28 Jul | Mexico City, MX
11 Aug | Brno, CZ
</figure>

As a reminder, the Nest With Fedora CfP is open. We’ll see you online for Nest With Fedora 4–6 August.

Releases

<figure class="wp-block-table">
Release | open bugs
F35 | 4218
F36 | 2961
Rawhide | 7416
</figure>

Fedora Linux 37

Schedule

Below are some upcoming schedule dates. See the schedule website for the full schedule.

  • 2022-06-28 — System-Wide Changes, Changes requiring mass rebuild due
  • 2022-07-19 — Self-Contained Changes due
  • 2022-07-20 — Mass rebuild begins

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

<figure class="wp-block-table">
Proposal | Type | Status
Fallback Hostname | System-Wide | FESCo #2804
RPM Macros for Build Flags | System-Wide | Approved
LLVM 15 | Self-Contained | Approved
Supplement of Server distributables by a KVM VM disk image | Self-Contained | Approved
Erlang 25 | Self-Contained | Announced
Add -fno-omit-frame-pointer to default compilation flags | System-Wide | FESCo #2817
Gettext Runtime Subpackage | System-Wide | FESCo #2818
Golang 1.19 | System-Wide | FESCo #2819
Deprecate openssl1.1 package | System-Wide | Announced
Stratis 3.1.0 | Self-Contained | Announced
</figure>

Fedora Linux 40

Changes

The table below lists proposed Changes.

<figure class="wp-block-table">
Proposal | Type | Status
Retire python3.7 | Self-Contained | FESCo #2816
</figure>

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-25 appeared first on Fedora Community Blog.

CPE Weekly Update – Week 25 2022

Posted by Fedora Community Blog on June 24, 2022 10:00 AM

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat (https://libera.chat/).

Week: 20th – 24th June 2022

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Link to planning board: https://zlopez.fedorapeople.org/I&R-2022-06-22.pdf
Link to docs: https://docs.fedoraproject.org/en-US/infra/

Update

Fedora Infra

  • Most apps have moved over to the OpenShift4 cluster. Hopefully, the transition should be finishing up this week.
  • Wiki: All upgraded in production and working (thanks Ryan!)
  • Resultsdb: All moved over to OpenShift 4 in prod and working (thanks Leo!)
  • Business proceeding as usual

CentOS Infra including CentOS CI

  • Kerberos settings switch for git.centos.org (kcm on el8 vs keyring on el7) for lookaside upload cgi
  • Issue on iad2 hosted reference mirror (epel.next and mirrormanager), all fixed now
  • Duffy CI ongoing tasks and deployments (all announced)
  • Equinix nodes migration (on their request)
  • Business proceeding as usual

Release Engineering

  • Compose-tracker updated to f36 in staging, production happening tomorrow
  • Python 3.11 merged to rawhide
  • MBS randomly fails to process builds
  • Rawhide compose failures recently (syslinux retirement, then python 3.11 merge) all fixed now
  • Business proceeding as usual

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • CentOS Stream 8: Manually keeping regular RPMs and module RPMs updated on the koji.stream server as current updates are composed and released.

CentOS Duffy CI

Goal of this Initiative

Duffy is a system within CentOS CI infrastructure allowing tenants to provision and access machines (physical and/or virtual, of different architectures and configurations) for the purposes of CI testing. Development of Duffy is largely finished, we’re currently planning and testing deployment scenarios.

Updates

  • Release version 3.2.1
  • Docs, docs, docs and a Dojo

Package Automation (Packit Service)

Goal of this initiative

Automate RPM packaging of infra apps/packages

Updates

  • Mostly business as usual
  • Thanks again to all who are reviewing our PRs
  • Most of our GitHub critical apps are enabled now or close to being enabled

Flask-oidc: oauth2client replacement

Goal of this initiative

Flask-oidc is a library used across the Fedora infrastructure and is the client for ipsilon for its authentication. flask-oidc uses oauth2client. This library is now deprecated and no longer maintained. This will need to be replaced with authlib.

Updates:

  • POC working using authlib, tidying up code to prepare to submit a PR back to upstream

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high-quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including a build system, Bugzilla instance, updates manager, mirror manager and more.

Updates

  • This week we have 6442 (+127)  packages, from 2882 (+76) source packages
  • Containerd and puppet retired from EPEL7 because of upstream EOL and multiple CVEs.
  • Caddy was updated, fixing 4 CVEs in EPEL9

Kindest regards,
CPE Team

The post CPE Weekly Update – Week 25 2022 appeared first on Fedora Community Blog.

Fedora Job Opening: Community Action and Impact Coordinator (FCAIC)

Posted by Fedora Community Blog on June 23, 2022 09:30 PM

It is bittersweet to announce that I have decided to move on from my role as the Fedora Community Action and Impact Coordinator (FCAIC). For me, this role has been full of growth, unexpected challenges, and so much joy. It has been a privilege to help guide our wonderful community through challenges of the last three years. I’m excited to see what the next FCAIC can do for Fedora. If you’re interested in applying, see the FCAIC job posting on Red Hat Jobs and read more about the role below. 

Adapting to Uncertain Times

When I applied back in 2019, a big part of the job description was to travel the globe, connecting with and supporting Fedora communities worldwide. As we all know, that wasn’t possible with the onset of COVID-19 and everything that comes with a pandemic. 

Instead, I learned how to create virtual experiences for Fedora, connect with people solely in a virtual environment, and support contributors from afar. Virtual events have been a HUGE success for Fedora. The community has shown up for those events in such a wonderful way. We have almost tripled our participation in our virtual events since the first Release Party in 2020. We have more than doubled the number of respondents to the Annual Contributor Survey over last year’s turnout. I am proud of the work I have accomplished and even more so how much the community has grown and adapted to a very challenging couple of years.

What’s next for me

As some of you may know, I picked up the Code of Conduct (CoC) work that my predecessor Brian Exelbierd (Bex) started for Fedora. After the Fedora Council approved the new CoC, I then got started on additional pieces of related work: Supplemental Documentation and Moderation Guidelines. I am also working on expanding the small Code of Conduct Committee(CoCC) to include more community members. As a part of the current CoCC, I have helped to deal with the majority of the incidents throughout my time as FCAIC. 

Because of my experience with all this CoC work, I will be moving into a new role inside of Red Hat’s OSPO: Code of Conduct Specialist. I will be assisting other Community Architects (like the FCAIC role) to help roll out CoC’s and governance around them, as well as collaborating with other communities to develop a Community of Practice around this work. I am excited and determined to take on this new challenge and very proud to be a part of an organization that values work that prioritizes safety and inclusion. 

What’s next for Fedora

This is an amazing opportunity for the Fedora community to grow in new and exciting ways. Every FCAIC brings their own approach to this role as well as their own ideas, strengths, and energy. I will be working with Matthew Miller, Ben Cotton, and Red Hat to help hire and onboard the new Fedora Community Action and Impact Coordinator. I will continue as FCAIC until we hire someone new, and will help transition them into the role. Additionally, I will offer support, advice, and guidance as others who have moved on have done for me. I am eager to see who comes next and how I can help them become a success. And, as I have for years prior to my tenure as FCAIC, I will continue to participate in the community, albeit in different ways. 

This means we are looking for a new FCAIC! Do you love Fedora? Do you want to help support and grow the community full time? This is the core of what the FCAIC does. The job description has a list of the primary job responsibilities and required skills- but that is just a taste of what is required and what it is to support the Fedora community full time. Day-to-day work includes working with the Mindshare Committee, managing the Fedora budget, and being a part of many other teams and in many places. You should be ready and excited to write about Fedora’s achievements, policies, as well as generate strategies to help the community succeed. And, of course, there is event planning and support (Flock, Nest, Hatch, Release Parties, etc). It can be tough work, but it is a lot of fun and wonderfully rewarding to help Fedora thrive. 

How to apply

Do you enjoy working with people all over the world, with a variety of skills and interests? Are you good at setting long term goals and seeing them through to completion? Can you set priorities, follow through, and know when to say “no” in order to focus on the most important tasks for success? Are you excited about building not only a successful Linux distribution, but also a healthy project? Is Fedora’s mission deeply important to you? If you said “yes” to these questions, you might be a great candidate for the FCAIC role. If you think you’re a great fit, please apply online, or contact Marie Nordin, or Jason Brooks.

The post Fedora Job Opening: Community Action and Impact Coordinator (FCAIC) appeared first on Fedora Community Blog.

Fedora Job Opening: Community Action and Impact Coordinator (FCAIC)

Posted by Fedora Magazine on June 23, 2022 09:15 PM

It is bittersweet to announce that I have decided to move on from my role as the Fedora Community Action and Impact Coordinator (FCAIC). For me, this role has been full of growth, unexpected challenges, and so much joy. It has been a privilege to help guide our wonderful community through challenges of the last three years. I’m excited to see what the next FCAIC can do for Fedora. If you’re interested in applying, see the FCAIC job posting on Red Hat Jobs and read more about the role below. 

Adapting to Uncertain Times

When I applied back in 2019, a big part of the job description was to travel the globe, connecting with and supporting Fedora communities worldwide. As we all know, that wasn’t possible with the onset of COVID-19 and everything that comes with a pandemic. 

Instead, I learned how to create virtual experiences for Fedora, connect with people solely in a virtual environment, and support contributors from afar. Virtual events have been a HUGE success for Fedora. The community has shown up for those events in such a wonderful way. We have almost tripled our participation in our virtual events since the first Release Party in 2020. We have more than doubled the number of respondents to the Annual Contributor Survey over last year’s turnout. I am proud of the work I have accomplished and even more so how much the community has grown and adapted to a very challenging couple of years.

What’s next for me

As some of you may know, I picked up the Code of Conduct (CoC) work that my predecessor Brian Exelbierd (Bex) started for Fedora. After the Fedora Council approved the new CoC, I then got started on additional pieces of related work: Supplemental Documentation and Moderation Guidelines. I am also working on expanding the small Code of Conduct Committee(CoCC) to include more community members. As a part of the current CoCC, I have helped to deal with the majority of the incidents throughout my time as FCAIC. 

Because of my experience with all this CoC work, I will be moving into a new role inside of Red Hat’s OSPO: Code of Conduct Specialist. I will be assisting other Community Architects (like the FCAIC role) to help roll out CoC’s and governance around them, as well as collaborating with other communities to develop a Community of Practice around this work. I am excited and determined to take on this new challenge and very proud to be a part of an organization that values work that prioritizes safety and inclusion. 

What’s next for Fedora

This is an amazing opportunity for the Fedora community to grow in new and exciting ways. Every FCAIC brings their own approach to this role as well as their own ideas, strengths, and energy. I will be working with Matthew Miller, Ben Cotton, and Red Hat to help hire and onboard the new Fedora Community Action and Impact Coordinator. I will continue as FCAIC until we hire someone new, and will help transition them into the role. Additionally, I will offer support, advice, and guidance as others who have moved on have done for me. I am eager to see who comes next and how I can help them become a success. And, as I have for years prior to my tenure as FCAIC, I will continue to participate in the community, albeit in different ways. 

This means we are looking for a new FCAIC! Do you love Fedora? Do you want to help support and grow the community full time? This is the core of what the FCAIC does. The job description has a list of the primary job responsibilities and required skills- but that is just a taste of what is required and what it is to support the Fedora community full time. Day-to-day work includes working with the Mindshare Committee, managing the Fedora budget, and being a part of many other teams and in many places. You should be ready and excited to write about Fedora’s achievements, policies, as well as generate strategies to help the community succeed. And, of course, there is event planning and support (Flock, Nest, Hatch, Release Parties, etc). It can be tough work, but it is a lot of fun and wonderfully rewarding to help Fedora thrive. 

How to apply

Do you enjoy working with people all over the world, with a variety of skills and interests? Are you good at setting long term goals and seeing them through to completion? Can you set priorities, follow through, and know when to say “no” in order to focus on the most important tasks for success? Are you excited about building not only a successful Linux distribution, but also a healthy project? Is Fedora’s mission deeply important to you? If you said “yes” to these questions, you might be a great candidate for the FCAIC role. If you think you’re a great fit, please apply online, or contact Marie Nordin, or Jason Brooks.

Copy in for-each loops in C++

Posted by Adam Young on June 23, 2022 05:03 PM

I had a bug in my OpenGL program. Here was the original code:

  for (Orbitor o : orbitors){
    o.calculate_position();
  }

and here was the working version

  for (std::vector<Orbitor>::iterator it = orbitors.begin() ;
       it != orbitors.end();
       ++it)
    {
      it->calculate_position();
    }

The bug was that the o.calculate_position(); call was supposed to update the internal state of the Orbitor structure, but was called on a copy of the instance in the original structure, and not on the original structure itself. Thus, when a later call tried to show the position, it was working with the version that had not updated the position first, and thus was showing the orbitors in the wrong position.

The reason for this bug makes sense to me: the first version of the loop makes a copy of the instance to use inside the for loop. The second uses a pointer to the original object.
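
Another fix, closer to the original loop, is to iterate by reference so that the loop variable refers to the element stored in the vector instead of a copy (a sketch reusing the names from the snippets above):

  for (Orbitor &o : orbitors) {
    o.calculate_position();   // updates the object stored in the vector, not a copy
  }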

One way that I can protect against these kinds of bugs is to create a private copy constructor, so that the objects cannot be accidentally copied.
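
In modern C++ that protection is usually spelled with = delete rather than a private declaration. A minimal sketch of what it could look like for this class (the member list is illustrative, not the real Orbitor):

class Orbitor {
public:
  Orbitor() = default;
  Orbitor(const Orbitor &) = delete;             // accidental copies now fail to compile
  Orbitor &operator=(const Orbitor &) = delete;
  void calculate_position();
};

Note that with copying disabled, std::vector<Orbitor> can no longer copy or reallocate its elements, so you would either add explicit move operations or keep the orbitors behind (smart) pointers.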

The Freedom of Internet at OSCAL 2022

Posted by Julita Inca Chiroque on June 23, 2022 03:39 PM

How old is the Internet? Are we aware of the technologies that are behind this concept? What does it mean to be secure while accessing the Internet? How many antivirus programs have been developed to protect GNU/Linux systems? These were some of the questions I had in mind when I decided to attend OSCAL 2022. To my surprise, this event exceeded my expectations. I discovered really interesting topics and workshops, young tech developers, experienced speakers, hardworking organizers, and a very enthusiastic FLOSS community in Tirana, Albania.

<figure class="wp-block-image size-large"></figure>

Going to the event

My journey started on Platform 5 in Reading, from where I got a train to London Gatwick airport. It was a sunny Friday morning in the UK, and it seemed that many people had taken the day off because the train was completely full! Laughs all around were part of my journey: smartly dressed people, ladies in glamorous dresses and hats, presumably heading for the Ascot races.

<figure class="wp-block-image size-large"></figure>

A direct flight of about 3 hours was on time and, ironically, when I was trying to connect to the Internet at the airport, I had to fill in an online form which not only asked for personal information such as my full name, email, and phone number but also wanted to know my citizenship status. Why do they need to collect this information?

The distance from the Tirana airport to the centre of Tirana was about 30 – 40 minutes by bus or taxi. From there, I was able to walk to almost everywhere as the centre is quite compact. The city was full of activities for youngsters, over the weekend there were dance performances and concerts every night.

<figure class="wp-block-image size-large"></figure>

The event

Two days of talks, workshops and exhibitions were held on the second floor of the Tirana European Youth Capital HQ. A lovely patisserie was conveniently located on the first floor, so we could enjoy our food in the same building.

<figure class="wp-block-image size-large"></figure>

On the first night I was able to attend the PreParty at the Hackerspace. I could see how the community decorated its space. It caught my attention how they framed the posters of previous events, and I really felt it was a very connected and organized team. I was so glad to finally meet face-to-face the young women who run the Hackerlab in Albania. Some of them are FLOSS developers, security administrators, marketers and promoters in Albania.

<figure class="wp-block-image size-large"></figure>

Talks 

You can see the detailed list of talks here. I listened to the presentations made by OpenSUSE, LibreOffice, Wikipedia and the Fedora community. Pictured below are the talks by Johannes and Mike. Johannes did 5 live demos to create a secure setup for developers. Mike shared the ongoing efforts of the LibreOffice community.

<figure class="wp-block-image size-large"></figure>

Censorship of Wikipedia was a controversial talk that kept us very concentrated for more than 25 minutes. Good job Lars! There was also a panel to talk about the FLOSS situation over the past 20 years and future opportunities.

<figure class="wp-block-image size-large"></figure>

Other talks were focused on the laws and regulations regarding the data we publish and the opinions we express online.

Workshops

I attended two workshops that were so full that people were even standing. One by Endri: Digital Forensics and Incident Response, and the other about TOR by Klesti Fetiu.

<figure class="wp-block-image size-large"></figure>

Exhibitor stands 

One of the projects that caught my eye was CARBONIO, which is an alternative to Office365. I tested the software and it was able to do calls, mark calendars, and receive and send emails. One novelty of this software was the ability to open and edit LibreOffice files from emails in real time.

<figure class="wp-block-image size-large"></figure>

I definitely recommend this free software. It was fully functional and it provides a fresh UI.

Fedora 

It has been almost ten years since I joined Fedora as an Ambassador in July 2012. It was so nice to meet Mariana Balla from Fedora Albania in person. It was also a pleasure to meet Nefie Shehu, who is a very active FLOSS leader in Albania. It was nice to chat with the OpenSUSE folks as well.

<figure class="wp-block-image size-large"></figure>

My presentation

I started my presentation by paying my respects to Marina Zhurakhinskaya, who did so much during her lifetime to promote the participation of women in Linux around the world. To see my presentation, please click here.

<figure class="wp-block-image size-large"></figure>

Curious facts

  • We had a workshop on how to unlock a lock.
  • The traffic lights are duplicated with long strip-lights.
  • The elevators have touch-screen controls.
  • I saw many more women than men in the city.
  • During my flight, for the first time I noticed an aircraft going in the opposite direction; the huge airliner was out of sight incredibly fast.
<figure class="wp-block-image size-large"></figure>

Thank you Fedora for sponsoring my traveling to Tirana, Albania! I enjoyed getting around the city!

<figure class="wp-block-image size-large"></figure>

PHP version 8.0.21RC1 and 8.1.8RC1

Posted by Remi Collet on June 23, 2022 02:52 PM

Release Candidate versions are available in testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation, perfect solution for such tests, and also as base packages.

RPM of PHP version 8.1.8RC1 are available

  • as SCL in remi-test repository
  • as base packages
    • in the remi-php81-test repository for Enterprise Linux 7
    • in the remi-modular-test for Fedora 34-36 and Enterprise Linux ≥ 8

RPM of PHP version 8.0.21RC1 are available

  • as SCL in remi-test repository
  • as base packages
    • in the remi-php80-test repository for Enterprise Linux 7
    • in the remi-modular-test for Fedora 34-36 and Enterprise Linux ≥ 8

 

PHP version 7.4 is now in security mode only, so no more RCs will be released; this is also the last one for 7.4.

Installation: follow the wizard instructions.

Parallel installation of version 8.1 as Software Collection:

yum --enablerepo=remi-test install php81

Parallel installation of version 8.0 as Software Collection:

yum --enablerepo=remi-test install php80

Update of system version 8.1 (EL-7) :

yum --enablerepo=remi-php81,remi-php81-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.1
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.0 (EL-7) :

yum --enablerepo=remi-php80,remi-php80-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf --enablerepo=remi-modular-test update php\*

Notice: version 8.1.8RC1 is also in Fedora rawhide for QA.

EL-9 packages are built using RHEL-9.0

EL-8 packages are built using RHEL-8.6

EL-7 packages are built using RHEL-7.9

RC version is usually the same as the final version (no change accepted after RC, exception for security fix).

Version 8.2.0alpha2 is also available

Software Collections (php80, php81)

Base packages (php)

Upgrade of Copr servers

Posted by Fedora Infrastructure Status on June 23, 2022 01:00 PM

We're updating copr packages to the new versions which will bring new features and bugfixes.

This outage impacts the copr-frontend and the copr-backend.

Release 6.0.1

Posted by Bodhi on June 23, 2022 12:51 PM

Released on 2022-06-23. This is a bugfix release.

Dependency changes

  • Remove the dependency on WhiteNoise since the documentation has moved to Github (#4555).
  • Updated bundled chartJS component to 3.8.0 (#4561).

Features

  • Allow disabling autokarma, autotime and close-bugs when editing an update by CLI (#4564).

Bug fixes

  • Fix a small template issue about the karma thumbs display (#4562).
  • Autokarma, autotime and close-bugs automatisms may have been accidentally overridden when editing updates by CLI (#4563).
  • In very peculiar circumstances, side-tag Rawhide updates may remain stuck if a user posts a negative karma or tries to set a request before Bodhi automatically pushes the update to stable (#4566).
  • Don't crash when Ipsilon has no userinfo (#4569).

Contributors

The following developers contributed to this release of Bodhi:

  • Aurélien Bompard
  • Mattia Verga

The beauty of equations in Physics

Posted by Siddhesh Poyarekar on June 23, 2022 01:18 AM

I have been stereotyped by many I know (and understandably so) as a quintessential computer geek, probably someone who dabbled in computers since his childhood and is his first love. That is however far from the truth because I first programmed a computer at the age of 20 and started my career soon after as a lowly back office outsourced engineer that a lot of the geek community looks down upon. What I did grow up with was something related but still different - mathematics and physics.

Around the age of 15 my mother enrolled me for IIT coaching classes (a huge financial struggle and a social shock for me, but that’s another story) and I found teachers that ignited a love for these subjects. I was always technically inclined thanks to my father who was an engineer. After his early death everyone around me wanted me to be his ‘successor’ as an engineer and I happily (almost proudly then, what does a 9 year old know!) obliged. Whatever technical inclination I had was due to watching my father as a 6-9 year old (another interesting story) and it was enough to make me an above average math and science student.

In the IIT coaching classes, I learned in Physics a way to look at the world and, in mathematics, a way to express that view. Despite the ragging due to my social status (backward caste and poor, in a predominantly rich, upper caste, South Bombay clique), I was floating in the air those two years, making up problems to trick friends and solving equations for fun. I don’t think I’ve done maths just for the heck of it since. The love affair with physics and its mathematics did not last too long though, as I flunked my IIT examinations and had to rethink everything, including my view of myself; it wouldn’t be the last time either.

I went into what I now recognize as depression for over a year. As I recovered, I was ankle deep in Linux and FOSS and it was the beginning of the next chapter of my life, a life that has more extensive documentation on the intertubes. The physics and maths got beaten out of me in the process though. One thing however seems to have stuck, my obsession for beauty and rhythm whenever I encounter a mathematical equation. If a result looked large and wieldy, I would get very uncomfortable and keep working at it, refactoring it till it looked beautiful and reading it out sounded like I was reciting a poem. It was an obsession that my teachers loved and hated depending on how they were feeling on that day.

I rediscovered that love for symmetry and rhythm when I spent some time working with multiple precision math nearly a decade ago. I discovered it once again some years ago with just a few minutes of hacking away at a physics problem at reserved-bit where I came up with equations for a little maze that the kids at the makerspace wanted to build. The immense satisfaction of seeing the equation being easy on the eyes and almost musical to read is a feeling I cannot express in words. Then there is the beauty of discovering little facts by reading the equation (like location of an object at any point in the maze being independent of the acceleration due to gravity for the maze) that adds to the wonderful experience.

There is a parallel to such beauty in programming in the form of APIs or algorithms, but it doesn’t quite feel the same to me. I guess I enjoy programming quite a lot but no, I don’t love it like I did physics and maths. I don’t seem to have the mental space to go back to it though. I guess it’s a first love that I can only look back fondly at for now.

Cockpit 272

Posted by Cockpit Project on June 23, 2022 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 272:

Firewall: Edit custom services

The Firewall page is able to open custom ports by creating custom services. This release adds the ability to edit custom services.

edit-custom

Services: Pin services as favorites

The services page now allows users to pin any service to the top of the services list.

pin-2

To pin a service, navigate to its detail page and click “Pin unit” in the menu next to its name.

pin

Login: Dark mode

The login page now has a dark mode which changes with your system’s dark setting. Most desktops have a setting for this in the “appearance” area of their system settings, including GNOME, KDE, Windows, macOS, Android, and iOS/iPadOS.

Here’s Cockpit Client, which got dark mode a few releases ago, in the standard light mode:

Screenshot from 2022-06-22 18-42-47

And after switching to dark settings, it looks like this:

Screenshot from 2022-06-22 18-42-59

We’re still working on adding dark mode for Cockpit once you’ve logged in.

Unprivileged cockpit/ws container mode

The cockpit/ws container can now run in unprivileged mode.

It presents an unbranded variant of the login page that always asks for a host name. Connections are made with SSH.

This mode is suitable for deploying to, e.g., Kubernetes or similar environments, where you don’t have or want privileged containers. In this “bastion host mode”, you can have Cockpit for servers in your data center without opening an extra port for cockpit-ws.

Currently, username + password and “classic RSA” type SSH keys are supported. See the container documentation for details.
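
For reference, a minimal sketch of running it that way with podman (the image name and port are taken from the cockpit/ws container documentation; adjust them for your registry and environment):

podman run --rm -p 9090:9090 quay.io/cockpit/ws

Then point a browser at https://localhost:9090 and log in with the target machine’s address and SSH credentials.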

Try it out

Cockpit 272 is available now:

Pango 1.90

Posted by Matthias Clasen on June 22, 2022 08:19 PM

I’ve finally convinced myself that I need to make a Pango 2.0 release to clean up the API, and introduce some new APIs without breaking users that expect Pango to be very stable.

So, here it is… well not quite. What I am presenting today is not Pango 2.0 yet, but 1.90 – an unstable preview of the coming changes, to gather feedback and give some heads-up about what’s coming.

What’s changed?

Pango is now shipped as a single shared object, libpango-2.so, which contains the high-level cross-platform code as well as platform-specific fontmap implementations and the cairo support (if it is enabled). All of the APIs have been cleaned up and modernized.

PangoFontMap  has seen some significant changes. It is now possible to instantiate a PangoFontMap, and populate it manually with PangoFontFamily and PangoFontFace objects.

There are still platform-specific subclasses

  •  PangoFcFontMap
  • PangoCoreTextFontMap
  • PangoDirectWriteFontMap

which will use platform APIs to enumerate fonts and populate the fontmap.

What’s new?

PangoLineBreaker is the core of Pango’s line-breaking algorithm, broken out from PangoLayout. Having this available independently of PangoLayout will facilitate uses such as multi-column layout, text flow between frames and shaping paragraphs around images.

Here is an example that shows changing the column width mid-paragraph:

PangoLines is the ‘formatted output’ part of a PangoLayout, and can be used to collect the output of a PangoLineBreaker.

PangoHbFont is a font implementation that is a thin wrapper around HarfBuzz font and face objects. This is the way in which Pango handles fonts on all platforms now.

PangoUserFont is a callback-based font implementation that allows for entirely application-defined font handling, including glyph drawing. This is similar to cairo user fonts, from which this example was borrowed:

There are many smaller changes, such as better control over line height with line-height attributes, control over the trimming of leading, and guaranteed font ↔ description roundtrips with face-ids.

How can I try this?

The Pango code lives on the pango2 branch, and there is a corresponding pango2 branch of GTK, which contains a port of GTK to the new APIs.

The tarballs are here.

Summary

If you have an interest in text rendering, please try this out and tell us what you think. Your feedback will make Pango 2 better.

For more details about the changes in this release, see the NEWS, and have a look at the migration guide.

If you want to learn more about the history of Pango and the background for some of these changes, come to my Guadec talk in Guadalajara!

Intro to libvirt based virtualization on Linux

Posted by Adam Young on June 22, 2022 03:14 PM

The processes of development, installation, testing, and debugging of software all benefit from the use of a virtual machines. If you are working in a Linux based infrastructure, you have access to the virtual machine management on your system. There are a handful of related technologies that all work together to help you get your work done.

  • A hypervisor is a machine that runs virtual machines. There are several proprietary hypervisor distributions on the market, such as VMware’s ESXi and Microsoft’s Hyper-V.
  • KVM is the Linux Kernel module that allows you to run a virtual machine in a process space isolated from the rest of the system.
  • Qemu is an implementation of that virtual machine. It was originally an emulator (hence the name) and can still be run that way, but it is far more powerful and performant when run in conjunction with KVM.
  • Xen is an alternative approach that preceded KVM. It implemented the entire virtualization layer in kernel space; Linus did not like this approach and declined to merge it into the mainline Linux kernel.
  • libvirt is a client/server implementation to allow you to communicate with a Linux machine running an implementation of virtualization like KVM/Qemu or Xen. There are a handful of other implementations as well, but for our purposes, we will focus on the KVM/Qemu approach.
  • libvirtd is the server Daemon for libvirt. It is run via systemd on the current Fedora and Ubuntu releases.
  • virsh is a CLI application that allows you to send commands to the libvirt subsystem
  • virt-manager is a GUI program that lets you send libvirt commands via a more discoverable workflow.

There are other tools, but these are enough to get started.

In order to run a virtual machine, you need a hypervisor machine to host it. This might be your laptop, or it might be a remote server. For example, I am running Ubuntu 22 on a Dell Latitude laptop, and I can run a virtual machine on that directly. Here is the set of libvirt-related packages I have installed:

  • gir1.2-libvirt-glib-1.0/jammy
  • libvirt-clients/jammy-updates
  • libvirt-daemon-config-network/jammy-updates
  • libvirt-daemon-config-nwfilter/jammy-updates
  • libvirt-daemon-driver-qemu/jammy-updates
  • libvirt-daemon-system-systemd/jammy-updates
  • libvirt-daemon-system/jammy-updates
  • libvirt-daemon/jammy-updates
  • libvirt-glib-1.0-0/jammy
  • libvirt-glib-1.0-data/jammy
  • libvirt0/jammy-updates
  • python3-libvirt/jammy

I am fairly certain I only had to install libvirt-daemon-system and libvirt-daemon-driver-qemu and then enable them via systemd commands.
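
On Ubuntu that boils down to something like the following (a sketch; the package names are the ones listed above and the service unit is libvirtd on current releases):

sudo apt install libvirt-daemon-system libvirt-daemon-driver-qemu virt-manager
sudo systemctl enable --now libvirtd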

Once the daemon is up and listening, you can run the virt-manager gui to connect to it and perform basic operations. On Fedora and Ubuntu, this is provided by the package virt-manager and can be run from the command line as virt-manager.

<figure class="wp-block-image">virt-manager with two servers listed<figcaption>virt-manager with two servers listed</figcaption></figure>

In the above picture, you can see the localhost listed in black, but also another host from our lab listed in gray. It is grayed out because I have not yet attempted to connect to this server this session.

The localhost system has no virtual machines currently defined. Adding one is simple, but you do need to have the install media to do anything with it. I’ll show this on the remote system instead. The steps you use to build a VM remotely are the same as you use to build it locally.

To create a new connection, start by selecting the File menu and then Add Connection. Leave the Hypervisor as QEMU/KVM and select the checkbox to connect to the remote host over ssh. You probably need to connect to the remote machine as root. You can use either an FQDN or an IPv4 address to connect. IPv6 might work, but I have not tried it.

<figure class="wp-block-image">virt-manager add connection dialog</figure>

This is going to use the same key or password you would use to connect via ssh from the command line.
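
The same remote connection also works from the command line with virsh, which is handy for scripting (a sketch; the host name is a placeholder and the VM name matches the one used below):

virsh -c qemu+ssh://root@hypervisor.example.com/system list --all
virsh -c qemu+ssh://root@hypervisor.example.com/system start fedora-server-36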

Once you have a connection, the line is displayed darker with any VMs listed. I have one VM, named fedora-server-36.

<figure class="wp-block-image">virt-manager remote connection with one VM</figure>

When running a virtual machine, you want to use the same architecture as the underlying hypervisor. For the most part, this means X86_64. However, I work for a company that builds AARCH64 based server chips, and the machine listed above is one of ours.

If you hover the mouse over a running VM, right-click, and select “Open”, you will get a terminal console.

<figure class="wp-block-image">VM terminal in virt-manager</figure>

To see the details, you can select the View->Details menu item. Here you can see the Architecture is aarch64.

<figure class="wp-block-image">VM details</figure>

In the next article, I will go through network configuration and launching a virtual machine.

Conan-izing an OpenGL project.

Posted by Adam Young on June 22, 2022 02:25 PM

Now that I can build my app with Autotools, I want to make it work with conan. In my head, I have conan mapped to projects like cargo in rust and pip in Python. However, C++ has a far less homogenized toolchain, and I expect things are going to be more “how to make it work for you.” I started with Autotools to minimize that.

I did, however, install a few packages for development, and I am tempted to start by removing those packages and try to make conan do the work to fetch and install them.

 history | grep "apt install"
 2067  sudo apt install freeglut3-dev
 2095  sudo apt install  libboost-date-time-dev 
 2197  sudo apt install autoconf

So I removed those

sudo apt remove freeglut3-dev libboost-date-time-dev

Now, follow the same general pattern as the getting started guide:

conan search freeglut --remote=conancenter
Existing package recipes:

freeglut/3.2.1

Because I am using a python conanfile.py instead of a conanfile.txt, I have:

$ git diff conanfile.py
diff --git a/conanfile.py b/conanfile.py
index a59fca3..b545ca0 100644
--- a/conanfile.py
+++ b/conanfile.py
@@ -40,3 +40,8 @@ class OrbitsConan(ConanFile):
     def package(self):
         autotools = Autotools(self)
         autotools.install()
+
+    def build_requirements(self):
+        self.tool_requires("boost/1.79.0")
+        self.test_requires("freeglut/3.2.1")
+

but when I try to install:

mkdir build
cd build
conan install ..

After much output, I get the error:

Installing (downloading, building) binaries...
ERROR: There are invalid packages (packages that cannot exist for this configuration):
freeglut/3.2.1: Invalid ID: freeglut does not support gcc >= 10 and clang >= 11

So…I guess we build that, too….nope. OK, going to punt on that for the moment, and see if I can get the rest to build, including boost. Comment out the freeglut line in conanfile.py and try again.

ERROR: Missing prebuilt package for 'boost/1.79.0'
Use 'conan search boost/1.79.0 --table=table.html -r=remote' and open the table.html file to see available packages
Or try to build locally from sources with '--build=boost'

OK, add in the --build=boost flag as recommended. I am not certain whether this is building just the date library or all of boost, and building all of boost might take a while… but no, it seems like it worked. Let’s try configure and build.

conan build .. 
/home/ayoung/devel/git/admiyo/orbits/./src/orbits.h:2:10: fatal error: boost/date_time/gregorian/gregorian.hpp: No such file or directory
    2 | #include "boost/date_time/gregorian/gregorian.hpp"

OK, let’s see if that is in the cached files from conan:

$ find  ~/.conan/ -name gregorian.hpp | grep date_time
/home/ayoung/.conan/data/boost/1.79.0/_/_/package/dc8aedd23a0f0a773a5fcdcfe1ae3e89c4205978/include/boost/date_time/gregorian/gregorian.hpp
/home/ayoung/.conan/data/boost/1.79.0/_/_/source/source_subfolder/boost/date_time/gregorian/gregorian.hpp

So, yeah, it is there, but Autotools does not seem to be setting the include directory from the package. Looking back at the output from the previous command, I see that the g++ line only has these two -I flags.

-I. -I/home/ayoung/devel/git/admiyo/orbits/./src

I would expect that conan would implicitly add the include directories from its dependency packages to the build of a new package. However, that does not seem to be the case. I’ll look into it. But for now, I can work around it by adding the directories myself. That is the power of using a full programming language like python as opposed to a Domain Specific Language. Here’s my build step:

    def build(self):
        env_build = AutoToolsBuildEnvironment(self)
        autotools = Autotools(self)

        CXXFLAGS=""
        for PATH in  env_build.include_paths:
            incdir = " -I%s"%PATH
            CXXFLAGS = CXXFLAGS + incdir
        os.environ["CXXFLAGS"] = CXXFLAGS
        
        autotools.autoreconf()
        autotools.configure()
        autotools.make()

Note that I used the pdb mechanism pixelbeat wrote about way back. It let me inspect what was going on during the build process… a huge time saver.

This might be why conan punts on the include path: I had to use environment variables to pass the additional values on down through Autotools to the Makefile. I don’t know cmake well enough to say whether it would have the same issue.

With this change added, the build completes to the point that I have a running orbit executable.

If this were part of a larger workflow, I would post this build to an internal conan repository server to be shared amongst the team. We’ll be doing that with the actual binaries used for production work.

Poking back at the FreeGLUT build, I see a couple things. First, if we look at the conanfile.py that was used to build it, we see that the check is right there in the code. We also see that it points to an old issue that was long ago fixed. Note that the link is not versioned, so if it gets changed, this article will not point to the error anymore.

I tried downloading that recipe and running it locally. It turns out that it depends on a slew of system packages (all of the X stuff required for OpenGL, etc.), which I installed. At that point, the build fails with:

CMake Error: The source directory "/home/ayoung/devel/conan/freeglut" does not appear to contain CMakeLists.txt.

And, while I could debug this, I decided that I am not going to pursue building against the package posted to central. For now, using system packages for the build seems to make more sense, especially for a dependency like FreeGLUT. If a larger ecosystem of native packages emerges, I might revisit this decision.

Upgrade of Copr servers

Posted by Fedora Infrastructure Status on June 22, 2022 12:00 PM

We're updating the copr packages to new versions, which will bring new features and bugfixes.

This outage impacts the copr-frontend and the copr-backend.

The syslog-ng disk-buffer

Posted by Peter Czanik on June 22, 2022 10:48 AM

A three-part blog series:

The syslog-ng disk-buffer is one of the most often used syslog-ng options to ensure message delivery. However, it is not always necessary, and using the safest variant has serious performance impacts. If you use disk-buffer in your syslog-ng configuration, it is worth making sure that you run a recent syslog-ng version.

From this blog, you can learn when to use the disk-buffer option, the main differences between reliable and non-reliable disk-buffer, and why it is worth using the latest syslog-ng version.

Read more at https://www.syslog-ng.com/community/b/blog/posts/when-not-to-use-the-syslog-ng-disk-buffer

Last time, we had an overview of the syslog-ng disk-buffer. This time, we dig a bit deeper and take a quick look at how it works, and a recent major change that helped speed up the reliable disk-buffer considerably.

Read more at https://www.syslog-ng.com/community/b/blog/posts/how-does-the-syslog-ng-disk-buffer-work

Most people expect to be able to tell how many log messages are waiting in the disk-buffer from the size of the syslog-ng disk-buffer file. While that was mostly true for earlier syslog-ng releases, in recent releases (3.34+) the disk-buffer file can stay large even when it is empty. This is a side effect of recent syslog-ng performance tuning.

Read more at https://www.syslog-ng.com/community/b/blog/posts/why-is-my-syslog-ng-disk-buffer-file-so-huge-even-when-it-is-empty


Sourceware – GNU Toolchain Infrastructure roadmap

Posted by Mark J. Wielaard on June 22, 2022 08:54 AM

Making email/git based workflow more fun, secure and productive by automating contribution tracking and testing across different distros and architectures.

What is Sourceware?

Sourceware, https://sourceware.org/, is community-run infrastructure (mailing lists, git, bug trackers, wikis, etc.) hosted in the Red Hat Open Source Community Infrastructure Community Cage, together with servers from projects such as Ceph, CentOS, Fedora and Gnome.

Sourceware is mainly known for hosting the GNU Toolchain projects, like gcc at https://gcc.gnu.org/, glibc, binutils and gdb. It also hosts projects like annobin, bunsen, bzip2, cgen, cygwin at https://cygwin.org/, debugedit, dwz, elfutils at http://elfutils.org, gccrs, gnu-abi, insight, kawa, libffi, libabigail, mauve, newlib, systemtap and valgrind at https://valgrind.org/.

A longer list of Sourceware projects, those without their own domain name, including several dormant projects, can be found here: https://sourceware.org/mailman/listinfo.

Most of these projects use an email/git based workflow, with mailing lists for discussing patches, in preference to web-based “forges”.

Zero maintenance automation

Although email-based git workflows are great for real patch discussions, they do not always make tracking the state of patches easy.

Just like for our other services, such as bugzilla, mailing lists and git repos, we like to provide zero-maintenance infrastructure for tracking and automating patches and testing.

So we are trying to consolidate around a shared buildbot for (test) automation and patchwork for tracking the state of contributions, sharing experiences between the Sourceware projects, coordinating, and fully automating the infrastructure services.

A shared buildbot

We have a shared buildbot for Sourceware projects at https://builder.sourceware.org/. This includes compute resources (buildbot-workers) for various architectures thanks to some generous sponsors. We have native/VM workers for x86_64, ppc64le, s390x, ppc64, i386, arm64 and armhf for debian, fedora and centos (although not all combinations yet) and x86_64 container builders for fedora, debian and opensuse.

There are currently 95 builders on 15 workers, doing ~300 builds a day (more on week days, less on weekends). There are a couple of full testsuite builders (for gcc and binutils-gdb), but most builders are “quick” CI builders, which will send email whenever a regression is detected. This seems to catch and report a couple of issues a week across all projects.

Builder is its own project on Sourceware, with its own git repo, mailing list and an amazing community that can help you integrate new builders, add workers and containers, and get you access to systems to replicate any failures where the buildbot logs don’t give enough information.

And buildbot itself is automated, so whenever a change is made to add a new builder, or define a new container, the buildbot automatically reconfigures itself and the workers will start using the new container images starting with the next build.

The same mechanism can also be used to run tasks on specific commits or periodically, tasks which are now often done by a cron job or git hook: for example, updating documentation or websites, generating release tars, or updating bugzilla. The advantage over cron jobs is that the work can happen more immediately and/or only when really needed, based on the specific files touched by a commit. The advantage over git hooks is that the tasks run in the builder context, not in the context of the specific user that pushed a commit.

Picking your (CI) tests

Although the buildbot itself is zero maintenance, getting and acting on the results of course is not. We already divide the tests into quick CI tests and full test runs, and most tests upload all results to bunsendb. bunsen can help pick good CI tests by indicating which tests are flaky, or by comparing results across different setups.

A prototype testsuite log comparison bunsenweb widget is running at https://builder.sourceware.org/testruns/

Lots of things will be coming here, including taking advantage of the testrun cluster analysis that’s already being done, a per-testrun testcase search/browse engine, other search operators, testsuite summary (vs detail) grids, and who knows what else; ideas welcome!

What about pre-commit checks?

The builder CI checks what has been committed on the main branch of the projects. This makes sure that what is checked out is in a good state and that any pushed regressions are found early and often.

There is also support for git user try branches. When a user pushes to their try branch, the same builder CI checks are run, so a project developer knows their proposed patch(es) won’t break the build or introduce regressions.
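
From the developer’s side that is just a push to a specially named branch; a rough sketch (the users/<you>/try-* naming here is an assumption, so check the builder documentation for your project’s exact convention):

$ git checkout -b backend-fix
$ git commit -a -m "Proposed fix"
$ git push origin HEAD:users/$USER/try-backend-fix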

The binutils and gdb communities are currently trying this out. Once new builder resources from OSUOSL are installed we’ll roll this out to other Sourceware projects.

What about non-committers?

The above only helps developers that have commit access on Sourceware, but not others who send in patches. For that we have https://patchwork.sourceware.org/ plus the CI/CD trybot that DJ wrote: https://sourceware.org/glibc/wiki/CICDDesign. The glibc community is already using this. We would like to connect patchwork, buildbot and the trybot for other Sourceware projects.

The current trybot doesn’t do authentication, which might not be OK for all builders. So we want to either require checking for known GPG keys on the patch emails, or let a trusted developer set a flag in patchwork before the trybot triggers. Once we have public-inbox set up, we could also use b4 for DKIM attestation for known/trusted hackers.

Some projects have already experimented with public-inbox, but we don’t have an instance running on Sourceware itself yet. This would also resolve complaints about the not very usable mailman archives.

But I really like to have a webforge!

You are in luck. We already have a sourcehut mirror at https://sr.ht/~sourceware/. This allows anybody to fork any Sourceware project on sourcehut, prepare their patches and submit a merge request through email (without having to locally set up git send-email or smtp – the patch emails are generated server side).

Sourcehut is designed around email-based workflows, is fully Free Software, doesn’t use javascript, and is much faster and lighter on resources compared to (proprietary) alternatives.

The sourcehut mirror is currently read-only (but syncs automatically with any git updates made on Sourceware). When sourcehut supports project groups (one of the beta goals) we will test a self-hosted instance to see whether this is a good way to attract more contributors without losing the advantages of the email-based workflow. The various sr.ht components are very modular, so we can use only those parts we need.

Using Linux System Roles to implement Clevis and Tang for automated LUKS volume unlocking

Posted by Fedora Magazine on June 22, 2022 08:00 AM

One of the key aspects of system security is encrypting storage at rest. Without encrypted storage, any time a storage device leaves your presence it can be at risk. The most obvious scenario where this can happen is if a storage device (either just the storage device or the entire system, server, or laptop) is lost or stolen.

However, there are other scenarios that are a concern as well: perhaps you have a storage device fail and it is replaced under warranty; many times the vendor will ask you to return the original device. If the device was encrypted, it is much less of a concern to return it to the hardware vendor.

Another concern is that any time your storage device is out of sight, there is a risk that the data is copied or cloned off of the device without you even being aware. Again, if the device is encrypted, this is much less of a concern.

Fedora (and other Linux distributions) include the Linux Unified Key Setup (LUKS) functionality to support disk encryption. LUKS is easy to use, and is even integrated as an option in the Fedora Anaconda installer.

However there is one challenge that frequently prevents people from implementing LUKS on a large scale, especially for the root filesystem: every single time you reboot the host you generally have to manually access the console and type in the LUKS passphrase so the system can boot up.

If you are running Fedora on a single laptop, this might not be a problem; after all, you are probably sitting in front of your laptop any time you reboot it. However, if you have a large number of Fedora instances, this quickly becomes impractical to deal with.

<figure class="wp-block-image size-large"><figcaption>If you have hundreds of systems, it is impractical to manually type the LUKS passphrase on each system on every reboot</figcaption></figure>

You might be managing Fedora systems that are at remote locations, and you might not even have good or reliable ways to access a console on them. In this case, rebooting the hosts could result in them not coming up until you or someone else travels to their location to type in the LUKS passphrase.

This article will cover how to implement a solution to enable automated LUKS volume unlocking (and the process to implement these features will be done using automation as well!).

Overview of Clevis and Tang

Clevis and Tang are an innovative solution that can help with the challenge of having systems with encrypted storage boot up without manual user intervention on every boot. At a high level, Clevis, which is installed on the client systems, can enable LUKS volumes to be unlocked without user intervention as long as the client system has network access to a configurable number of Tang servers.

The basic premise is that the Tang server(s) are on an internal/private or otherwise secured network. If the storage devices are lost, stolen, or otherwise removed from the environment, they would no longer have network access to the Tang server(s), and thus would no longer unlock automatically at boot.

Tang is stateless and doesn’t require authentication or even TLS, which means it is very lightweight, easy to configure, and can run from a container. In this article, I’m only setting up a single Tang server; however, it is also possible to have multiple Tang servers in an environment, and to configure the number of Tang servers the Clevis clients must connect to in order to unlock the encrypted volume. For example, you could have three Tang servers, and require the Clevis clients to be able to connect to at least two of the three Tang servers.
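
Outside of the system roles, a manual binding with that two-of-three policy would look roughly like the sketch below, using the Shamir secret sharing ("sss") pin to combine the individual tang pins (the device path and Tang URLs are placeholders):

$ sudo clevis luks bind -d /dev/vda2 sss \
    '{"t": 2, "pins": {"tang": [{"url": "http://tang1.example.com"},
                                {"url": "http://tang2.example.com"},
                                {"url": "http://tang3.example.com"}]}}'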

For more information on how Tang and Clevis work, refer to the GitHub pages: Clevis and Tang, or for an overview of the inner workings of Tang and Clevis, refer to the Securing Automated Decryption New Cryptography and Techniques FOSDEM talk.

Overview of Linux System Roles

Linux System Roles is a set of Ansible Roles/Collections that can help automate the configuration and management of many aspects of Fedora, CentOS Stream, RHEL, and RHEL derivatives. Linux System Roles is packaged in Fedora as an RPM (linux-system-roles) and is also available on Ansible Galaxy (as both roles and as a collection). For more information on Linux System Roles, and to see a list of included roles, refer to the Linux System Roles project page.

Included in the list of Linux System Roles are the nbde_client, nbde_server, and firewall roles that will be used in this article. The nbde_client and nbde_server roles are focused on automating the implementation of Clevis and Tang, respectively. The “nbde” in the role names stands for network bound disk encryption, which is another term to refer to using Clevis and Tang for automated unlocking of LUKS encrypted volumes. The firewall role can automate the implementation of firewall settings, and will be used to open a port in the firewall on the Tang server.

Demo environment overview

In my environment, I have a Raspberry Pi, running Fedora 36 that I will install Linux System Roles on and use as my Ansible control node. In addition, I’ll use this same Raspberry Pi as my Tang server. This device is configured with the pi.example.com hostname.

In addition, I have four other systems in my environment: two Fedora 36 systems, and two CentOS Stream 9 systems, named fedora-server1.example.com, fedora-server2.example.com, c9s-server1.example.com, and c9s-server2.example.com. Each of these four systems has a LUKS encrypted root filesystem and currently the LUKS passphrase must be manually typed in each time the systems boot up.

I’ll use the nbde_server and firewall roles to install and configure Tang on my pi.example.com system, and use the nbde_client role to install and configure Clevis on my four other systems, enabling them to automatically unlock their encrypted root filesystem if they can connect to the pi.example.com Tang system.

Installing Linux System Roles and Ansible on the Raspberry Pi

I’ll start by installing the linux-system-roles package on the pi.example.com host, which will act as my Ansible control node. This will also install ansible-core and several other packages as dependencies. These packages do not need to be installed on the other four systems in my environment (which are referred to as managed nodes).

$ sudo dnf install linux-system-roles

SSH keys and sudo configuration need to be configured so that the control node host can connect to each of the managed nodes in the environment and escalate to root privileges.
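
For example, something along these lines for each managed node (the ansible user and the sudoers snippet are just one possible arrangement, not something the roles set up for you):

$ ssh-copy-id ansible@fedora-server1.example.com
# and on each managed node, allow that user to escalate, e.g. in /etc/sudoers.d/ansible:
# ansible ALL=(ALL) NOPASSWD: ALL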

Defining the Ansible inventory file

Still on the pi.example.com host, I’ll create an Ansible inventory file to group the five systems in my environment into two Ansible inventory groups. The nbde_servers group will contain a list of hosts that I would like to configure as Tang servers (which in this example is only the pi.example.com host), and the nbde_clients group will contain a list of hosts that I would like to configure as Clevis clients. I’ll name this inventory file inventory.yml and it contains the following content:

all:
  children:
    nbde_servers:
      hosts:
        pi.example.com:
    nbde_clients:
      hosts:
        fedora-server1.example.com:
        fedora-server2.example.com:
        c9s-server1.example.com:
        c9s-server2.example.com:

Creating Ansible Group variable files

Ansible variables are set to specify what configuration should be implemented by the Linux System Roles. Each role has a README.md file that contains important information on how to use each role, including a list of available role variables. The README.md files for the nbde_server, nbde_client, and firewall roles are available in the following locations, respectively:

  • /usr/share/doc/linux-system-roles/nbde_server/README.md
  • /usr/share/doc/linux-system-roles/nbde_client/README.md
  • /usr/share/doc/linux-system-roles/firewall/README.md

I’ll create a group_vars directory with the mkdir group_vars command. Within this directory, I’ll create a nbde_servers.yml file and nbde_clients.yml file, which will define, respectively, the variables that should be set for systems listed in the nbde_servers inventory group and the nbde_clients inventory group.

The nbde_servers.yml file contains the following content, which will instruct the firewall role to open TCP port 80, which is the default port used by Tang:

firewall:
  - port: ['80/tcp']
    state: enabled

The nbde_clients.yml file contains the following content:

nbde_client_bindings:
  - device: /dev/vda2
    encryption_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          62666465373138636165326639633...
    servers:
      - http://pi.example.com

Under nbde_client_bindings, device specifies the backing device of the encrypted root filesystem on the four managed nodes. The encryption_password specifies a current LUKS passphrase that is required to configure Clevis. In this example, I’ve used ansible-vault to encrypt the string rather than placing the LUKS passphrase in clear text. And finally, under servers, a list of Tang servers that Clevis should bind to is specified. In this example, the Clevis clients will be configured to bind to the pi.example.com Tang server.
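
The vaulted value shown above can be generated with ansible-vault; a sketch (the passphrase is a placeholder), whose output is what gets pasted into nbde_clients.yml:

$ ansible-vault encrypt_string 'my-current-luks-passphrase' --name 'encryption_password'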

Creating the playbook

I’ll create a simple Ansible playbook, named nbde.yml that will call the firewall and nbde_server roles for systems in the nbde_servers inventory group, and call the nbde_client role for systems in the nbde_clients group:

- name: Open firewall for Tang
  hosts: nbde_servers
  roles:
    - linux-system-roles.firewall

- name: Deploy NBDE Tang server
  hosts: nbde_servers
  roles:
    - linux-system-roles.nbde_server

- name: Deploy NBDE Clevis clients
  hosts: nbde_clients
  roles:
    - linux-system-roles.nbde_client

At this point, I have the following files and directories created:

  • inventory.yml
  • nbde.yml
  • group_vars/nbde_clients.yml
  • group_vars/nbde_servers.yml

Running the playbook

The nbde.yml playbook can be run with the following command:

$ ansible-playbook nbde.yml -i inventory.yml --ask-vault-pass -b

The -i flag specifies which inventory file should be used, the --ask-vault-pass flag will prompt for the Ansible Vault password to decrypt the encryption_password variable, and the -b flag specifies that Ansible should escalate to root privileges.

<figure class="wp-block-image size-full"><figcaption>play recap output from ansible-playbook command showing playbook successfully completed</figcaption></figure>

Validating the configuration

To validate the configuration, I rebooted each of my four managed nodes that were configured as Clevis clients of the Raspberry Pi Tang server. Each of the four managed nodes boots up and briefly pauses on the LUKS passphrase prompt:

<figure class="wp-block-image size-full"><figcaption>Systems boot up to LUKS passphrase prompt, and automatically continue booting after a brief pause</figcaption></figure>

However, after the brief delay, each of the four systems continued booting up without requiring me to enter the LUKS passphrase.
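
The binding can also be checked on a client without rebooting it; a quick sketch (the output shown is approximately what I would expect for the configuration above):

$ sudo clevis luks list -d /dev/vda2
1: tang '{"url":"http://pi.example.com"}'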

Conclusion

If you would like to secure your data at rest with LUKS encryption, but need a solution that enables systems to boot up without intervention, consider implementing Clevis and Tang. Linux System Roles can help you implement Clevis and Tang, as well as a number of other aspects of your system, in an automated manner.

How to troubleshoot deferred probe issues in Linux

Posted by Javier Martinez Canillas on June 21, 2022 09:14 PM

When working on the retro handheld console mentioned in a previous post, I had an issue where the LCD driver was not probed when booting a custom Linux kernel image I had built.

To understand the problem, first some knowledge is needed about how devices and drivers are registered in the Linux kernel, how these two sets are matched (bound) and what is a probe deferral.

If you are not familiar with these concepts, please read this post, where they are explained in detail.

The problem is that the st7735r driver (that’s needed for the SPI TFT LCD panel I was using) requires a GPIO based backlight device. To make it more clear, let’s look at the relevant bits in the adafruit-st7735r-overlay.dts that’s used as a Device Tree Blob (DTB) overlay to register the needed devices:

fragment@2 {
...
    af18_backlight: backlight {
        compatible = "gpio-backlight";
...
    };
};

fragment@3 {
...
    af18: adafruit18@0 {
        compatible = "jianda,jd-t18003-t01";
        backlight = <&af18_backlight>;
...
    };
};

We see that the adafruit18@0 node for the panel has a backlight property whose value is a phandle to the af18_backlight label used to refer to the backlight node.

The drivers/gpu/drm/tiny/st7735r.c probe callback then uses the information in the DTB to attempt getting a backlight device:

static int st7735r_probe(struct spi_device *spi)
{
...
	dbidev->backlight = devm_of_find_backlight(dev);
	if (IS_ERR(dbidev->backlight))
		return PTR_ERR(dbidev->backlight);
...
}

The devm_of_find_backlight() function returns a pointer to the backlight device if this could be found or a -EPROBE_DEFER error pointer if there is a backlight property defined in the DTB but this could not be found.

For example, this can happen if the driver that registers the expected backlight device has not been probed yet.

If the probe callback returns -EPROBE_DEFER, the kernel will then put the device that matched the driver but failed to probe on a deferred probe list. The list is iterated each time a new driver is probed (since it could be that the newly probed driver registered the missing devices that forced the probe deferral).

My problem then was that the needed driver (CONFIG_BACKLIGHT_GPIO, since the backlight node has compatible = "gpio-backlight") was not enabled in my kernel, causing the panel device to remain in the deferred probe list indefinitely due to a missing backlight device that was never registered.
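
The fix was simply to enable the missing option and rebuild; roughly like this from the kernel source tree (a sketch rather than the exact commands I used):

$ grep BACKLIGHT_GPIO .config
# CONFIG_BACKLIGHT_GPIO is not set
$ ./scripts/config --enable CONFIG_BACKLIGHT_CLASS_DEVICE --enable CONFIG_BACKLIGHT_GPIO
$ make olddefconfig
$ make -j$(nproc)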

This is quite a common issue on Device Tree based systems, and something that used to take me a lot of time to root-cause when I started working on Linux embedded platforms.

A few years ago I added a /sys/kernel/debug/devices_deferred debugfs entry that exposes the list of deferred devices to user-space, which makes it much easier to figure out which devices couldn’t be bound because their driver probe was deferred.

Later, Andrzej Hajda improved that and added support to the devices_deferred debugfs entry to print the reason for the deferral.

So checking which devices were deferred, and why, is now quite trivial, e.g.:

$ cat /sys/kernel/debug/devices_deferred 
spi0.0  spi: supplier backlight not ready

Much better than spending a lot of time looking at kernel logs and adding debug printouts to figure out what’s going on.

Happy hacking!

Fedora Hatch

Posted by Fedora Community Blog on June 21, 2022 05:43 PM

Fedora Hatches are still happening all around the world! Along with Nest With Fedora this year, Fedora is hosting local in-person hatch events. This is a great opportunity to connect with your fellow Fedorans and receive some amazing Fedora swag.

Thanks to our Ambassadors, Fedora is hosting eight Hatch events this year! You can expect a social event and possibly a workshop depending on which Hatch event you attend. 

These events are exciting occasions for our community to discuss new ideas, share what they have been working on, connect with each other, and revitalize for the upcoming year.

Our Nuremberg, Germany Hatch event took place on June 2nd-4th. Thank you to Neal Gompa for putting together this two-session Hatch event at the openSUSE conference.

A Hatch is coming to a city near you:

  • June 24th & 25th in Vigo, Spain: Organized by Fernando Fernandez Mancera (@ffmancera)
  • July 6th, 7th, and 8th in Pune, India: Organized by Akashdeep Dhar (@t0xic0der)
  • July 13th and 14th in Cork, Ireland: Organized by Aoife Moloney (@amoloney)
  • July 21st in Rochester, NY : Organized by Neal Gompa (@ngompa) and David McCheyne (@dmccheyne) 

The post Fedora Hatch appeared first on Fedora Community Blog.

Converting an OpenGL project to use Autotools

Posted by Adam Young on June 21, 2022 01:51 PM

In a science fiction game I play, we often find ourselves asking “which side of the Sun is Mars from the Earth right now?” Since this is for a game, the exact distance is not important, just “same side” or “90 degrees ahead” is a sufficient answer. But “right now” actually means a couple hundred years in the future.

Perfect problem to write a little code to solve. Here’s what it looks like.

<figure class="wp-block-video"><video controls="controls" src="http://adam.younglogic.com/wp-content/uploads/2022/06/orbits-running.webm"></video></figure>

My team is using Conan for package management of our native code. I want to turn this Orbits project into a Conan package.

We’re using Autotools for other things, so I’ll start by creating a new Autotools-based project.

conan new orbits/0.1 --template autotools_exe
File saved: Makefile.am
File saved: conanfile.py
File saved: configure.ac
File saved: src/Makefile.am
File saved: src/main.cpp
File saved: src/orbits.cpp
File saved: src/orbits.h
File saved: test_package/conanfile.py

Aaaaaand…I see it is opinionated. I am going to move my code into this structure. Specifically, most of the code will go into orbits.cpp, but I do have stuff that needs to go into main. Time to restructure.

To start, I move my existing orbit.cpp into src, overwriting the orbits.cpp file in there. I rename the main function to orbits, and add the argc and argv parameters so I can call it from main. That should be enough to get it to compile, assuming I can get the Autotools configuration correct. I start by running autoconf to find out:

ayoung@ayoung-Latitude-7420:~/devel/git/admiyo/orbits$ autoconf 
configure.ac:3: error: possibly undefined macro: AM_INIT_AUTOMAKE
...

And many other lines of errors, mostly about missing files. This gives me a way forward.

Many of the files are project administration, such as the AUTHORS file, which credits the people that work on the project, or the LICENSE file which, while boilerplate, makes it unambiguous what the terms are for redistributing the code and binaries. It is tempting to just use touch to create them, but you are better off putting in the effort to at least start the files. More info here.

Remember that the README.md file will be displayed on the git repo’s landing page for the major git projects, and it is worth treating that as your primary portal for communicating with any potential users and contributors.

Autotools assumes that you have just the minimum checked in to git, and that the rest of the files are generated. I ended up with this short script to run everything in order:

#!/bin/sh

autoreconf --install
./configure
automake
make

Since you are generating a slew of files, it is worth noting the command to remove everything that is not tracked by git.

First, make sure you have everything you are actively working on committed! If not, you will lose files.

Then run:

git clean -xdf .

The primary file I needed to work on was configure.ac. This is the file that generates the configure script, which is used to test whether the required dependencies are available on the system. Here’s my configure.ac:

AC_INIT([orbits], [0.1], [])
AM_INIT_AUTOMAKE([-Wall -Werror foreign])
AC_PROG_CXX
AM_PROG_AR
LT_INIT
AC_CHECK_LIB([glut], [glutInit])
AC_CHECK_LIB([GL],[glVertex2f])
PKG_CHECK_MODULES([GL], [gl])
AC_CONFIG_FILES([Makefile src/Makefile])
AC_OUTPUT

This was the hardest part to get right.

AC_CHECK_LIB([GL],[glVertex2f])
PKG_CHECK_MODULES([GL], [gl])

The top line checks that the libGL shared library is installed on the system and has the glVertex2f symbol defined in it. You would think that you just need the second line, which checks for the module. It turns out that configure is also responsible for generating the linker flags in the Makefile. So, while the PKG_CHECK_MODULES line will tell you that you have OpenGL installed, the AC_CHECK_LIB line is what makes the build actually link against it.
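
This is roughly what it looks like when it works; AC_CHECK_LIB prepends each found library to LIBS, which ends up substituted into the generated Makefiles (output paraphrased):

$ ./configure
...
checking for glutInit in -lglut... yes
checking for glVertex2f in -lGL... yes
...
$ grep '^LIBS' src/Makefile
LIBS = -lGL -lglut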

I’m sure there is more wisdom to be gained in working with Autotools, but this is enough to get me started.

Now that I have the Autotools part working (maybe even correctly?!) I need to work on making it build via conan.

But that is another story.

Episode 328 – The Security of Jobs or Job Security

Posted by Josh Bressers on June 20, 2022 12:01 AM

Josh and Kurt talk about the security of employees leaving jobs. Be it a voluntary departure or in the context of the current layoffs we see, what are the security implications of having to remove access for one or more people departing their job?

<audio class="wp-audio-shortcode" controls="controls" id="audio-2809-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_328_The_Security_of_Jobs_or_Job_Security.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_328_The_Security_of_Jobs_or_Job_Security.mp3</audio>

Show Notes

A World Of Different Switches

Posted by Jon Chiappetta on June 19, 2022 05:07 PM

So I recently ordered a Drop Alt keyboard with Halo True switches, as it offers the ability to hot-swap them out. The case and frame are much more stable and solid compared to the Ducky Mini, and it produces a much more clunky, analogue, typewriter kind of sound. I put in some Holy Panda switches and they have a greater-feeling tactile bump at the start of the key travel, along with a slightly lighter spring force for an easier press downwards. Both are great switches to type on overall!

Edit: For comparison, the Cherry Brown switch has tactile resistance during the key travel, but the bump at the beginning is not as noticeable or pronounced as what some of the other switches offer, and the spring force is much lighter.


Cobra Effect (thoughts)

Posted by Arnulfo Reyes on June 19, 2022 03:32 AM

“Show me the incentive and I will show you the outcome.” – Charlie Munger

Well-designed incentives have the power to create great results; badly designed incentives have the power to… well… create terrible results.

What the “cobra effect” is (and how it shows that sometimes the cure is worse than the disease) - BBC News Mundo

For further reading and research:

In short, if a performance measure becomes a stated goal, humans tend to optimize for it, regardless of the associated consequences. The measure often loses its value as a measure.

[Figure: Photo by S. H. Gue on Unsplash]

Once you internalize this framework, you see it all around you:

  • The Wells Fargo account-opening scandal.
  • Amazon’s “hire to fire” problem.

I plan to write more about the topic of incentives in the future. Stay tuned.

Thanks for making it this far!

If you like, you can follow me on social media, on Instagram at @arnulfo

Always, read, error, messages

Posted by Tomas Tomecek on June 18, 2022 11:00 PM

Carefully.

And think about them, Tomas! After you’re done, interpret them.

/noted

Okay, less poetry, more science.

This is a short story on how I spent a few hours trying to renew a Let’s Encrypt certificate for my home server. And kept on failing. Until I succeeded and learnt a lesson (which is in the $title).

Tor 0.4.7.8 is ready

Posted by Kushal Das on June 18, 2022 07:16 AM

Last night I built and pushed the Tor RPM(s) for 0.4.7.8. This is a security update, so please make sure that you upgrade your relays and bridges.
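
If you are already using that repository, the upgrade itself is the usual dnf transaction; a sketch (restart the service only if the package scripts did not already do it for you):

$ sudo dnf upgrade tor
$ sudo systemctl restart tor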

You can learn more about Tor's RPM repository at https://support.torproject.org/rpm/

If you have any queries, feel free to find us over #tor channel on OFTC.

Friday’s Fedora Facts: 2022-24

Posted by Fedora Community Blog on June 17, 2022 05:34 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

<figure class="wp-block-table">
ConferenceLocationDateCfP
PyCon SKBratislava, SK9–11 Sepcloses 30 Jun
SREcon22 EMEAAmsterdam, NL25–27 Octcloses 30 Jun
Write the Docs Praguevirtual11–13 Sepcloses 30 Jun
React IndiaGoa, IN & virtual22–24 Sepcloses 30 Jun
NodeConf EUKilkenny, IE & virtual3–5 Octcloses 6 Jul
Nest With Fedoravirtual4–6 Augcloses 8 Jul
CentOS Dojo at DevConf USBoston, MA, US17 Augcloses 22 Jul
</figure>

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

<figure class="wp-block-table">
Bug IDComponentStatus
1955416shimCLOSED
2079833cmakeNEW
</figure>

Meetings & events

Fedora Hatches

Hatches are local, in-person events to augment Nest With Fedora. Here are the upcoming Hatch events.

<figure class="wp-block-table">
DateLocation
24–25 JunVigo, ES
25 JunStockholm, SE
6–8 JulPune, IN
13–14 JulCork, IE
21 JulRochester, NY, US
28 JulMexico City, MX
11 AugBrno, CZ
</figure>

As a reminder, the Nest With Fedora CfP is open. We’ll see you online for Nest With Fedora 4–6 August.

Releases

<figure class="wp-block-table">
Releaseopen bugs
F354256
F362911
Rawhide7200
</figure>

Fedora Linux 37

Schedule

Below are some upcoming schedule dates. See the schedule website for the full schedule.

  • 2022-06-22 — Changes requiring infrastructure changes due
  • 2022-06-28 — System-Wide Changes, Changes requiring mass rebuild due
  • 2022-07-19 — Self-Contained Changes due
  • 2022-07-20 — Mass rebuild begins

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

<figure class="wp-block-table">
ProposalTypeStatus
Build all JDKs in Fedora against in-tree libraries and with static stdc++libSystem-WideApproved
Enhance Persian Font SupportSelf-ContainedApproved
Return Cloud Base to Edition StatusSystem-WideApproved
Fallback HostnameSystem-WideFESCo #2804
RPM Macros for Build FlagsSystem-WideFESCo #2805
LLVM 15Self-ContainedFESCo #2806
Supplement of Server distributables by a KVM VM disk imageSelf-ContainedFESCo #2807
Erlang 25Self-ContainedAnnounced
Add -fno-omit-frame-pointer to default compilation flagsSystem-WideAnnounced
Gettext Runtime SubpackageSystem-WideAnnounced
Golang 1.19System-WideAnnounced
</figure>

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

<figure class="wp-block-table">
ProposalTypeStatus
SPDX License Phase 1Self-ContainedApproved
</figure>

Fedora Linux 40

Changes

The table below lists proposed Changes.

<figure class="wp-block-table">
ProposalTypeStatus
Retire python3.7Self-ContainedAnnounced
</figure>

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-24 appeared first on Fedora Community Blog.

CPE Weekly Update – Week 24 2022

Posted by Fedora Community Blog on June 17, 2022 10:00 AM

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat (https://libera.chat/).

Week: 13th – 17th June 2022

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Link to planning board: https://zlopez.fedorapeople.org/I&R-2022-06-15.pdf
Link to docs: https://docs.fedoraproject.org/en-US/infra/

Update

Fedora Infra

  • Resultsdb almost moved to ocp4 in prod, just a few parts to finish (Thanks Leo!)
  • Ocp4 cluster now on our vpn, so all proxies can reach apps (thanks darknao!)
  • Wiki upgrade looking good in staging, prod to come (thanks ryan!)
  • Some more vm’s to f36
  • About 50% done moving apps to ocp4.
  • Image builder prod move blocked due to firewall issues

CentOS Infra including CentOS CI

Release Engineering

  • ELN composes were broken over the weekend because of ODCS backend / front end version mismatch
  • Nodejs-sig removed as the default assignee on a bunch of components in BZ
  • We have discovered a workflow in bodhi that locks updates in a weird state; more info at https://github.com/fedora-infra/bodhi/issues/4566

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • We imported all RPMs for modules (CentOS Stream 8) to the shared buildsystem
  • All sources imported to GitLab (CentOS Stream 8)

CentOS Duffy CI

Goal of this Initiative

Duffy is a system within CentOS CI infrastructure allowing tenants to provision and access machines (physical and/or virtual, of different architectures and configurations) for the purposes of CI testing. Development of Duffy is largely finished, we’re currently planning and testing deployment scenarios.

Updates

  • Test and polish duffy client … experience
  • Docs and CentOS Dojo talk prep

Package Automation (Packit Service)

Goal of this initiative

Automate RPM packaging of infra apps/packages

Updates

  • Almost finished, only mirrormanager2 remaining from our critical apps on Github
  • Couple of outliers (fasjson, flask-mod-auth) need downstream repos created
  • Datanommer.models manually packaged so datagrepper can be automated
  • Noggin now fully automated

Flask-oidc: oauth2client replacement

Goal of this initiative

Flask-oidc is a library used across the Fedora infrastructure and is the client for ipsilon for its authentication. flask-oidc uses oauth2client. This library is now deprecated and no longer maintained. This will need to be replaced with authlib.

Updates:

  • Working poc app which authenticates against noggin/ipa using authlib and OIDC.
  • Working on an upstream PR with the working code now.

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

Kindest regards,
CPE Team

The post CPE Weekly Update – Week 24 2022 appeared first on Fedora Community Blog.

Fedora Workstation’s State of Gaming – A Case Study of Far Cry 5 (2018)

Posted by Fedora Magazine on June 17, 2022 08:00 AM

First-person shooter video games are a great proving ground for strategies that make you finish on the top, reflexes that help you to shoot before getting shot and agility that adjusts you to whatever a situation throws at you. Add the open-ended nature brought in by large intricately-designed worlds into the mix, and it dials the player experience to eleven and, with that, it also becomes great evidence of what a platform is capable of. Needless to say, I have been a great fan of open-world first-person shooter games. And Ubisoft’s Far Cry series happens to be the one which remains closest to my heart. So I tried the (second) most recent release in the long-running series, Far Cry 5 which came out in 2018, on Fedora Workstation 35 to see how it performs.

Just like in my previous case study, the testing hardware has an AMD RDNA2-based GPU, and the video game was configured to the highest possible graphical preset to push the hardware to its limits. To ensure a fair comparison, I set up two environments – one with Windows 10 Pro 21H2 and one with Fedora Workstation 35 – both having up-to-date drivers and support software such as MSI Afterburner or MangoHUD for monitoring, Steam or Lutris for video game management, and OBS Studio for footage recording. In addition, the benchmarks were made representative of a common gameplay scenario and varied enough to address resolution scaling and HD textures.
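
For anyone wanting to reproduce the overlay part of this setup, MangoHUD can be enabled per game; a sketch of the usual approaches (the binary name in the last line is just a placeholder):

# Steam > (game) > Properties > Launch Options:
mangohud %command%

# or for anything started from a terminal:
$ mangohud ./some-game-binary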

<figure class="wp-block-image is-resized"><figcaption>Cover art for “Far Cry 5”, Ubisoft, fair use, via Wikimedia Commons</figcaption></figure>

Before we get into some actual performance testing and comparison results, I would like to go into detail about the video game that is at the centre of this case study. Far Cry 5 is a first-person action-adventure video game developed by Ubisoft Montreal and Ubisoft Toronto. The player takes the role of an unnamed junior deputy sheriff who is trapped in Hope County, a fictional region based in Montana and has to fight against a doomsday cult to take back the county from the grasp of its charismatic and powerful leader. The video game has been well received for the inclusion of branching storylines, role-playing elements and side quests, and is optimized enough to be a defining showcase of what the underlying hardware and platform are capable of.

Preliminary

Framerate

The first test that was performed had a direct implication on how smooth the playing experience would be across different platforms but on the same hardware configuration.

Without HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but opted out of the HD textures pack to warm up the platforms with a comparatively easier test. Following are the results.

<figure class="wp-block-image"></figure>
  1. On average, the video game had around a whopping 59.25% more framerate on Fedora Workstation 35 than on Windows 10 Pro 21H2.
  2. To ensure an overall consistent performance, both the minimum and maximum framerates were also noted to monitor dips and rises.
  3. The minimum framerates on Fedora Workstation 35 were ahead by a big 49.10% margin as compared to those on Windows 10 Pro 21H2.
  4. The maximum framerates on Fedora Workstation 35 were ahead by a big 62.52% margin as compared to those on Windows 10 Pro 21H2.
  5. The X11 display server had roughly 0.52% more minimum framerate as compared to Wayland, which can be taken as a margin of error.
  6. The Wayland display server had roughly 3.87% more maximum framerate as compared to X11, which can be taken as a margin of error.

With HD textures

On a default Far Cry 5 installation, I followed the configuration stated above, but this time I enabled the HD textures pack to stress the platforms with a comparatively harder test. Following are the results.

  1. On average, the video game had around a whopping 65.63% more framerate on Fedora Workstation 35 than on Windows 10 Pro 21H2.
  2. To ensure an overall consistent performance, both the minimum and maximum framerates were also noted to monitor dips and rises.
  3. The minimum framerates on Fedora Workstation 35 were ahead by a big 59.11% margin as compared to those on Windows 10 Pro 21H2.
  4. The maximum framerates on Fedora Workstation 35 were ahead by a big 64.21% margin as compared to those on Windows 10 Pro 21H2.
  5. The X11 display server had roughly 9.77% more minimum framerate as compared to Wayland, which is big enough to be considered.
  6. The Wayland display server had roughly 1.12% more maximum framerate as compared to X11, which can be taken as a margin of error.

Video memory usage

The second test that was performed had less to do with the playing experience and more with the efficiency of graphical resource usage. Following are the results.

Without HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but opted out of the HD textures pack to use comparatively lesser video memory across the platforms. Following are the results.

  1. On average, Fedora Workstation 35 uses around 31.94% lesser video memory as compared to Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 1.78% more video memory as compared to X11, which can be taken as a margin of error.
  3. The video game’s estimated usage is closer to the actual readings on Fedora Workstation 35 than it is on Windows 10 Pro 21H2.
  4. Adding this to the previous results speaks about how Fedora Workstation 35 performs better while using fewer resources.

With HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but this time I enabled the HD textures pack to stress the platforms by occupying more video memory. Following are the results.

  1. On average, Fedora Workstation 35 uses around 22.79% lesser video memory as compared to Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 2.73% more video memory as compared to X11, which can be taken as a margin of error.
  3. The video game’s estimated usage is closer to the actual readings on Fedora Workstation 35 than it is on Windows 10 Pro 21H2.
  4. Adding this to the previous results speaks about how Fedora Workstation 35 performs better while using fewer resources.

System memory usage

The third test that was performed had less to do with the playing experience and more with how other applications can fit in the available memory while the video game is running. Following are the results.

Without HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but opted out of the HD textures pack to warm up the platforms with a comparatively easier test. Following are the results.

  1. On average, Fedora Workstation 35 uses around 38.10% lesser system memory as compared to Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 4.17% more system memory as compared to X11, which can be taken as a margin of error.
  3. Adding this to the previous results speaks about how Fedora Workstation 35 performs better while using fewer resources.
  4. Lesser memory usage by the video game leaves out extra headroom for other applications to run simultaneously with no compromises.

With HD textures

On a default Far Cry 5 installation, I followed the configuration stated above, but this time I enabled the HD textures pack to stress the platforms with a comparatively harder test. Following are the results.

  1. On average, Fedora Workstation 35 uses around 33.58% lesser system memory as compared to Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 7.28% more system memory as compared to X11, which is big enough to be considered.
  3. Adding this to the previous results speaks about how Fedora Workstation 35 performs better while using fewer resources.
  4. Lesser memory usage by the video game leaves out extra headroom for other applications to run simultaneously with no compromises.

Advanced

Without HD textures

On a default Far Cry 5 installation, I followed the previously stated configuration without the HD textures pack and ran the tests with varied resolution multipliers. Following are the results.

Minimum framerates recorded

<figure class="wp-block-image"></figure>
  1. A great deal of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Resolution multipliers do not seem to have a great effect on the framerate on Windows 10 Pro 21H2 as much as on Fedora Workstation 35.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 2.0x resolution multiplier appear to be marginally better than those on Fedora Workstation 35.

Maximum framerates recorded

<figure class="wp-block-image"></figure>
  1. A small amount of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Changing the resolution multiplier starts noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls off greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.6x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.

Average framerates recorded

<figure class="wp-block-image"></figure>
  1. A minor amount of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Changing the resolution multiplier starts noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls off greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.9x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.

With HD textures

On a default Far Cry 5 installation, I followed the previously stated configuration with the HD textures pack and ran the tests with varied resolution multipliers. Following are the results.

Minimum framerates recorded

<figure class="wp-block-image"></figure>
  1. A great deal of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.5x resolution scale for Fedora Workstation 35.
  3. Resolution multipliers do not seem to have a great effect on the framerate on Windows 10 Pro 21H2 as much as on Fedora Workstation 35.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 2.0x resolution multiplier appear to be marginally better than those on Fedora Workstation 35.

Maximum framerates recorded

<figure class="wp-block-image"></figure>
  1. A great deal of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.0x resolution scale for Fedora Workstation 35.
  3. Changing the resolution multiplier starts noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls off greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.6x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.

Average framerates recorded

<figure class="wp-block-image"></figure>
  1. A minor amount of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Changing the resolution multiplier starts noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls off greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.9x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.

Inferences

If the test results and observations baffle you, please allow me to tell you that you are not the only one who feels like that. For a video game that was created to run on Windows, it is hard to imagine how it ends up performing way better on Fedora Workstation 35, all while using a much lesser amount of system resources at all times. Special attention has been given to noting down the highest highs and lowest lows of framerates to ensure that consistent performance is made available.

But wait a minute – how is it that Fedora Workstation 35 manages to make this possible? Well, while I do not have a clear idea of what exactly goes on behind the scenes, I do have a number of assumptions about what might be contributing to such brilliant visuals, great framerates and efficient resource usage. These can potentially act as starting points for us to understand the features of Fedora Workstation 35 that compatibility layers can make use of.

  1. Effective caching of graphical elements and texture assets in the video memory allows for keeping only those data in the memory which are either actively made use of or regularly referenced. The open-source AMD drivers help Fedora Workstation 35 make efficient use of the available frame buffer.
  2. Quick and frequent cycling of data elements from the video memory helps to bring down total occupancy per application at any point in time. The memory clocks and shader clocks are left at the application’s disposal by the open-source AMD drivers, and firmware bandwidth limits are all but absent.
  3. With AMD Smart Access Memory (SAM) enabled, the CPU is no longer restricted to using only 256MiB of the video memory at a time. A combination of leading-edge kernel and up-to-date drivers makes it available on Fedora Workstation 35 and capable of harnessing the technology to its limits.
  4. Extremely low system resource usage by supporting software and background services leaves out a huge majority of them to be used by the applications which need it the most. Fedora Workstation 35 is a lightweight distribution, which does not get in your way and puts the resources on what’s important.
  5. Faster loading of data elements between the physical storage devices and the system memory is greatly helped by modern high-capacity copy-on-write file systems like BTRFS, which happens to be the default file system for Fedora Workstation 35, and journaling file systems like EXT4.

Performance improvements like these only make me want to indulge more in testing and finding out what else Fedora Workstation is capable of. Do let me know what you think in the comments section below.

Fedora Linux 36 election results

Posted by Fedora Community Blog on June 17, 2022 01:20 AM

The Fedora Linux 36 election cycle has concluded. Here are the results for each election. Congratulations to the winning candidates, and thank you all candidates for running in this election!

Results

Council

One Council seat was open this election. A total of 263 ballots were cast, meaning a candidate could accumulate up to 526 votes.

<figure class="wp-block-table">
# votesCandidate
348Sumantro Mukherjee
259Eduard Lucena
</figure>

FESCo

Four FESCo seats were open this election. A total of 282 ballots were cast, meaning a candidate could accumulate up to 1099 votes.

<figure class="wp-block-table">
# votesCandidate
875Neal Gompa
825Stephen Gallagher
777Major Hayden
669Benjamin Beasley
624Tom Stellard
</figure>

Mindshare

One Mindshare seat was open this election. A total of 230 ballots were cast, meaning a candidate could accumulate up to 690 votes.

<figure class="wp-block-table">
# votesCandidate
395Madeline Peck
393David Duncan
337Sumantro Mukherjee
</figure>

David Duncan is elected to fill the remainder of the term being vacated by Till Maas.

Stats

The Fedora Linux 36 election cycle showed a near-record engagement for all three bodies.

<figure class="wp-block-image size-full"><figcaption>Candidate counts for elections by cycle</figcaption></figure>

While FESCo and Council elections had fewer candidates than in previous cycles, Council had the highest voter count since the F32 cycle. FESCo had more voters than in any election except F21 and Mindshare had its highest voter total on record.

<figure class="wp-block-image size-full"><figcaption>Election voters by cycle</figcaption></figure>

The post Fedora Linux 36 election results appeared first on Fedora Community Blog.

Fedora Wiki Updates

Posted by Fedora Infrastructure Status on June 16, 2022 09:00 PM

Updating the host for the Fedora Wiki to Fedora 36, which bumps the version of MediaWiki to 1.37.1