
Fedora People

Fedora Infra musings for the second week of July 2024

Posted by Kevin Fenzi on 2024-07-13 01:38:19 UTC

This week started out fun with some oral surgery on Monday. Luckily it all went very well. I went to sleep, woke up when they were done, and had a bunch of pain medication on board. I’m getting pretty sick of ‘soft’ foods, however.

Tuesday our log server hit 100% full. Turns out a toddler (a thing that takes actions on message bus messages) was crashing in a loop. When it does this it puts the message back on the queue and tries again. This works fine if it’s some kind of transitory error and it can process it after a short while, but doesn’t work very well at all if it needs intervention. So, 350GB of syslog later, we disabled it until we can fix it. We did have some discussion about this problem, and it seems like the way to go might be to cause the entire pod to crash on these things. That way it would alert us and require intervention instead of looping on something it can’t ever process. Also, right now the toddlers are just generic pods that run all the handlers, but we are looking at a poddlers setup where each handler has its own pod. That way a crash of one will not block all the rest. Interesting stuff.

Our new, updated mailman instance has been having memory pressure problems. We were finally able to track it down to the ‘full text search’ causing memory spikes in gunicorn workers. It’s rebuilding its indexes, but it hasn’t been able to finish doing so yet, and without those the search is really memory intensive. So, we are going to disable it for now until the indexing is all caught up. This seems to have really helped it out. Fingers crossed.

This week was a mass update/reboot cycle. We try to do these every few months to pick up non-security updates (security updates get applied daily). So, on Tuesday I did all the staging hosts and various other hosts I could do that wouldn’t cause any outages for users/maintainers. Wednesday was the big event, when all the rest were done. Ansible does make this pretty reasonable to do, but of course there are always things that don’t apply right, don’t reboot right, or break somehow. There was a fair share of those this time:

  • All of our old Lenovo eMAG aarch64 buildhw’s wouldn’t reboot (see below).
  • The koji hub’s fedora-messaging plugin wasn’t working. Turns out the hardening in the f40 httpd service file prevented it from working. I’ve overridden that for now, but we should fix it to not need that override.
  • Our staging openshift cluster had a node with a disk that died. This disk was used for storage, so the upgrade couldn’t continue. Finally got it to delete that and continue today.
  • flatpak builds were broken because f40 builders meant that we switched to createrepo_c 1.0, and thus, zstd by default. flatpak sig folks have fixes in the pipeline.
  • epel8 builds were broken by f40’s dnf no longer downloading filelists. rhel8 has requirements for /usr/libexec/platform-python that wouldn’t work anymore, so no builds. I’ve just added platform-python to the koji epel8 build groups for now. Perhaps there will be a larger fix in mock.

So, we have a number of old Lenovo eMAGs. They have been our primary aarch64 builders for ages (since about 2019 or so). They are now no longer under warranty, and we have slowly been replacing them with newer boxes. They now will no longer boot at all. It seems like it has to be a shim or grub problem, but I can’t really seem to get it working even with older versions, so I am now thinking it might be a firmware problem. There is actually a (slightly) newer firmware, if I can get a copy. Failing that, we may have to accelerate retiring them. They really served long and well, and are actually pretty nice hardware, but all things must end. Anyhow, looking for the new firmware to try that before giving up.

Been dealing with this bug in rawhide kernels lately. The last two days I have come in in the morning to find my laptop completely unresponsive. A few other times I have hit the kswapd storm, and backups have been taking many hours. I sure hope the fix lands soon. I might go back to f40 kernels if the upstream fix doesn’t land soon. I know I could just make my own kernel, but… I’ve done that enough in my life.

Till next week, be kind to others!

Infra and RelEng Update – Week 28 2024

Posted by Fedora Community Blog on 2024-07-12 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 08 – 12 July 2024

Infra & Releng infographic

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Linux (OL).

Updates

Community Design

CPE has a few members who work as part of the Community Design Team. This team works on anything related to design in the Fedora community.

Updates

  • Podman: Improving table pages 📃
  • CoreOS 5th anniversary designs
  • Swag designs for Flock 🐤
  • Creative Freedom Summit organizers met to start planning 2025’s summit 🤩

If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on matrix.

The post Infra and RelEng Update – Week 28 2024 appeared first on Fedora Community Blog.

Authselect in Fedora Linux 40: Migrating to the new “local” profile

Posted by Fedora Magazine on 2024-07-12 08:00:00 UTC

Of the many changes in Fedora Linux 40, there is one important under-the-hood Authselect profile change that is applied to new Fedora Linux 40 installations. The active Authselect profile will not be changed for upgrades. A new Authselect profile — local — will replace the minimal profile and it will be the default profile for fresh installations of Fedora Linux 40+.

To understand why this change is important and why it might make sense to manually migrate your current installation to use the new profile, this article will explain how the Linux authentication system works and compare and contrast the sssd* and local profiles.

* sssd was the default Authselect profile for installations prior to Fedora Linux 40.

An overview of Linux authentication

The first system component we should look into is PAM. Here’s an excerpt from Red Hat’s PAM documentation:

Pluggable Authentication Modules (PAMs) provide a centralized authentication mechanism which system applications can use to relay authentication to a centrally configured framework. PAM is pluggable because there is a PAM module for different types of authentication sources (such as Kerberos, SSSD, NIS, or the local file system). Different authentication sources can be prioritized.

So, PAM is a system of modules which you can combine to manage how authentication works. But why is that useful?

Several features can be enabled, either by adding new PAM modules, or by reconfiguring existing ones. Examples include:

  • auto mounting disks or paths (encrypted or not)
  • running a command on login
  • Active Directory integration
  • U2F (generic or Yubikey-specific)
  • fingerprint reader support
  • autologin

There are two drawbacks (depending on the experience level of the user/sysadmin) to managing the PAM configuration manually — the syntax used in configuration files is somewhat complex and if they are misconfigured, it might not be possible to log back into the system and undo the mistake.
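To give a feel for that syntax, here is a small, purely illustrative fragment in the /etc/pam.d/ style (not a real Fedora profile; module choice, order, and options differ between profiles):

auth      required      pam_env.so
auth      sufficient    pam_unix.so try_first_pass nullok
auth      required      pam_deny.so
account   required      pam_unix.so
session   optional      pam_keyinit.so revoke

Each line names a management group (auth, account, password, or session), a control flag, a module, and its options. One wrong line in the wrong place can lock you out, which is exactly the failure mode the tools described below try to prevent.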

With that in mind, we arrive at the legacy Authconfig tool. A description of the Authconfig tool from Red Hat is:

The Authconfig tool can help configure what kind of data store to use for user credentials, such as LDAP. On Red Hat Enterprise Linux, Authconfig has both GUI and command-line options to configure any user data stores. The Authconfig tool can configure the system to use specific services — SSSD, LDAP, NIS, or Winbind — for its user database, along with using different forms of authentication mechanisms.

Authconfig was a tool that operated above the PAM layer and was used to configure authentication for a system. However, it has since been replaced by Authselect.

Some insight into Authselect can be gained by reading the rationale for its introduction to Fedora Linux 27. Here is an excerpt from the Fedora Linux 27 change set:

Authselect is a tool to select system authentication and identity sources from a list of supported profiles.

It is designed to be a replacement for Authconfig but it takes a different approach to configuring the system. Instead of letting the administrator build the PAM stack with a tool (which may potentially end up with a broken configuration), it would ship several tested stacks (profiles) that solve a use-case and are well tested and supported. At the same time, some obsolete features of Authconfig would not be supported by Authselect.

With the addition of profiles, Authselect simplifies the task of configuring system authentication and it makes authentication failures due to human error when editing PAM configuration files less likely.

To recap:

  • PAM is the underlying system for authentication in Fedora Linux. It sources configuration files under /etc/pam.d on each login and, if one of its configuration files is corrupt or invalid, logins might fail (or succeed when they shouldn’t).
  • Authconfig is a tool that provides higher-level management of the PAM configuration. It is a script that sets up PAM as directed by the user/sysadmin.
  • Authselect is a replacement for Authconfig. Authselect adds the concept of “profiles” (with optional features that may be enabled or disabled as desired). Authselect should cover most use-cases and it makes manually editing PAM’s low-level configuration files unnecessary.

Authselect became the default in Fedora Linux 28 and later became mandatory in Fedora Linux 36.

Fedora Linux 40, the sssd profile, and the local profile

Now that the fundamentals of PAM, Authconfig, and Authselect have been covered, it is time to compare and contrast the Authselect profiles — the sssd profile versus the new local profile.

First, let’s see what SSSD is. Here is an excerpt from SSSD’s website:

SSSD is an acronym for System Security Services Daemon. It is the client component of centralized identity management solutions such as FreeIPA, 389 Directory Server, Microsoft Active Directory, OpenLDAP, and other directory servers. The client serves and caches the information stored in the remote directory server and provides identity, authentication, and authorization services to the host machine.

To simplify, you might think of SSSD as a system for accessing an organization’s server which holds the user data (passphrases, full names, etc.). Normally, you would only be able to log in and access the data of user accounts that are present on your local machine (normally stored in the /etc/passwd file). With SSSD, you can access remote user accounts that are stored on a remote server such as Active Directory or 389 Directory Server.
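One quick way to see this layering in practice is to ask the Name Service Switch, which both local files and SSSD plug into. The commands below are a minimal sketch; the exact sources listed will vary by system, and sss only appears when SSSD is configured:

$ grep ^passwd /etc/nsswitch.conf
$ getent passwd someuser

getent resolves the account through whatever sources nsswitch.conf lists, so it works the same whether someuser lives in /etc/passwd or in a remote directory served through SSSD.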

But why is the sssd profile relevant?

Up to Fedora Linux 39, the default Authselect profile was sssd:

$ authselect current
Profile ID: sssd
Enabled features:
- with-silent-lastlog
- with-mdns4
- with-fingerprint

However, as part of Changes/SSSDRemoveFilesProvider for Fedora Linux 40, the way the sssd profile handles local users has changed:

  • A new “local” profile to handle local users without SSSD will be introduced. This profile will be based on “minimal”, but it may gain more features.
  • The “minimal” profile will be removed and replaced by “local”.
  • The “local” profile will now be the default profile.
  • The “sssd” profile will lose the with-files-domain and with-files-access-provider options, and will gain the --with-tlog option.

So, the upgraded local profile replaces the previous minimal profile and local becomes the default Authselect profile for new installs (instead of sssd).

Below is what Authselect shows on a fresh Fedora Linux 40 installation (or from a Live CD session):

$ authselect current
Profile ID: local
Enabled features:
- with-silent-lastlog
- with-mdns4

Migration

The new local profile should be sufficient for those who only need local user accounts. If you’ve upgraded to Fedora Linux 40 from an earlier release, you will not be switched to the new local profile automatically. You might want to manually switch your system to use the local profile if you do not need the remote account capabilities that the sssd profile provides.

The list of features available for the local profile is:

$ authselect list-features local
with-ecryptfs
with-faillock
with-fingerprint
with-libvirt
with-mdns4
with-mdns6
with-mkhomedir
with-pam-gnome-keyring
with-pam-u2f
with-pam-u2f-2fa
with-pamaccess
with-pwhistory
with-silent-lastlog
with-systemd-homed
without-nullok
without-pam-u2f-nouserok

A description of the profile and each of its features can be seen with authselect show local.

To migrate between profiles, use the authselect select <profile> [feature] [feature...] command.

To migrate to the local profile with the with-silent-lastlog and with-mdns4 features enabled:

# authselect select local with-silent-lastlog with-mdns4 

If, for example, you want to add support for fingerprint readers, add that feature name to the list on the command line:

# authselect select local with-silent-lastlog with-mdns4 with-fingerprint
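If you later want to toggle a single feature without retyping the whole selection, authselect can also enable or disable features on the currently selected profile. This is a sketch; run authselect --help to confirm the subcommands available in your release:

# authselect enable-feature with-fingerprint
# authselect disable-feature with-fingerprint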

Conclusion

Fedora took the safe route of applying this change only to new Fedora Linux installations. Existing users who upgrade their systems do not need to worry that their PAM authentication stacks will be changed. However, end users may benefit from manually switching to the new local profile on their local-only PCs.

Failed to download metadata for repo ‘fedora-cisco-openh264’: GPG verification is enabled, but GPG signature is not available

Posted by Huiren Woo on 2024-07-11 18:21:51 UTC
I encountered this error recently while trying to install openh264 on Fedora 39. I did so using the following commands:

sudo dnf config-manager --set-enabled fedora-cisco-openh264
sudo dnf install gstreamer1-plugin-openh264 mozilla-openh264

This was the full error output I got when I…

Using the ATEN CV211 (all-in-one KVM adapter) with Fedora Linux

Posted by Andreas Haerter on 2024-05-16 17:32:00 UTC

The ATEN CV211 is an all-in-one KVM (Keyboard, Video, Mouse) adapter that turns your laptop into a KVM console, combining the functionality of a wormhole switch, capture box, external DVD-ROM, keyboard, mouse, and monitor, all in one compact and convenient unit. I really like the hardware in daily operations, especially when I have to take over new environments with “historically grown” cabling. It is nice to have the ability to get screen and keyboard control of a yet unknown server without hassle—all with a small USB adapter in your backpack:

ATEN CV211 KVM switch: photo of the hardware

If you connect the adapter, you’ll get a 10 MiB drive mounted with the following contents, containing a Microsoft Windows Client WinClient.exe (basically a Runtime Environment and wrapper) and the real application JavaClient.jar:

$ ll
total 9,1M
drwxr-xr-x. 2 user user  16K  1. Jan 1970  .
drwxr-x---+ 3 root root   60 30. Apr 19:08 ..
-rw-r--r--. 1 user user 3,7M 30. Dez 2019  JavaClient.jar
-rw-r--r--. 1 user user 2,0M 30. Dez 2019  Vplayer.jar
-rwxr-xr-x. 1 user user 3,5M 30. Dez 2019  WinClient.exe

The “login failed” problem

The JavaClient.jar KVM console is mostly the same as ATEN uses for all their IP KVM stuff. They just bind the service to some high port on localhost and use the hardcoded credentials -u administrator -p password to connect (which is obvious in several places):

ATEN CV211 KVM switch: credentials

Sadly, the Java application is not able to run out-of-the-box on a Fedora 40 Linux with OpenJDK / Java SE. The application will start but sometimes does not even list the device. And if there is a device to connect to, the login will fail:

ATEN CV211 KVM switch: login failed with OpenJDK

The JavaClient.jar will not be able to connect with any supported OpenJDK or Azul Zulu Java RE:

# incompatible Java version :-(
$ java -version
openjdk version "17.0.9" 2023-10-17

Solution: Oracle JDK 7

For anybody having the same problem, the following should help:

  1. Use a copy of the Oracle JDK 7 (the patch level does not matter) and the application will work without flaws.1
  2. Make sure the current working directory is the USB mount point so the .jar files are in ./.

For example, if you just extract jdk-7u80-linux-x64.tar.gz to /tmp, you can use the application as follows:

tar -xvf jdk-7u80-linux-x64.tar.gz -C /tmp
cd /run/media/user/disk # or wherever the ATEN CV211 storage was mounted
sudo /tmp/jdk1.7.0_80/bin/java -jar ./JavaClient.jar
ATEN CV211 KVM switch: screenshot of the working application

You can download the Oracle JDK 7 from https://www.oracle.com/de/java/technologies/javase/javase7-archive-downloads.html, but keep in mind to check the license conditions, especially if you are operating in a commercial environment.

If the problem persists…

Sometimes, the “login failed” error occurs even when following the procedure described above (i.e., when using Oracle Java and the current working directory is the mount point). I have not yet been able to determine the exact cause of these (rare) cases. However, this behavior has never occurred during operation but only during the first use. In such cases, a reboot of the hardware by unplugging and reconnecting it to the USB port always helped.


  1. Do not use this old, unpatched Java RE for anything else because of known security vulnerabilities. ↩︎

server updates/reboots

Posted by Fedora Infrastructure Status on 2024-07-10 21:00:00 UTC

We will be applying various updates and rebooting servers. During the outage window various services may be down for short periods of time.

Additionally, we will be upgrading builders to Fedora 40. This will mean that koji buildroot repodata will change to being zstd based (per fedora 40 createrepo_c defaults …

Forcibly Set Array Size in Vala

Posted by Michael Catanzaro on 2024-07-10 20:33:20 UTC

Vala programmers probably already know that a Vala array is equivalent to a C array plus an integer size. (Unfortunately, the size is gint rather than size_t, which seems likely to be a source of serious bugs.) When you create an array in Vala, the size is set for you automatically by Vala. But what happens when receiving an array that’s created by a library?

Here is an example. I recently wanted to use the GTK function gdk_texture_new_for_surface(), which converts from a cairo_surface_t to a GdkTexture, but unfortunately this function is internal to GTK (presumably to avoid exposing more cairo stuff in GTK’s public API). I decided to copy it into my application, which is written in Vala. I could have put it in a separate C source file, but it’s nicer to avoid a new file and rewrite it in Vala. Alas, I hit an array size roadblock along the way.

Now, gdk_texture_new_for_surface() calls cairo_image_surface_get_data() to return an array of data to be used for creating a GBytes object to pass to gdk_memory_texture_new(). The size of the array is cairo_image_surface_get_height (surface) * cairo_image_surface_get_stride (surface). This is GTK’s code to create the GBytes object:

bytes = g_bytes_new_with_free_func (cairo_image_surface_get_data (surface),
                                    cairo_image_surface_get_height (surface)
                                    * cairo_image_surface_get_stride (surface),
                                    (GDestroyNotify) cairo_surface_destroy,
                                    cairo_surface_reference (surface));

The C function declaration of g_bytes_new_with_free_func() looks like this:

GBytes*
g_bytes_new_with_free_func (
  gconstpointer data,
  gsize size,
  GDestroyNotify free_func,
  gpointer user_data
)

Notice it takes both the array data and its size size. But because Vala arrays already know their size, Vala bindings do not contain a separate parameter for the size of the array. The corresponding Vala API looks like this:

public Bytes.with_free_func (owned uint8[]? data, DestroyNotify? free_func, void* user_data)

Notice there is no way to pass the size of the data array, because the array is expected to know its own size.

I actually couldn’t figure out how to pass a DestroyNotify in Vala. There’s probably some way to do it (please let me know in the comments!), but I don’t know how. Anyway, I compromised by creating a GBytes that copies its data instead, using this simpler constructor:

public Bytes (uint8[]? data)

My code looked something like this:

unowned uchar[] data = surface.get_data ();
return new Gdk.MemoryTexture (surface.get_width (),
                              surface.get_height (),
                              BYTE_ORDER == ByteOrder.LITTLE_ENDIAN ? Gdk.MemoryFormat.B8G8R8A8_PREMULTIPLIED : Gdk.MemoryFormat.A8R8G8B8_PREMULTIPLIED,
                              new Bytes (data),
                              surface.get_stride ());

But this code crashes because the size of data is unset. Without any clear way to pass the size of the array to the Bytes constructor, I was initially stumped.

My first solution was to create my own array on the stack, use Posix.memcpy to copy the data to my new array, then pass the new array to the Bytes constructor. And that actually worked! But now the data is being copied twice, when the C version of the code doesn’t need to copy the data at all. I knew I had to copy the data at least once (because I didn’t know how else to manually call cairo_surface_destroy() and cairo_surface_ref() in Vala, at least not without resorting to custom language bindings), but copying it twice seemed slightly egregious.

Eventually I discovered there is an easier way:

data.length = surface.get_height () * surface.get_stride ();

It’s that simple. Vala arrays have a length property, and you can assign to it. In this case, the bindings didn’t know how to set the length, so it defaulted to the maximum possible value, leading to the crash. After setting the length manually, the code worked properly. Here it is.

Your project is political, people’s identities aren’t

Posted by Ben Cotton on 2024-07-10 12:00:00 UTC

Every so often, I run across a project that claims to be all about the code. “No politics!” they say. Whether they mean it or not, the message is often received as “this isn’t a space for you”. Any identity that is not the same as the project’s leadership (typically: cisgender white man) is treated as political. This is ridiculous. A person’s political views may form part of their identity, but their identity is not political.

The SerenityOS project found itself in the middle of this conversation recently after a pull request to make documentation more inclusive was closed by the project for violating the “no controversial topics” rule. (The changes were later accepted in a subsequent pull request.) But as the author pointed out: “The change I proposed is specifically as to not alienate people who aren’t men”.

Developing free and open source software is an inherently political act. People voluntarily coming together to cooperatively produce something for the public good? That’s political as hell!

Setting aside the impossibility of being apolitical, the SerenityOS contributing file has this policy:

This is a purely technical project. As such, it is not an appropriate arena to advertise your personal politics or religious beliefs. Any changes that appear ideologically motivated will be rejected.

Contributing to SerenityOS

No project that involves people is “purely technical.” And “ideologically motivated” is not a synonym for “bad”. In fact, if the change is technically sound, then prohibiting it on the basis of being ideologically motivated seems…ideologically motivated. “Let’s change references from ‘he’ to ‘they’ in the documentation” is no more ideologically motivated than “let’s use ‘he’ throughout the documentation” (even if the latter wasn’t a conscious choice).

I’m not writing this post to pick on the SerenityOS project or any of its contributors, but it’s a timely reminder that not only do our choices have consequences, but our defaults do, too. If your project is going to say “we only want contributors who are like us”, that’s certainly a choice you can make. But if that’s not the choice you want, make sure you’re not saying it accidentally.

Sidebar: a subtle example?

This felt like a topic I had surely written about here before, so I searched Google. The only post that came up when I searched for “politics” was “Should you prohibit pseudonyms?” When I searched for “political”, the top result was my post about offering a donation to Black Girls Code for every sale in March 2023. Neither of those mention any variation of the word “politics”, but the former talks about transgender people and the latter talks about Black people.

This post’s featured photo by Maayan Nemanov on Unsplash.

The post Your project is political, people’s identities aren’t appeared first on Duck Alignment Academy.

System insights with command-line tools: dmidecode and lspci

Posted by Fedora Magazine on 2024-07-10 08:00:00 UTC

In our ongoing series on Fedora Linux system insights, we are looking into essential command-line utilities that provide information about the system’s hardware and status. Following our previous discussion on lscpu and lsusb, we now turn our attention to dmidecode and lspci.

dmidecode – Decoding your system’s DMI table

dmidecode is a command-line utility for retrieving detailed information about the system’s hardware. It reads the DMI (Desktop Management Interface) table, which contains data provided by the system’s firmware. This data includes details about the system’s BIOS, processor, memory, and other hardware components. Using dmidecode, you can gain insight into the hardware configuration without needing to be on-site or open the system case.

Basic usage

To start with, let’s execute the basic dmidecode command to get an overview of the system’s DMI table:

$ sudo dmidecode

This command outputs a comprehensive list of DMI table entries, which can be overwhelming.

To narrow down the output to specific information, you can use various options, especially by specifying a type with -t number:

$ sudo dmidecode -t number

where number is an integer. The following is from man dmidecode:

[... output omitted for readability ...]
The SMBIOS specification defines the following DMI types:
Type Information
────────────────────────────────────────────
0 BIOS
1 System
2 Baseboard
3 Chassis
4 Processor
5 Memory Controller
6 Memory Module
7 Cache
8 Port Connector
9 System Slots
10 On Board Devices
11 OEM Strings
12 System Configuration Options
13 BIOS Language
14 Group Associations
15 System Event Log
16 Physical Memory Array
17 Memory Device
18 32-bit Memory Error
19 Memory Array Mapped Address
20 Memory Device Mapped Address
21 Built-in Pointing Device
22 Portable Battery
23 System Reset
24 Hardware Security
25 System Power Controls
26 Voltage Probe
27 Cooling Device
28 Temperature Probe
29 Electrical Current Probe
30 Out-of-band Remote Access
31 Boot Integrity Services
32 System Boot
33 64-bit Memory Error
34 Management Device
35 Management Device Component
36 Management Device Threshold Data
37 Memory Channel
38 IPMI Device
39 Power Supply
40 Additional Information
41 Onboard Devices Extended Information
42 Management Controller Host Interface

Additionally, type 126 is used for disabled entries and type 127 is an end-of-table marker. Types 128 to 255 are for OEM-specific data. dmidecode will display these entries by default, but it can only decode them when the vendors have contributed documentation or code for them.
[... further output omitted for readability ...]
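In addition to the numeric types, -t also accepts a handful of keyword groups (for example bios, system, baseboard, processor, memory); check man dmidecode on your system for the exact list it supports. For instance:

$ sudo dmidecode -t memory
$ sudo dmidecode -t processor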

Example 1: Retrieving BIOS information

To fetch details about the BIOS, use the -t option followed by the type number for BIOS information (type 0 ):

$ sudo dmidecode -t 0

This command outputs information such as the BIOS version, release date, and vendor. Example output (here using a ThinkPad T480S):

$ sudo dmidecode -t 0
# dmidecode 3.6
Getting SMBIOS data from sysfs.
SMBIOS 3.0.0 present.

Handle 0x000B, DMI type 0, 24 bytes
BIOS Information
Vendor: LENOVO
Version: N22ET80W (1.57 )
Release Date: 02/27/2024
Address: 0xE0000
Runtime Size: 128 kB
ROM Size: 16 MB
Characteristics:
PCI is supported
PNP is supported
BIOS is upgradeable
BIOS shadowing is allowed
Boot from CD is supported
Selectable boot is supported
EDD is supported
3.5"/720 kB floppy services are supported (int 13h)
Print screen service is supported (int 5h)
8042 keyboard services are supported (int 9h)
Serial services are supported (int 14h)
Printer services are supported (int 17h)
CGA/mono video services are supported (int 10h)
ACPI is supported
USB legacy is supported
BIOS boot specification is supported
Targeted content distribution is supported
UEFI is supported
BIOS Revision: 1.57
Firmware Revision: 1.23

Example 2: Extracting baseboard (mainboard) and memory information

For specific details about memory, you can query the baseboard (type 2) and memory device (type 17) entries. This will provide details about the system’s memory modules, including size, speed, and manufacturer. This information is particularly useful when upgrading or troubleshooting system memory, or if you need to buy additional, compatible RAM for a server you did not provision yourself.

A real-world example from a small lab server with four memory sticks:

$ sudo dmidecode -t 2,17
[... output omitted for readability ...]
Manufacturer: Supermicro
Product Name: X11SPL-F
Version: 1.02

[... output omitted for readability ...]
Handle 0x0029, DMI type 17, 84 bytes
Memory Device
Array Handle: 0x0025
Error Information Handle: Not Provided
Total Width: 72 bits
Data Width: 64 bits
Size: 64 GB
Form Factor: DIMM
Set: None
Locator: DIMMB1
Bank Locator: P0_Node0_Channel1_Dimm0
Type: DDR4
Type Detail: Synchronous Registered (Buffered)
Speed: 2933 MT/s
Manufacturer: Samsung
Serial Number: 167D51E1
Asset Tag: DIMMB1_AssetTag (date:22/38)
Part Number: M393A8G40MB2-CVF
Rank: 2
Configured Memory Speed: 2400 MT/s
Minimum Voltage: 1.2 V
Maximum Voltage: 1.2 V
Configured Voltage: 1.2 V
Memory Technology: DRAM
Memory Operating Mode Capability: Volatile memory
Firmware Version: 0000
Module Manufacturer ID: Bank 1, Hex 0xCE
Module Product ID: Unknown
Memory Subsystem Controller Manufacturer ID: Unknown
Memory Subsystem Controller Product ID: Unknown
Non-Volatile Size: None
Volatile Size: 64 GB
Cache Size: None
Logical Size: None


$ sudo dmidecode -t 17 | grep -E "(Manufacturer|Part Number):"
Manufacturer: Samsung
Part Number: M393A8G40MB2-CVF
Manufacturer: Samsung
Part Number: M393A8G40MB2-CVF
Manufacturer: Samsung
Part Number: M393A8G40MB2-CVF
Manufacturer: Samsung
Part Number: M393A8G40MB2-CVF

By using this simple command, it is easy to determine what hardware is in use. This information is particularly useful when there is a need for upgrading or replacing hardware.

lspci: Listing PCI devices

The lspci command is used to list all PCI devices in the system. PCI (Peripheral Component Interconnect) devices include network cards, graphics cards, USB controllers, and more. This command provides a snapshot of the devices connected to the system’s PCI bus, offering a detailed view of their configuration and status.

lspci does not need elevated privileges; a regular user can retrieve useful information with it.

Basic usage

A simple execution of the lspci command lists all PCI devices:

$ lspci

For more detailed information about a specific device, you can use the -v (verbose) option:

$ lspci -v

Example 1: Finding Graphics Card Information

To find detailed information about the system’s graphics card, you can filter the lspci output using grep. Example output (here using a ThinkPad T480S):

$ lspci | grep -i vga
00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 620 (rev 07)
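From there you can drill into just that device with the -s option, which selects a device by the bus address shown in the first column (00:02.0 in this example; yours may differ):

$ lspci -v -s 00:02.0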

Example 2: Check which Kernel driver is used by your hardware

To see which kernel driver is being used by a specific device, you can use the -k option. This lists the kernel driver in use for each PCI device, which can be useful for troubleshooting driver-related issues, especially by being able to do a web search for problems using the driver’s name and your hardware model.

Example output (here using a ThinkPad T480S):

$ lspci -k

[... output omitted for readability ...]
00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 620 (rev 07)
Subsystem: Lenovo Device 2258
Kernel driver in use: i915
Kernel modules: i915

[... output omitted for readability ...]
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (4) I219-V (rev 21)
Subsystem: Lenovo Device 2258
Kernel driver in use: e1000e
Kernel modules: e1000e

As you can see, the graphics card is using the Intel i915 DRM driver and the network card is using e1000e.
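If you want a one-line description of a driver module that lspci -k reported, modinfo can provide it (using the module names from the output above as examples):

$ modinfo -d i915
$ modinfo -d e1000e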

Conclusion

The dmidecode and lspci commands are powerful tools for extracting detailed hardware information from a Linux system. Even though they are simple, both commands offer insights into the system’s configuration and status. Whether you’re troubleshooting, optimizing, or simply curious, these tools provide valuable data that can help you better understand and manage your Linux environment. See you next time when we will have a look at more useful listing and information command line tools and how to use them.

some websites unavailable

Posted by Fedora Infrastructure Status on 2024-07-09 16:30:00 UTC

Some fedoraproject.org websites are showing unavailable. We are investigating the issue and hope to restore service soon.

UPDATE: sites should all be back online, sorry for the outage.

Contribute at the FreeIPA HSM Test Week

Posted by Fedora Magazine on 2024-07-08 08:00:00 UTC

The IDM/IPA team is working on testing the new HSM support and functionality. As a result, the Identity Management and QA teams have organized test days from Tuesday, July 09, 2024, to Thursday, July 11, 2024. The wiki page in this article contains links to the test images you’ll need to participate. Please continue reading for details.

How does a test week work?

A test week is an event where anyone can help ensure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the test week has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test week web application. If you’re available on or around the days of the event, please do some testing and report your results. We have a document which provides all the necessary steps.

Happy testing, and we hope to see you on one of the test days.

RIT Malayalam fonts: supporting a range of OpenType shapers

Posted by Rajeesh KV on 2024-07-06 11:13:23 UTC

The open fonts for Malayalam developed by Rachana Institute of Technology have used our independently developed advanced shaping rules since 2020. A conscious decision was made to support only the revised OpenType specification for Indic scripts (the script tag mlm2, which fixed issues with the v1 specification such as halant shifting). Our shaping rules provide precise, exact, and definite shaping for Malayalam Unicode fonts on all major software platforms.

And yet, there are many users who still use either old or buggy software/platforms. Hussain and Bhattathiri have expressed angst and displeasure at seeing their beautifully and meticulously designed fonts not shaped correctly in some typeset works and prints (for instance, Ezhuthu is used by the Mathrubhumi newspaper, showing detached ു/ൂ signs). I have received many requests over the years to add support for those obsolete (or sometimes proprietary) platforms, but have always refused.

Fig. 1: Ezhuthu font shaped with detached ൃ/ ു-signs. They should conjoin with base character. Source: Mathrubhumi.

A few weeks ago, CVR and I were trying to generate Malayalam epub content to read on a Kobo ebook reader (which supports loading the user’s own fonts, unlike Kindle). We found that Kobo’s shaping software (quite possibly an old version of Pango) does not support the v2 OpenType specification. That did irk me, and I knew it was going to be a rabbit hole. A little bit of reverse engineering and a day later, we were happy to read Malayalam properly shaped on the Kobo, by adding rudimentary support for the v1 spec.

Fig. 2: RIT Rachana shaped perfectly with Kobo ebook reader (ignore the book title).

Out of curiosity, I checked whether those small additions work with Windows XP, but they did not (hardly surprising). But now that the itch had been scratched, a bunch of shaping rules were added to support XP-era applications as well (oh, well).

Fig. 3: RIT Rachana shaped perfectly in Windows XP.

A few days later, a user also reported a (known) shaping issue with Adobe InDesign. Though I was inclined to close it as NOTABUG and point them to HarfBuzz instead, the user was willing to help test a few attempts I promised to make. Adobe 2020/2021 (and earlier) products use the Lipika shaper, but recent versions are using HarfBuzz natively. Lipika seems to support the v2 OpenType specification, yet doesn’t work well with our existing shaping rules. Quite some reverse engineering and half a dozen attempts later, I have succeeded in writing shaping rules that support Lipika along with other shapers.

Fig. 4: RIT Rachana shaped perfectly with InDesign 2021 (note: the characters outside the margins are a known issue only with InDesign, and it is fixed with a workaround).

All published (and in-progress) RIT Malayalam fonts are updated with this new set of shaping rules, which means all of them will be shaped exactly, precisely and correctly (barring the well-known limitations of the v1 specification and bugs in legacy shapers) all the way from Windows XP (2002) to HarfBuzz 8.0 (present day) and all applications in between.

Supported shaping engines

With this extra engineering work, RIT fonts are now tested to work well with the following shaping engines/software. Note: old Pango and Qt4 have shaping issues (with below-base ല forms and ു/ൂ forms of conjuncts, in the respective shapers), but those won’t be fixed. Any shaper other than HarfBuzz (and to a certain extent Uniscribe) is best effort only.

New releases

New releases are made available for all the fonts developed by Rachana Institute of Typography, viz.

Acknowledgements

A lot of invaluable work was done by Narayana Bhattathiri, Ashok Kumar and CV Radhakrishnan in testing and verifying the fonts with different platforms and typesetting systems.

End users who reported issues and helped with troubleshooting have also contributed heavily in shaping (pun intended) community software like RIT Malayalam open fonts.

The Prawn Nebula (Nebulosa do Camarão)

Posted by Avi Alkalay on 2024-07-05 21:49:25 UTC

This week Ricardo Kahn showed me his astronomy equipment and we held a hands-on star photography “workshop bootcamp” on the rooftop of his building in the south zone of São Paulo.

He programmed his all-high-tech telescope to track this nebula for about an hour, while the Earth moved, and take multiple photos of it in the sky. The photos are then stacked and digitally processed to increase the brightness and sharpness of the cosmic artifact.

Here is Ricardo’s description of this marvel:

The Prawn Nebula. It is located 6,000 light-years away, in the region of the constellation Scorpius. It is gigantic, with a diameter of 250 light-years. For comparison, our entire solar system spans 49 light-minutes. If it were brighter, we would see it 3× larger than a full moon in the sky. It is the nursery of hundreds of stars.

As for the equipment, it was quite sumptuous, with a special cooled camera attached for taking these photos. The telescope’s base has motors to move it into the right position. And that “right position” is determined and controlled by a Raspberry Pi computer, running Linux obviously, together with some open source software carrying astronomical data. He calibrated and controlled all of this with a tablet and an application that communicates with the Raspberry Pi and shows on screen what the telescope’s camera is seeing. The application helped us find the celestial objects that were visible at that moment (above the horizon) and feasible to photograph. He also used a very expensive optical filter attached to the telescope’s lens to reduce the light pollution of the city, which was radiant and quite beautiful all around us.

This telescope is optimized for deep space: nebulae and galaxies. He also has another, much thicker telescope that looks like a drum, built from a large set of mirrors and optimized for viewing nearby celestial bodies, such as the Moon.

Another piece of “equipment” that could not be missing was the pizza we devoured while the telescope did its work.

It was a memorable night and I will remember it forever.

Also published on LinkedIn, Facebook and Instagram.

First week of July 2024 random musings

Posted by Kevin Fenzi on 2024-07-05 18:27:09 UTC

A bit of a short week this week, as Thursday is the 4th of July holiday in the US and I am taking Friday off, but still a bunch of goings on.

koji was having some bad days on Sunday/Monday. Turns out that our block_retired script that runs at the start of every rawhide compose figured out that all the packages in epel7 were now end of life/retired, and so it started trying to untag and block all of them. Unfortunately this script is normally meant for a few packages being retired, and it wasn’t really set up to handle dealing with 15k packages at once. Also, we don’t actually want it to untag those packages; we want to keep them around, mostly for historical reasons. So, I managed to figure out what was going on and stop it, but it had untagged about 10k packages by then. Will look at cleaning this up more next week.

There was some discussion around the new ssh host key on fedorapeople.org. I had failed to announce that it had changed and a few folks (very rightly!) asked if the change was expected. I actually thought about just preserving the old host keys, but they were made 10+ years ago now, so I figured it was time to generate new ones (and prefer the newer algo also). I did then send an announcement to devel-announce about it. Out of this, docs were improved and I tried to push the idea of trusting the fedora infrastructure ssh CA. This just requires you to add the CA to your .ssh/known_hosts and it will trust host keys that are signed by it. This includes all fedora infra hosts. You can of course also use SSHFP, but that requires some ssh settings and confirming that you are using a dnssec-enabled resolver, so it’s a good deal more work. Anyhow, hopefully this host key will last us at least 10 years too.
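For reference, a CA entry in known_hosts looks roughly like the sketch below. The host pattern and key here are placeholders, not the real Fedora infra CA; grab the actual CA public key and host pattern from the infra docs:

@cert-authority *.fedoraproject.org ssh-ed25519 AAAAC3Nz...REPLACE-WITH-REAL-CA-KEY

The SSHFP route is essentially ssh -o VerifyHostKeyDNS=yes host, plus the dnssec-validating resolver mentioned above.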

There was a bunch more rhel7 vm cleanup that happened. We are still sadly not 100% done on that, but there are only a few left, almost all just internal things, blocked by various things that will hopefully unblock in the next few weeks so we can get them all dealt with too. I think the rhel7 EOL has been harder on us because of the python2->python3 move. Not that I disagree with it, but it’s just been a lot of development work to move things up and get them back in a maintained state. Really it’s our own fault, but it’s hard to prioritize things when they are working fine.

Finally, just a note about next week: I’m off Monday, not doing anything fun, but going in to have a molar pulled. So, I am likely to be a bit grumpy later in the week (I can’t even have coffee for a while, that’s going to hurt).

PHP 8.4 as Software Collection

Posted by Remi Collet on 2024-07-05 13:59:00 UTC

Version 8.4.0alpha1 has been released. It's still in development and will soon enter the stabilization phase for the developers, and the test phase for the users (see the schedule).

RPMs of this upcoming version of PHP 8.4 are available in the remi repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, CentOS, Alma, Rocky...) in a fresh new Software Collection (php84), allowing its installation beside the system version.

As I (still) strongly believe in SCL's potential to provide a simple way to allow installation of various versions simultaneously, and as I think it is useful to offer this feature to allow developers to test their applications, to allow sysadmins to prepare a migration, or simply to use this version for some specific application, I decided to create this new SCL.

I also plan to propose this new version as a Fedora 42 change (as F41 should be released a few weeks before PHP 8.4.0).

Installation :

yum install php84
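If the remi repositories are not configured on the machine yet, you generally need to install the matching remi-release package first. The URL below is an assumption based on the usual naming pattern (adjust the release number, and see the repository's configuration wizard for the exact command for your distribution):

yum install https://rpms.remirepo.net/fedora/remi-release-40.rpm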

To be noticed:

  • the SCL is independent from the system and doesn't alter it
  • this SCL is available in remi-safe repository (or remi for Fedora)
  • installation is under the /opt/remi/php84 tree, configuration under the /etc/opt/remi/php84 tree
  • the FPM service (php84-php-fpm) is available, listening on /var/opt/remi/php84/run/php-fpm/www.sock
  • the php84 command gives simple access to this new version; however, the module or scl command is still the recommended way
  • for now, the collection provides 8.4.0-alpha1, and alpha/beta/RC versions will be released in the coming weeks
  • some of the PECL extensions are already available, see the extensions status page
  • tracking issue #258 can be used to follow the work in progress on RPMs of PHP and extensions
  • the php84-syspaths package allows using it as the system's default version

Also, read other entries about SCL, especially the description of My PHP workstation.

$ module load php84
$ php --version
PHP 8.4.0alpha1 (cli) (built: Jul  2 2024 13:43:13) (NTS gcc x86_64)
Copyright (c) The PHP Group
Zend Engine v4.4.0-dev, Copyright (c) Zend Technologies
    with Zend OPcache v8.4.0alpha1, Copyright (c), by Zend Technologies

As always, your feedback is welcome on the tracking ticket; an SCL-dedicated forum is also open.

Software Collections (php84)

Manual action needed to resolve boot failure for Fedora Atomic Desktops and Fedora IoT

Posted by Fedora Magazine on 2024-07-05 14:15:00 UTC

Since the 39.20240617.0 and 40.20240617.0 updates for Atomic Desktops and the 40.20240617.0 update for IoT, systems with Secure Boot enabled may fail to boot if they were installed before Fedora Linux 40. You might see the following error:

error: ../../grub-core/kern/efi/sb.c:182:bad shim signature.
error: ../../grub-core/loader/i386/efi/linux.c:258:you need to load the kernel first.

Press any key to continue...

Workaround

In order to resolve this issue, you must first boot into the previous version of your system. It should still be functional. In order to do this, reboot your system and select the previous boot entry in the selection menu displayed on boot. Its name should be something like:

Fedora Linux 39.20240610.0 (Silverblue)  (ostree:1)

Once you have logged in, search for the terminal application for your desktop and open a new terminal window. On Fedora IoT, log in via SSH or on the console. Make sure that you are not running in a toolbox for all the commands listed on this page.

If you are running a Fedora Atomic Desktop based on Fedora 39 and have not yet updated to Fedora 40, you first need to update to the latest working Fedora 39 version with those commands:

$ sudo rpm-ostree cleanup --pending
$ sudo rpm-ostree deploy 39.20240616.0

If you are running Fedora IoT, then first update to the latest working version with this command:

$ sudo rpm-ostree cleanup --pending
$ sudo rpm-ostree deploy 40.20240614.0

Then reboot your system.

Once you are logged in again on the latest working version, proceed with the following commands:

$ sudo -i
$ cp -rp /usr/lib/ostree-boot/efi/EFI /boot/efi
$ sync

Once completed, reboot your system. You should now be able to update again, as normal, using the graphical interface or the command line:

$ sudo rpm-ostree update

Why did this happen?

On Fedora Atomic Desktops and Fedora IoT systems, the components that are part of the boot chain (Shim, GRUB) are not (yet) automatically updated alongside the rest of the system. Thus, if you installed a Fedora Atomic Desktop or a Fedora IoT system before Fedora 40, it uses old versions of the Shim and bootloader binaries to boot your system.

When Secure Boot is enabled, the EFI firmware loads Shim first. Shim is signed by the Microsoft Third Party Certificate Authority so that it can be verified on most hardware out of the box. The Shim binary includes the Fedora certificates used to verify binaries signed by Fedora. Then Shim loads GRUB, which in turn loads the Linux kernel. Both are signed by Fedora.

Until recently, the kernel binaries were signed twice, with an older key and a newer one. With the 6.9 kernel update, the kernel is no longer signed with the old key. If GRUB or Shim is old enough and does not know about the new key, the signature verification fails.
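If you are curious which certificates a given kernel image carries, the sbverify tool from the sbsigntools package can list its signatures. The path below is only an example; on ostree-based systems the kernel lives under /usr/lib/modules rather than directly in /boot:

$ sudo sbverify --list /usr/lib/modules/$(uname -r)/vmlinuz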

See the initial report in the Fedora Silverblue issue tracker.

What are we doing to prevent it from happening again?

We have known for a while that not updating the bootloader was not a satisfying situation. We have been working on enabling bootupd for Fedora Atomic Desktops and Fedora IoT. bootupd is a small application that is responsible only for bootloader updates. While initially planned for Fedora Linux 38 (!), we had to delay enabling it due to various issues and missing functionality in bootupd itself and changes needed in Anaconda.

We are hoping to enable bootupd in Fedora Linux 41, hopefully by default, which should finally resolve this situation. See the Enable bootupd for Fedora Atomic Desktops and Fedora IoT Fedora Change page.

Note that the root issue also impacts Fedora CoreOS but steps have been put in place to force a bootloader update before the 6.9 kernel update. See the tracking issue for Fedora CoreOS.

Infra and RelEng Update – Week 27 2024

Posted by Fedora Community Blog on 2024-07-05 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 01 – 05 July 2024

I&R infographic

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Linux (OL).

Updates

Community Design

CPE has a few members who work as part of the Community Design Team. This team works on anything related to design in the Fedora community.

Updates

  • Cast your vote in this fedora wallpaper poll to find F42’s inspiration!
  • Flock 2024 assets in progress! 🐤 ticket here
  • Podman Desktop 🦭
    • Reviewing experimental light mode ☀
    • Working on consistency across pages

List of new releases of apps maintained by CPE

If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on matrix.

The post Infra and RelEng Update – Week 27 2024 appeared first on Fedora Community Blog.

Fedora’s openQA Cloud Deployment

Posted by Fedora Magazine on 2024-07-05 08:00:00 UTC

Fedora benefits from a long-standing, comprehensive, and robust openQA deployment used in release validation testing and testing updates. A recent project extended Fedora’s openQA infrastructure to use containers and cloud resources. Deploying openQA in the cloud protects against physical hardware failures and simplifies the addition of resources to scale up future testing. Another benefit of the cloud deployment is that it makes it easier for anyone to run, test, and experiment with Fedora’s openQA. Are you already familiar with openQA? Try out the cloud deployment examples at: https://pagure.io/ansible-openqa-cloud.

OpenQA runs comprehensive end-to-end testing on operating system images. It loads images into a virtual machine and sends commands, clicks buttons, moves the cursor, types input, and visually compares the results with expected images. Anyone who has installed Fedora Linux can appreciate that the process takes a little while and needs a bit of manual attention. Now multiply this by the many images generated by release engineers each day; by different flavours for each image; by different architectures; plus add new updates every hour more or less, and you can understand openQA’s tagline “Life is too short for manual testing!” Here is just one example of an openQA worker testing Fedora Linux 41:

Fedora’s openQA versus upstream openQA

A good point of entry for developers wishing to extend, debug or otherwise experiment with Fedora’s openQA is to understand how Fedora’s deployment relates to the upstream project. There are two main upstream repositories:

  1. openQA which handles the web user interface, websockets, livehandler, scheduler, a PostgreSQL database and various background tasks like asset cleanup; and
  2. os-autoinst the backend responsible for running the tests and reporting their results.

Fedora relies on these upstream repositories and includes them in the openqa and os-autoinst packages respectively. When you install these packages, you can find the upstream libraries at: /usr/share/openqa/lib/OpenQA and the upstream scripts at: /usr/share/openqa/script. The new cloud deployment provides access to these libraries and scripts inside the openqa-webserver container.

Similarly, find the backend code of os-autoinst inside any instance of the openqa-worker container at: /usr/lib/os-autoinst. Here you can see the inner workings of how os-autoinst starts the QEMU virtual machines for running tests.
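Assuming the containers are running under podman with the names used in this article (your deployment may add prefixes or instance suffixes), you can peek at those paths directly:

$ podman exec -it openqa-webserver ls /usr/share/openqa/script
$ podman exec -it openqa-worker ls /usr/lib/os-autoinst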

The new openqa-database container provides access to the PostgreSQL database. Use psql to easily inspect and modify your local version of the database; for example:

# psql -U postgres -d openqa
psql (15.1)
Type "help" for help.
openqa=# \dt
List of relations
Schema | Name | Type | Owner
--------+---------------------------------------+-------+-----------
public | api_keys | table | geekotest
public | assets | table | geekotest
public | audit_events | table | geekotest
...

Since the cloud deployment allows unique worker containers to be brought up and torn down frequently and essentially without limit, far more ephemeral workers accumulate than would normally exist outside of a containerized deployment, and deleting them directly from the database can sometimes be the quickest way to clean them up.
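For example, from inside the openqa-database container you can inspect and prune the workers table with psql. The column names below are a sketch and may differ between openQA versions, so describe the table first:

# psql -U postgres -d openqa
openqa=# \d workers
openqa=# SELECT id, host, instance FROM workers;
openqa=# DELETE FROM workers WHERE id = 42;   -- an id you have confirmed is gone for good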

Fedora-specific tests

Where Fedora, and every distribution that uses openQA, is unique is in its repository of tests to run on operating system images. Fedora’s tests are located in the os-autoinst-distri-fedora repository. Although the content is unique, openQA requires that tests be located in a specific “tests” directory: /var/lib/openqa/share/tests/fedora.

Find this test repository in both the new openqa-webserver and openqa-worker containers. In the cloud deployment a small service, openqa-test-update, runs every six hours to pull any changes to the tests. If you would rather test your own fork of os-autoinst-distri-fedora, stop the test update service and pull changes from your fork instead.
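A minimal sketch of that swap, assuming the update job is a systemd unit named after the service and using a placeholder fork URL:

# Stop the periodic pull (add .timer if your deployment drives it with a timer unit)
systemctl stop openqa-test-update
# Point the checked-out tests at your fork instead
cd /var/lib/openqa/share/tests/fedora
git remote add myfork https://pagure.io/fork/<your-username>/fedora-qa/os-autoinst-distri-fedora.git   # placeholder URL
git fetch myfork
git checkout myfork/main   # branch name may differ in your fork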

Fetching and Scheduling images to test

Another unique feature of Fedora’s openQA deployment is how it schedules operating system images for testing. At the most rudimentary level, it’s possible to just manually add images to the directory where openQA and the job settings expect to find them: either /var/lib/openqa/share/factory/iso or /var/lib/openqa/share/factory/hdd. For example, inside the openqa-webserver container, try fetching an ISO image by running:

curl https://kojipkgs.fedoraproject.org/compose/rawhide/Fedora-Rawhide-20240612.n.0/compose/Server/x86_64/iso/Fedora-Server-netinst-x86_64-Rawhide-20240612.n.0.iso --output /var/lib/openqa/share/factory/iso/Fedora-Server-netinst-x86_64-Rawhide-20240612.n.0.iso

If this particular ISO isn’t available, get a new link from a current test in production by clicking through the colored dot to its Settings tab.

A successful openQA test appears as a single row with two columns: the first column shows the test’s name, “base_package_install_remove”, and the second column shows a green dot.

Once you have downloaded an ISO and placed it in its correct directory, schedule a test with the general upstream tool openqa-cli:

openqa-cli api -X POST isos \
    ISO=Fedora-Server-netinst-x86_64-Rawhide-20240612.n.0.iso \
    DISTRI=fedora \
    VERSION=Rawhide \
    CURRREL=40 \
    FLAVOR=Server-boot-iso \
    ARCH=x86_64 \
    BUILD=Fedora-Rawhide-20240612.n.0 \
    --apikey 1234567890ABCDEF --apisecret 1234567890ABCDEF \
    TEST=install_default

Luckily, we don’t have to do this hard work to test every image. Instead, Fedora uses its own customized command-line tool, fedora_openqa, to figure out all the necessary settings before scheduling a test. Even better, fedora_openqa listens to messages from release engineering and automatically schedules tests in response. Since the messages are public, no particular access keys are required for testing.
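As a rough illustration only (the subcommand and flag below are assumptions about fedora_openqa’s interface, so check fedora-openqa --help in your deployment before relying on them), scheduling tests for a whole compose by hand might look like:

# Assumed interface: schedule all tests for one compose location
fedora-openqa compose --force \
    https://kojipkgs.fedoraproject.org/compose/rawhide/Fedora-Rawhide-20240612.n.0/compose/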

In the cloud deployment, a special container, openqa-dispatcher, runs fedora_openqa and listens for these messages. Detailed logs of all the messages consumed are available in this container in the /fedora-messaging-logs directory.
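For example, assuming podman again and that the logs are plain files (the exact file names will differ per deployment):

# List the consumed-message logs, then follow one of them (file name is a placeholder)
podman exec openqa-dispatcher ls /fedora-messaging-logs
podman exec openqa-dispatcher tail -f /fedora-messaging-logs/<logfile>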

Testing Fedora Updates

A more complex scenario to handle is testing updates which need to be applied to existing operating system images. Fedora handles this challenge with its customized createhdds application that generates base disk images using the host’s kernel image. Save the disk images in /var/lib/openqa/share/factory/hdd/fixed to prevent openQA’s asset cleanup minions from deleting the images. openQA then applies incoming updates to these base disk images for testing.
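For example, to park a base image where the cleanup jobs will leave it alone (the file name below is made up for illustration):

mkdir -p /var/lib/openqa/share/factory/hdd/fixed
mv disk_f40_minimal.qcow2 /var/lib/openqa/share/factory/hdd/fixed/   # hypothetical image name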

One difficulty of the cloud deployment project was that some cloud instances, being virtual machines themselves, could not create further “nested” virtual machines using /dev/kvm. Without nested virtualization, it’s not possible to run the openqa-worker or createhdds efficiently. Make sure that /dev/kvm is available on any cloud instance you are using. For AWS this meant we used EC2 instances in the “metal” family (e.g. c5n.metal and c6in.metal) for workers and the createhdds application.
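A quick sanity check before settling on an instance type:

# Fail fast if the host cannot hand /dev/kvm to the worker containers
test -c /dev/kvm && echo "KVM available" || echo "no /dev/kvm on this instance"
# On Intel hosts, kvm_intel reports whether nested virtualization is enabled (kvm_amd on AMD)
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null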

The cloud deployment provides a service openqa-createhdds to build these disk images once per week, in a process that may take several hours and use around 60 GB of the host’s disk space. As a more casual approach, you can disable the openqa-createhdds service and many of the openQA tests will continue to pass.
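Assuming the weekly build is driven by a systemd unit of the same name (it may be a timer in your deployment), disabling it is a one-liner:

systemctl disable --now openqa-createhdds   # append .timer if the weekly run is timer-driven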

Conclusion

Before this project, we could only run openQA using farms of dedicated physical machines in a Fedora data center. Those machines have to be maintained and kept up to date, and increasing our capacity means acquiring new machines and finding the space to house them. This project has proved that we can potentially break the tight link to specific clusters of hardware and make our openQA deployments much more flexible — allowing us to scale up or down and maybe add new instances for different purposes much more easily than we could before. It also makes it much easier for individuals to deploy their own openQA instance for their own purposes.

In the future, we will consider migrating the official Fedora openQA instances to use this deployment method, starting with the staging instance. Eventually, we may retire the dedicated openQA hardware clusters entirely. Special thanks to Meta for sponsoring this work and to Collabora for implementing it. If you’re interested in helping out, you can drop by the Fedora Quality room on Fedora Chat and say hi.

PHP version 8.2.21 and 8.3.9

Posted by Remi Collet on 2024-07-05 05:17:00 UTC

RPMs of PHP version 8.3.9 are available in the remi-modular repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.2.21 are available in the remi-modular repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

The packages are available for x86_64 and aarch64.

There is no security fix this month, so no update for version 8.1.29.

PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

Replacement of default PHP by version 8.2 installation (simplest):

dnf module switch-to php:remi-8.2/common

Parallel installation of version 8.2 as Software Collection

yum install php82
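Either way, you can check what you ended up with afterwards. The php82/php83 wrapper commands are how the Software Collection builds are usually exposed, but treat the exact command names as an assumption:

php -v      # default PHP after a module switch
php83 -v    # parallel Software Collection build, if installed
php82 -v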

And soon in the official updates:

To be noticed:

  • EL-9 RPMs are built using RHEL-9.4
  • EL-8 RPMs are built using RHEL-8.10
  • EL-7 repository is closed
  • intl extension now uses libicu73 (version 73.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.9, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 21.13 on x86_64, 19.23 on aarch64
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php81 / php82 / php83)

PHP version 8.1.28, 8.2.18 and 8.3.6

Posted by Remi Collet on 2024-04-12 15:59:00 UTC

RPMs of PHP version 8.3.6 are available in the remi-modular repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php83 repository for EL 7.

RPMs of PHP version 8.2.18 are available in the remi-modular repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php82 repository for EL 7.

RPMs of PHP version 8.1.28 are available in the remi-modular repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php81 repository for EL 7.

The Fedora 39, 40, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

These versions fix 3 security bugs (CVE-2024-2756, CVE-2024-3096 and CVE-2024-2757), so updating is strongly recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

or, the old EL-7 way:

yum-config-manager --enable remi-php83
yum update php\*

Parallel installation of version 8.3 as Software Collection

yum install php83

Replacement of default PHP by version 8.2 installation (simplest):

dnf module switch-to php:remi-8.2/common

or, the old EL-7 way:

yum-config-manager --enable remi-php82
yum update

Parallel installation of version 8.2 as Software Collection

yum install php82

And soon in the official updates:

To be noticed:

  • EL-9 RPMs are built using RHEL-9.3
  • EL-8 RPMs are built using RHEL-8.9
  • EL-7 RPMs are built using RHEL-7.9
  • intl extension now uses libicu73 (version 73.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.9, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 21.13 on x86_64, 19.19 on aarch64
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php81 / php82 / php83)

PHP version 8.1.29, 8.2.20 and 8.3.8

Posted by Remi Collet on 2024-06-06 15:10:00 UTC

RPMs of PHP version 8.3.8 are available in the remi-modular repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php83 repository for EL 7.

RPMs of PHP version 8.2.20 are available in the remi-modular repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php82 repository for EL 7.

RPMs of PHP version 8.1.29 are available in the remi-modular repository for Fedora ≥ 38 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php81 repository for EL 7.

The Fedora 39, 40, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

These versions fix 1 security bug (CVE-2024-5458), so updating is strongly recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

Replacement of default PHP by version 8.2 installation (simplest):

dnf module switch-to php:remi-8.2/common

Parallel installation of version 8.2 as Software Collection

yum install php82

And soon in the official updates:

To be noticed:

  • EL-9 RPMs are built using RHEL-9.4
  • EL-8 RPMs are built using RHEL-8.10
  • EL-7 RPMs are built using RHEL-7.9
  • intl extension now uses libicu73 (version 73.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.9, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 21.13 on x86_64, 19.23 on aarch64
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php81 / php82 / php83)

“Finished” and “no longer developed” aren’t the same

Posted by Ben Cotton on 2024-07-03 12:00:00 UTC

Later on in the thread that sparked “Combinatorial releases won’t help,” Kurt Seifried said

We used to have finished code. It shipped on a CD, you installed it, used it for a year, and then you got a new CD at some point later. We don’t live in that world anymore. Why are we using the same software processes from 30 years ago?

Kurt Seifried

That’s not really the word to use, though. The software wasn’t finished, it just had to ship at some point so they stopped working on it. The software was delivered on a CD (or floppy disks or tape or …) and it wasn’t trivial to ship updates. So there had to be a “this is good enough” point in order to put the bits on media. Otherwise, you’d end up with Duke Nukem Forever.

So what does it mean for software to be finished? Software is finished when it reliably does what it’s intended to. Many basic Unix commands are like this. ls doesn’t need any new features. cal is pretty solid at this point. Of course, there may still be bugs to fix and maybe the occasional new feature to implement (for example: adding support for SELinux contexts to ls), but by and large the work is mostly done. You just need someone to keep the proverbial lights on.

That’s a much different case from “this is enough to call version 1. Let’s ship it and start working on version 2.” It’s also different from abandoned software. That’s where development stops because no one is around to do it anymore.

Which variation your project ends up in will have a big impact on how you manage it. If the project is abandoned, all you have to do is turn the lights off on your way out. (Which you should read to mean “mark it as abandoned in the README, etc”.) If it’s the “we need to ship eventually” sort, then I have a whole book for you (particularly the chapter “Ship the Release”, which covers how you decide it’s good enough to ship). Ultimately, though, the “it does all it needs to do” option is the real goal. That’s what success looks like.

How do you determine if the project does all it needs to do? The simplest way is to define what you want it to do. What contribution does your project make to the world? Which things are great, but best left to another project to handle? If you’ve clearly defined your project’s vision and mission, these questions become easier to answer.

This post’s featured photo by chris robert on Unsplash.

The post “Finished” and “no longer developed” aren’t the same appeared first on Duck Alignment Academy.

OpenSSH Vulnerability: regreSSHion (CVE-2024-6387), Remote Code Execution (RCE)

Posted by Andreas Haerter on 2024-07-01 11:50:00 UTC

Long story short: Update your OpenSSH packages today, no matter what. The original Security Advisory by Qualys is worth a read and not too complicated to follow if you are interested in a bit of background information. A bit of a problem might be that this bug dropped literally the day after CentOS 8 and FreeBSD 13.2 went out of support (even though both seem to be unaffected).

Mitigation

If there are old boxes, make sure you apply proper firewalling to mitigate the issue. It seems that an exploit needs, on average, a lot of login attempts to succeed without further optimizations. This might buy you a bit of time to patch everything today, even if the box is exposed to the public internet. Additionally, the exploit has so far only been demonstrated on 32-bit systems. There is a high probability that a yet-to-come exploit for 64-bit systems will be slower (i.e. more login attempts needed on average for a successful hack).
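With firewalld, for example, restricting SSH to a trusted management network can look like this (203.0.113.0/24 is a documentation subnet; substitute your own range and double-check you are not cutting off your own access):

firewall-cmd --permanent --remove-service=ssh
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.0/24" service name="ssh" accept'
firewall-cmd --reload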

If you cannot upgrade your system yet, the FreeBSD security advisory states that setting LoginGraceTime to 0 might help:

If sshd(8) cannot be updated, this signal handler race condition can be mitigated by setting LoginGraceTime to 0 in /etc/ssh/sshd_config and restarting sshd(8). This makes sshd(8) vulnerable to a denial of service (the exhaustion of all MaxStartups connections), but makes it safe from the remote code execution presented in this advisory.
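In practice that is a single directive plus a restart; validate the config first so you do not lock yourself out:

# In /etc/ssh/sshd_config, set (or add) the directive:
#     LoginGraceTime 0
# then validate and restart:
sshd -t && systemctl restart sshd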

Systems known to be affected

The list is probably incomplete and will be updated as more confirmations are obtained. Do not forget to restart the sshd service (for example, it seems that there are otherwise issues with key exchange on Arch Linux).

Probably unaffected systems

Even if your systems do not seem to be affected in general: Update. Now. There will be a lot of research into this vulnerability and maybe new ways to exploit it will be discovered.
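On Fedora and Enterprise Linux derivatives that boils down to something like the following (Debian and Ubuntu users would reach for apt instead):

dnf upgrade --refresh 'openssh*'   # Debian/Ubuntu: apt update && apt install --only-upgrade openssh-server
rpm -q openssh-server              # confirm the patched package version is installed
systemctl restart sshd             # make sure the running daemon uses the patched binary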

Currently not affected are:

  • Red Hat Enterprise Linux (RHEL) 6, 7 and 8¹ (and therefore probably all related CentOS versions)
  • Debian 11 Bullseye²
  • FreeBSD³
  • OpenBSD⁴

Additional links and reports:


  1. From the Red Hat CVE-2024-6387 advisory: “This flaw doesn’t affect the OpenSSH versions as shipped with Red Hat Enterprise Linux 6, 7 and 8 as the vulnerability was introduced by a regression in upstream on OpenSSH 8.5p1 which is newer then shipped with the mentioned Red Hat Enterprise Linux versions.” ↩︎

  2. See https://security-tracker.debian.org/tracker/CVE-2024-6387↩︎

  3. See https://www.freebsd.org/security/advisories/FreeBSD-SA-24:04.openssh.asc. Unclear situation but not glibc based and the syslog code looks as if it doesn’t do anything that corrupts the state if called from a signal handler. Patch in doubt. ↩︎

  4. From the Qualsys advisory: “OpenBSD is notably not vulnerable, because its SIGALRM handler calls syslog_r(), an async-signal-safer version of syslog() that was invented by OpenBSD in 2001.” Additionally, not glibc based. ↩︎

Let’s Publish a Knowledge Base from Ask Fedora

Posted by Fedora Magazine on 2024-07-03 08:00:00 UTC

“We often get questions…”

Just like bug reports or support tickets, a user Q&A support forum is a gold mine for understanding users and contributors.

Newcomers get help from contributors to solve problems in user Q&A support forums like Ask Fedora. Some of the solutions work for many situations where users can apply the solutions without modifications. To our surprise, there are repeated questions for similar or related issues every week.

Content reuse and rewriting

This is where content reuse comes in. The idea is to select top posts in Ask Fedora and consolidate them for How-To and FAQ-style user documentation. Why? Content reuse and rewriting can help make maintenance and enhancement of articles easier than in forum posts. With this in mind, I gave a talk about “Documenting Top Answers in a User Support Forum” at DevConfCZ in June 2024.

Call for Participation: Reviewers and Writers

We are looking for anyone interested in this effort to join us.

To kick off this effort, I analyzed forum posts by pulling a list of tags related to multimedia. Next, I grouped them by the top answers on multimedia.

  • How to debug sound problems
  • Audio input not recognized
  • Use of multiple Bluetooth devices
  • PulseAudio and PipeWire troubleshooting

The Docs team has contributor guides for submitting your draft to the Quick Docs repository. If you need help with the process, please stop by the Docs room on Matrix for more informal communication. Thank you!