/rss20.xml">

Fedora People

Arriving At Flock To Fedora 2025

Posted by Akashdeep Dhar on 2025-07-05 18:30:43 UTC

On 03rd June 2025, it was that time of the year again when we departed for the annual Fedora Project community conference, Flock To Fedora 2025. Waking up at 05:00AM Indian Standard Time, I was soon on my way to the Netaji Subhash Chandra Bose International Airport (CCU) after some quick breakfast bites from my aunt. It took roughly twenty minutes for me to make it to the airport with my luggage, where I met up with Sumantro Mukherjee, who was waiting for me at around 06:30AM Indian Standard Time. We got ourselves checked in at the Emirates counter for the flight EK0571 to Dubai International Airport (DXB) and headed towards the lightly populated immigration queue. Weirdly enough, I was held back for some superficial questioning at the security queue, but it was nothing to worry about as we were quite ahead of schedule for the flight.

Our Emirates flight EK0571 getting ready for departure

We were at the departure gates by 07:30AM Indian Standard Time, which was a little under three hours from the estimated departure time for the Emirates EK0571 flight. After connecting with my friends and family about having made it to the departure gates, Sumantro and I had conversations about a bunch of things. Starting from community affairs and agile implementation, it was fun catching up with Sumantro after FOSDEM 2025. The time until the boarding passed quite quickly, and we soon made it inside the flight, capturing the painstakingly booked window seats of Zone E, with me seated at 29K and Sumantro seated at 30K. The first flight from Kolkata (CCU) to Dubai (DXB) was expected to be around five hours long, so I decided to spend time watching a couple of movies, namely Unthinkable (2010) and Despicable Me 4 (2024) from the seat display.

Bountiful meals for the in-flight lunch satisfaction

At around 11:45AM Indian Standard Time, the breakfast was served, and I decided to catch some sleep after I was done with the food. The seemingly short sleep did help reduce the apparent duration of the flight because at around 04:00PM Gulf Standard Time, the flight began its descent into the Dubai International Airport (DXB). As our next flight, Emirates EK0125 towards Airport Flughafen Wien-Schwechat (VIE), was departing from the same terminal, we had some time on our hands to traverse through the gates. Sumantro and I had to catch a subway that would take us from the A Gates to the B Gates after we passed through a crowded security queue. In about thirty minutes, we found ourselves at the departure Gate B20, from where our flight was designated to depart. We went around browsing the Duty Free stores as we waited for the boarding announcement to be made.

If flights, buses and taxis were not enough - Let us add a subway into the mix too

Of course, we kept ourselves from purchasing goodies from the stores, and the boarding announcement that was made some time later helped fortify our resolve. As Sumantro was planning on traveling ahead with his partner from DevConf.CZ 2025 and I was planning on doing loads of shopping in Prague, it made little sense to get encumbered there. Just like the previous flight, we were seated at the window seats, with me on 39K and Sumantro on 40K on this flight. An odd event ended up changing that arrangement as the passengers traveling beside Sumantro wanted us to switch seats for the window seats that we had coordinated and selected well in advance from the web check-in. In the end, I was seated beside Sumantro on an aisle seat, 40I, while Sumantro retained his seat, 40K, as the two belligerent passengers went somewhere else after some arrangements were made.

A LEGO paradise for those who want to visit one while travelling

While not dwelling much on this uncomfortable encounter with some entitled passengers, I decided to watch Fast X (2023) from the in-flight entertainment system as the flight took off. Sumantro decided to catch up on some assignments and finish preparing his slide decks, as this was supposed to be a longer flight of around six hours. We were looking into the renewed Test Days application, which was recently deployed in the Fedora Infrastructure, and we ended up finding various bugs and oversights with the production deployment. I decided to watch Inglourious Basterds (2009) after having lunch, but I found myself dozing off every now and then. I decided to use the time to catch up on some sleep, as both Sumantro and I had to travel on an overnight bus from Airport Flughafen Wien-Schwechat (VIE) to Prague Central Station and finish the movie at a later time.

Bountiful meals for the in-flight dinner satisfaction

The next time I opened my eyes - I was gifted with the wonderful vistas of the country skyline as the flight was slowly descending into Airport Flughafen Wien-Schwechat (VIE). It was around 08:30PM Central European Summer Time then, and the sun was barely setting in summertime Vienna. We got off the flight at around 09:00PM Central European Summer Time and made our way into the crowded immigration queue before picking up our checked-in luggage from the belts. I wished that our onward journey had ended there, as the time zone shift and the lack of sleep were taking a toll on my body. The one silver lining that kept us going was that the bus taking us from Airport Flughafen Wien-Schwechat (VIE) to Prague Central Station departed from the arrival gates at around 11:00PM Central European Summer Time, so we did not have to rush anywhere anymore.

Glimpses from outside the flight window near Vienna

After sharing some conversations with friends and family back at home and from the Fedora Project, we kicked around at the arrival gates. The waiting was most certainly easier said than done, but I would much rather be in a situation where I was ahead of the schedule than one where I was running behind. Sumantro and I discussed just how long we would have been active by the time we ended up getting to the hotel, and the calculation did help keep us from getting bored. At around 10:50PM Central European Summer Time, a bus #N60 operated by Flixbus arrived at Station #04 for the pickup. After getting our passports checked before boarding the bus, we decided to keep ourselves to the bottom deck of the double-decker bus. It would have been fun visiting the top deck, but at the twenty-fourth hour of being active, all we wanted was to get some sleep at the hotel.

Felt both quieter and busier at the same time in this airport

We had a couple of stops before making it to Prague Central Station, so we were seated near the gates for convenience. While the evening started off with some pretty mild temperatures and normal humidity, the temperature started falling and the humidity started rising as the night grew darker. I knew for a fact that I would doze off as soon as I found a soft seat to place myself on, so I decided to schedule some alarms for 03:30AM Central European Summer Time, which was still over three hours away from then. It was just as important to schedule multiple alarms in the rare occurrence of one not being enough, and the last thing that we wanted to do then was end up in Berlin, where the bus was actually headed. We soon found ourselves at our stop after a combination of looking into the darkness from the window and failing miserably to catch some well deserved slumber.

Some place we kicked around for a couple of hours

Prague Central Station welcomed us with 14 degrees Celsius and 75% humidity on the early morning of 04th June 2025. Sumantro booked an Uber for us, and after an uneventful yet swift fifteen minutes, we found ourselves at the entrance of the Ibis Praha Mala Strana hotel. Thankfully, we had the reservation done from the day before, so we could easily find ourselves a bed to rest on and not wait until the scheduled check-in time of 03:00PM Central European Summer Time. Sumantro had some issues with the inclusion of breakfast in his booking, but we decided that it was for the best that he took it up with Julia Bley the next day. Thanks to the Red Hat Corporate Card that we were provided with weeks before the commencement of our journey, Sumantro and I were able to retire to rooms #239 and #225 at around 04:00AM Central European Summer Time, ending the onward journey.

Splitting Taskwarrior tasks to sub-tasks

Posted by Ankur Sinha on 2025-07-05 12:11:35 UTC
Logo for Taskwarrior

A feature that I often miss in Taskwarrior (which I use for managing my tasks with the Getting Things Done method) is the ability to split tasks into sub-tasks.

A common use case, for example, is when I add a research paper that I want to read to my task list. It's usually added as "Read <title of research paper>", with the URL or the file path as an annotation. However, when I do get down to reading it, I want to break it down into smaller, manageable tasks that I can do over a few days, such as "Read introduction" and "Read results". This applies to lots of other tasks too, which turn into projects with sub-tasks when I finally do get down to working on them.

The way to do it is to create new tasks for each of these, and then replace the original task with them. It is also a workflow that can be easily scripted so that one doesn't have to manually create the tasks and copy over annotations and so on.

Here is a script I wrote:

#!/usr/bin/env python3
"""
Split a taskwarrior task into sub-tasks

File: task-split.py

Copyright 2025 Ankur Sinha
Author: Ankur Sinha <sanjay DOT ankur AT gmail DOT com>
"""


import typing
import typer
import shlex
import subprocess
import json


import logging


logging.basicConfig(level=logging.NOTSET)
logger = logging.getLogger("task-split")
logger.setLevel(logging.INFO)
logger.propagate = False

formatter = logging.Formatter("%(name)s (%(levelname)s): %(message)s")
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
handler.setFormatter(formatter)

logger.addHandler(handler)


def split(src_task: int, new_project: str, sub_tasks: typing.List[str],
          dry_run: bool = True) -> None:
    """Split task into new sub-tasks

    For each provided sub_tasks string, a new task is created using the string
    as description in the provided new_project. Annotations from the provided
    src_task are copied over and the src_task is removed.

    If dry_run is enabled (default), the src_task will be obtained but not
    processed.

    :param src_task: id of task to split
    :type src_task: int
    :param sub_tasks: list of sub-tasks to create
    :type sub_tasks: list(str)
    :returns: None

    """
    # Always get info on the task
    get_task_command = f"task {src_task} export"
    logger.info(get_task_command)
    ret = subprocess.run(get_task_command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    if ret.returncode == 0:
        task_stdout = ret.stdout.decode(encoding="utf-8")
        task_json = (json.loads(task_stdout)[0])
        logger.info(task_json)
        tags = task_json.get('tags', [])
        priority = task_json.get('priority')
        due = task_json.get('due')
        estimate = task_json.get('estimate')
        impact = task_json.get('impact')
        annotations = task_json.get('annotations', [])
        description = task_json.get('description')
        uuid = task_json.get('uuid')

        for sub_task in sub_tasks:
            new_task_command = f"task add project:{new_project} tags:{','.join(tags)} priority:{priority} due:{due} impact:{impact} estimate:{estimate} '{sub_task}'"
            logger.info(new_task_command)
            if not dry_run:
                # shlex.split keeps the quoted description together as a single argument
                ret = subprocess.run(shlex.split(new_task_command))
                if ret.returncode != 0:
                    logger.error(f"Could not create sub-task: {sub_task}")
                    continue

            # annotate the newly created task with the original description and
            # its annotations; in dry-run mode the commands are only logged
            annotate_task_command = f"task +LATEST annotate '{description}'"
            logger.info(annotate_task_command)
            if not dry_run:
                ret = subprocess.run(shlex.split(annotate_task_command))

            for annotation in annotations:
                annotation_description = annotation['description']
                annotate_task_command = f"task +LATEST annotate '{annotation_description}'"
                logger.info(annotate_task_command)
                if not dry_run:
                    ret = subprocess.run(shlex.split(annotate_task_command))

        mark_original_as_done_command = f"task uuid:{uuid} done"
        logger.info(mark_original_as_done_command)
        if not dry_run:
            ret = subprocess.run(mark_original_as_done_command.split())


if __name__ == "__main__":
    typer.run(split)

It uses typer to provide command line features:

task-split --help

Usage: task-split [OPTIONS] SRC_TASK NEW_PROJECT SUB_TASKS...

Split task into new sub-tasks

Arguments
*    src_task         INTEGER       [default: None]
*    new_project      TEXT          [default: None]
*    sub_tasks        SUB_TASKS...  [default: None]

Options
--dry-run    --no-dry-run      [default: dry-run]
--help                         Show this message and exit.

So, if one has a task "Put up shelves" with ID 800, it can now be broken into a number of smaller tasks:

task-split 800 "personal.shelves" "Buy shelves" "Buy drill" "Buy tools"

This will add the new tasks to the "personal.shelves" project, and copy over metadata from the original task, such as annotations, priority, due date and other user defined attributes. It runs in "dry-run" mode by default to give me a chance to double-check the commands/tasks. To carry out the operations, pass the --no-dry-run flag to the script.
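For instance, once the dry-run output looks right, the same split can be carried out by re-running the command with the flag:

task-split --no-dry-run 800 "personal.shelves" "Buy shelves" "Buy drill" "Buy tools"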

The script is heavily based on my personal workflow, but can easily be tweaked. It lives here on GitHub and you are welcome to modify it to suit your own workflow.

Please remember to make it executable and put it in your PATH to be able to run the command on your terminal, and do remember to install typer. On Fedora, this would be sudo dnf install python3-typer.

Datacenter Move Complete

Posted by Fedora Infrastructure Status on 2025-07-04 18:00:00 UTC

The Datacenter move has been completed. Almost all services are back online and processing. Followups and known minor issues are being tracked in the above ticket.

If you notice anything amiss, please use our usual issue reporting path:

https://docs.fedoraproject.org/en-US/infra/day_to_day_fedora/

Thanks everyone for your support …

Recovering a FP2 which gives "flash write failure" errors

Posted by Hans de Goede on 2025-07-04 16:14:48 UTC
This blog post describes my successful OS re-install on a Fairphone 2 which was giving "flash write failure" errors when flashing it with fastboot via the flash_FP2_factory.sh script. I'm writing down my recovery steps in case they are useful for anyone else.

I believe that this is caused by the bootloader code which implements fastboot not having the ability to retry recoverable eMMC errors. It is still possible to write the eMMC from Linux which can retry these errors.

So we can recover by directly fastboot-ing a recovery.img and then flashing things over adb.
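In rough terms, the sequence looks something like the sketch below; the image names and the by-name partition path are only illustrative, as the exact names differ per device and per what is being restored:

# boot (not flash) a recovery image over fastboot, so nothing is written to eMMC yet
$ fastboot boot recovery.img

# once the recovery is up, copy an image over adb and write it from Linux,
# which can retry the recoverable eMMC errors (paths are illustrative only)
$ adb push boot.img /tmp/boot.img
$ adb shell dd if=/tmp/boot.img of=/dev/block/bootdevice/by-name/boot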

See step by step instructions...


Infra and RelEng Update – Week 27, 2025

Posted by Fedora Community Blog on 2025-07-04 11:55:06 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 30 June – 04 July 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

Fedora Data Center Move: “It’s Move Time!” and Successful Progress!

This week was “move time” for the Fedora Data Center migration from IAD2 to RDU3, and thanks to the collective effort of the entire team, it’s been a significant success! We officially closed off the IAD2 datacenter, with core applications, databases, and the build pipeline successfully migrated to RDU3. This involved meticulously scaling down IAD2 OpenShift apps, migrating critical databases, and updating DNS, followed by the deployment and activation of numerous OpenShift applications in RDU3. While challenges arose, especially with networking and various service configurations, our dedicated team worked tirelessly to address them, ensuring most services are now operational in the new environment. We’ll continue validating and refining everything, but we’re thrilled with the progress made in establishing Fedora’s new home!

If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on Matrix.

The post Infra and RelEng Update – Week 27, 2025 appeared first on Fedora Community Blog.

🛡️ PHP version 8.1.33, 8.2.29, 8.3.23 and 8.4.10

Posted by Remi Collet on 2025-07-04 04:49:00 UTC

RPMs of PHP version 8.4.10 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.23 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.2.29 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.1.33 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

⚠️ PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

🛡️ These Versions fix 3 security bugs (CVE-2025-1220, CVE-2025-1735, CVE-2025-6491), so the update is strongly recommended.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noticed:

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.8 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

🎲 PHP 8.5 as Software Collection

Posted by Remi Collet on 2025-07-04 06:14:00 UTC

Version 8.5.0alpha1 has been released. It is still in development and will soon enter the stabilization phase for the developers, and the test phase for the users (see the schedule).

RPMs of this upcoming version of PHP 8.5 are available in the remi repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, CentOS, Alma, Rocky...) in a fresh new Software Collection (php85), allowing its installation beside the system version.

As I (still) strongly believe in SCL's potential to provide a simple way to allow installation of various versions simultaneously, and as I think it is useful to offer this feature to allow developers to test their applications, to allow sysadmins to prepare a migration, or simply to use this version for some specific application, I decided to create this new SCL.

I also plan to propose this new version as a Fedora 44 change (as F43 should be released a few weeks before PHP 8.5.0).

Installation :

yum install php85

⚠️ To be noticed:

  • the SCL is independent from the system and doesn't alter it
  • this SCL is available in remi-safe repository (or remi for Fedora)
  • installation is under the /opt/remi/php85 tree, configuration under the /etc/opt/remi/php85 tree
  • the FPM service (php85-php-fpm) is available, listening on /var/opt/remi/php85/run/php-fpm/www.sock
  • the php85 command gives simple access to this new version, however, the module or scl command is still the recommended way.
  • for now, the collection provides 8.5.0-alpha1, and alpha/beta/RC versions will be released in the next weeks
  • some of the PECL extensions are already available, see the extensions status page
  • tracking issue #307 can be used to follow the work in progress on RPMS of PHP and extensions
  • the php85-syspaths package allows to use it as the system's default version

ℹ️ Also, read other entries about SCL especially the description of My PHP workstation.

$ module load php85
$ php --version
PHP 8.5.0alpha1 (cli) (built: Jul  1 2025 21:58:05) (NTS gcc x86_64)
Copyright (c) The PHP Group
Built by Remi's RPM repository  #StandWithUkraine
Zend Engine v4.5.0-dev, Copyright (c) Zend Technologies
    with Zend OPcache v8.5.0alpha1, Copyright (c), by Zend Technologies

As always, your feedback is welcome on the tracking ticket.

Software Collections (php85)

🧱 Building better initramfs: A deep dive into dracut on Fedora & RHEL

Posted by Fedora Magazine on 2025-07-04 08:00:00 UTC

Understanding how to use dracut is critical for kernel upgrades, troubleshooting boot issues, disk migration, encryption, and even kernel debugging.

🚀 Introduction: What is dracut?

dracut is a powerful tool used in Fedora, RHEL, and other distributions to create and manage initramfs images—the initial RAM filesystem used during system boot. Unlike older tools like mkinitrd, dracut uses a modular approach, allowing you to build minimal or specialized initramfs tailored to your system.

📦 Installing dracut (if not already available)

dracut comes pre-installed in Fedora and RHEL. If it is missing, install it with:

$ sudo dnf install dracut

Verify the version:

$ dracut --version

📂 Basic usage

📌 Regenerate the current initramfs

$ sudo dracut --force

This regenerates the initramfs for the currently running kernel.

📌 Generate initramfs for a specific kernel

$ sudo dracut --force /boot/initramfs-$(uname -r).img $(uname -r)

Or specify the kernel version manually:

$ sudo dracut --force /boot/initramfs-5.14.0-327.el9.x86_64.img 5.14.0-327.el9.x86_64

🧠 Understanding key dracut options (with examples)

--force

Force regeneration even if the file already exists:

$ sudo dracut --force

--kver <kernel-version>

Generate initramfs for a specific kernel:

$ sudo dracut --force --kver 5.14.0-327.el9.x86_64

–add <module> / –omit <module>

Include or exclude specific modules (e.g., lvm, crypt, network).

Include LVM module only:

$ sudo dracut --force --add lvm

Omit network module:

$ sudo dracut --force --omit network

--no-hostonly

Build a generic initramfs that boots on any compatible machine:

$ sudo dracut --force --no-hostonly

--hostonly

Create a host-specific image for minimal size:

$ sudo dracut --force --hostonly

--print-cmdline

Print the kernel command-line parameters that dracut would generate for booting the current system:

$ dracut --print-cmdline

--list-modules

List all available dracut modules:

$ dracut --list-modules

--add-drivers "driver1 driver2"

Include specific drivers:

$ sudo dracut --add-drivers "nvme ahci" --force

🧪 Test cases and real-world scenarios

1. LVM root disk fails to boot after migration

$ sudo dracut --force --add lvm --hostonly

2. Initramfs too large

Shrink it by omitting unused modules:

$ sudo dracut --force --omit network --omit plymouth
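A quick way to check whether the omissions paid off is to compare the image size before and after regenerating it (the path assumes the default /boot layout):

$ ls -lh /boot/initramfs-$(uname -r).img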

3. Generic initramfs for provisioning

$ sudo dracut --force --no-hostonly --add network --add nfs

4. Rebuild initramfs for rollback kernel

$ sudo dracut --force /boot/initramfs-5.14.0-362.el9.x86_64.img 5.14.0-362.el9.x86_64

🪛 Advanced use: Debugging and analysis

Enable verbose output:

$ sudo dracut -v --force

Enter the dracut shell if boot fails:

Use rd.break in the GRUB kernel line.
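For a one-off debugging session, the option can be appended at the GRUB menu (press e on the boot entry); to persist it across reboots while debugging, grubby can be used, assuming it is installed as it is by default on Fedora and RHEL:

# appended to the kernel command line for a single boot:
rd.break=pre-mount rd.shell

# or added persistently while debugging:
$ sudo grubby --update-kernel=ALL --args="rd.break=pre-mount"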

📖 Where is dracut configuration stored?

There are two locations where configuration settings may be placed.

The global settings location is at:

/etc/dracut.conf

and the drop-in location is at:

/etc/dracut.conf.d/*.conf

Example using the drop-in location:

$ cat /etc/dracut.conf.d/custom.conf

The contents might appear as follows for omitting and adding modules:

omit_dracutmodules+=" plymouth network "
add_dracutmodules+=" crypt lvm "

⚠ Note: Always include a space at the beginning and end of the value when using += in these configuration files. These files are sourced as Bash scripts, so

add_dracutmodules+=" crypt lvm "
keeps the module names separated when several configuration files append to the same variable. Without the spaces, adjacent values could run together (e.g., mod2mod3) and cause module loading failures.
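Keep in mind that editing these files does not change the existing image; regenerate it for the settings to take effect:

$ sudo dracut --force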

🧠 Deep dive: /usr/lib/dracut/modules.d/ – the heart of dracut

The directory /usr/lib/dracut/modules.d includes all module definitions. Each module directory contains:

  • A module-setup.sh script
  • Supporting scripts and binaries
  • Udev rules, hooks, and configs

List the modules using the following command:

$ ls /usr/lib/dracut/modules.d/

Example output:

01fips/ 30crypt/ 45ifcfg/ 90lvm/ 95resume/
02systemd/ 40network/ 50drm/ 91crypt-gpg/ 98selinux/

Inspect specific module content (module-setup.sh, in this example) using this:

$ cat /usr/lib/dracut/modules.d/90lvm/module-setup.sh

You can also create custom modules at this location for specialized logic.
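As a minimal sketch, assuming a hypothetical module directory named 99hello and a hook script shipped alongside it, a module-setup.sh might look like this:

# /usr/lib/dracut/modules.d/99hello/module-setup.sh

check() {
    # 255 means "only include this module when explicitly requested with --add"
    return 255
}

depends() {
    # names of other dracut modules this one requires (none here)
    return 0
}

install() {
    # copy a binary into the image and register a hook script from this module's directory
    inst_multiple cat
    inst_hook pre-mount 99 "$moddir/say-hello.sh"
}

Since check() above only includes the module on request, rebuild with sudo dracut --force --add hello to pull it in.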

🏁 Final thoughts

dracut is more than a utility—it’s your boot-time engineer. From creating lightweight images to resolving boot failures, it offers unparalleled flexibility.

Explore man dracut, read through /usr/lib/dracut/modules.d/, and start customizing.

💡 This article is dedicated to my wife, Rupali Suraj Patil, for her continuous support and encouragement.

🛡️ PHP version 8.1.32, 8.2.28, 8.3.19 and 8.4.5

Posted by Remi Collet on 2025-03-14 06:26:00 UTC

RPMs of PHP version 8.4.5 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.19 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.2.28 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.1.32 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

⚠️ PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

🛡️ These Versions fix 6 security bugs (CVE-2024-11235, CVE-2025-1217, CVE-2025-1734, CVE-2025-1861, CVE-2025-1736, CVE-2025-1219), so the update is strongly recommended.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noticed:

  • EL-10 RPMs are built using RHEL-10.0-beta
  • EL-9 RPMs are built using RHEL-9.5
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.7 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

⚙️ PHP version 8.3.22 and 8.4.8

Posted by Remi Collet on 2025-06-06 05:14:00 UTC

RPMs of PHP version 8.4.8 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.22 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for version 8.1.32 and 8.2.28.

⚠️ PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noticed:

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.5 (next builds will use 9.6)
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.7 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

Loadouts For Genshin Impact v0.1.9 Released

Posted by Akashdeep Dhar on 2025-07-03 18:30:11 UTC

Hello travelers!

Loadouts for Genshin Impact v0.1.9 is OUT NOW with the addition of support for recently released characters like Skirk and Dahlia and for recently released weapons like Azurelight from Genshin Impact v5.7 Phase 1. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.

Resources

Changelog

  • Automated dependency updates for GI Loadouts by @renovate in #342
  • Automated dependency updates for GI Loadouts by @renovate in #344
  • Automated dependency updates for GI Loadouts by @renovate in #345
  • Automated dependency updates for GI Loadouts by @renovate in #346
  • Automated dependency updates for GI Loadouts by @renovate in #347
  • Add the recently added character Dahlia to the GI Loadouts roster by @sdglitched in #348
  • Add the recently added character Skirk to the GI Loadouts roster by @sdglitched in #349
  • Add the recently added weapon Azurelight to the GI Loadouts roster by @sdglitched in #351
  • Stage the release v0.1.9 for Genshin Impact v5.7 Phase 1 by @sdglitched in #352
  • Update dependency ruff to ^0.2.0 || ^0.3.0 || ^0.6.0 || ^0.7.0 || ^0.11.0 || ^0.12.0 by @renovate in #353
  • Automated dependency updates for GI Loadouts by @renovate in #354
  • Automated dependency updates for GI Loadouts by @renovate in #355
  • Update dependency pillow to v11.3.0 [SECURITY] by @renovate in #356

Characters

Skirk

Skirk is a sword-wielding Cryo character of five-star quality.

Dahlia

Dahlia is a catalyst-wielding Hydro character of four-star quality.

Weapons

Azurelight


Appeal

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.

Disclaimer

With an extensive suite of over 1428 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.

The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.

All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.

openSUSE turned 20

Posted by Peter Czanik on 2025-07-03 12:34:53 UTC

Last week, I was in Nürnberg for the openSUSE conference. The project turned 20 years old this year, and I was there right from the beginning (and even before that, if we also count the S.u.S.E. years). There were many great talks, including a syslog-ng talk from me, and even a birthday party… :-)

This year marks not just 20 years of openSUSE but also a major new SLES and openSUSE Leap release: version 16.0. There were many talks about what is coming and how things are changing. I already have a test version running on my laptop, and you should too, if you want to help to make version 16 the best release ever! :-) Slowroll also had a dedicated talk. It is a new openSUSE variant, offering a rolling Linux distribution with a bit more stability. So it is positioned somewhere between Leap and Tumbleweed, but of course it is a bit closer to the latter.

That said, I also had a couple of uncomfortable moments. I ended up working in open-source, because it’s normally a place without real-world politics. In other words, people from all walks of life can work together on open-source software, regardless of whether they are religious or atheist, LGBTQ+ allies or conservatives, or come from the east or the west. And even though I agree that we are in a geopolitical situation in which European software companies are needed to ensure our digital sovereignty, it’s not a topic I was eager to hear at an open-source event. I enjoy the technology and spirit of open-source, but I’m not keen on the politics surrounding it, especially at this time of geopolitical tensions.

syslog-ng logo

As usual, I delivered a talk on log management, specifically about message parsing. While my configuration examples came from syslog-ng, I tried to make sure that anything I said could be applied to other log management applications as well. I also introduced my audience to sequence, which allows you to create parsing rules to parse free-form text messages: https://github.com/ccin2p3/sequence-RTG In the coming weeks, I plan to package it for openSUSE.

Happy birthday to openSUSE, and here’s to another successful 20 years!

A COPR for Ansible roles

Posted by Emmanuel Seyman on 2025-07-02 16:53:00 UTC

Packaging Ansible roles

Since Fedora's Server SIG has decided to promote using Ansible, I've decided to package a number of roles I find interesting. Packaging solves two problems in my opinion:

  1. This allows users to get roles and playbooks without having to learn how to get them from Ansible Galaxy
  2. It allows us to patch the roles to work properly on Fedora systems

I've started submitting RPMs to Fedora, but I thought that having a COPR in the meantime that includes all my Ansible RPMs would make it easier for people to install and test them.

Activating the COPR on a Fedora system:

You can run the command "dnf copr enable eseyman/ansible" on a F42 or rawhide system. From there, you'll be able to "dnf search" or "dnf install" any of the packages in the copr. On that system, you'll be able to run a playbook that uses the role on any host you can ssh to.

Using AI moderation tools

Posted by Ben Cotton on 2025-07-02 12:00:00 UTC

Ben Balter recently announced a new tool he created: AI Community Moderator. This project, written by an AI coding assistant at Balter’s direction, takes moderation action in GitHub repositories. Using any AI model supported by GitHub, it automatically enforces a project’s code of conduct and contribution guidelines. Should you use it for your project?

For the sake of this post, I’m assuming that you’re open to using large language model tools in certain contexts. If you’re not, then there’s nothing to discuss.

Why to not use AI moderation tools

Moderating community interactions is a key part of leading an open source project. Good moderation creates a safe and welcoming community where people can do their best work. Bad moderation drives people away — either because toxic members are allowed to run roughshod over others or because good-faith interactions are given heavy-handed punishment. Moderation is one of the most important factors in creating a sustainable community — people have to want to be there.

Moderation is hard — and often thankless — work. It requires emotional energy in addition to time. I understand the appeal of offloading that work to AI. AI models don’t get emotionally invested. They can’t feel burnout. They’re available around the clock.

But they also don’t understand a community’s culture. They can’t build relationships with contributors. They’re not human. Communities are ultimately a human endeavor. Don’t take the humanity out of maintaining your community.

Why you might use AI moderation tools

Having said the above, there are cases where AI moderation tools can help. In a multilingual community, moderators may not have fluency in all of the languages people use. Anyone who has used AI translations knows they can sometimes be hilariously wrong, but they're (usually) better than nothing.

AI tools are also ever-vigilant. They don’t need sleep or vacations and they don’t get pulled away by their day job, family obligations, or other hobbies. This is particularly valuable when a community spans many time zones and the moderation team does not.

Making a decision for your project

“AI” is a broad term, so you shouldn’t write off everything that has that label. Machine learning algorithms can be very helpful in detecting spam and other forms of antisocial behavior. The people who I’ve heard express moral or ethical objections to large language models seem to generally be okay with machine learning models in appropriate contexts.

Using spam filters and other abuse detection tools to support human moderators is a good thing. It’s reasonable to allow them to take basic reversible actions, like hiding a post until a human has had the chance to review it. However, I don’t recommend using AI models to take more permanent actions or to interact with people who have potentially violated your project’s code of conduct. It’s hard, but you need to keep the humanity in your community.

This post’s featured photo by Mohamed Nohassi on Unsplash.

The post Using AI moderation tools appeared first on Duck Alignment Academy.

Pagure Exporter v0.1.4 Released

Posted by Akashdeep Dhar on 2025-07-02 06:30:00 UTC

The first and second quarters of 2025 were the time when a bunch of free and open source software communities seemed to be actively moving away from Pagure to either GitLab (in the case of the CentOS Project and the openSUSE Project) or Forgejo (in the case of the Fedora Project). Having written Pagure Exporter about a couple of years back and being deeply involved in the Fedora To Forgejo initiative, I found myself in the middle of all the Git forge migration craziness. With a bunch of feature requests reaching the doors of the project, I wanted to make the best use of my time to deliver the first release of 2025 for Pagure Exporter using the effective workflows and community personnel at my disposal. I cover my experiences with the efforts that made this release possible in this article.

Homepage of Pagure Exporter - https://github.com/fedora-infra/pagure-exporter

Impressions

Contributing to a hustling and bustling free and open source software community like those of the Fedora Project and the CentOS Project means that there are always some tasks that need to be completed soon. Thankfully, there are also a bunch of passionate contributors willing to roll up their sleeves and hit the ground running as long as they are aware of them. While I was sometimes affected by the unreliability of certain software libraries and the intermittent AI scraper attacks on Pagure, I was also joined by the likes of Greg Sutcliffe, Fabian Arrotin, Yashwanth Rathakrishnan, Shounak Dey and Peter Olamide in the efforts. Furthermore, I made it a point to use assistive artificial intelligence technologies for purposes like explaining extended logs and generating code inspirations to kick things off from, at my discretion.

CentOS Git Server migration to GitLab by Davide Cavalca - https://pagure.io/centos-infra/issue/1654

Apes (Are) Strong Together

The request to extend Pagure Exporter to support various other hostnames (like those of Fedora Dist Git and the CentOS Git Server) was first scoped around January 2025. With me occupied with the Fedora To Forgejo migration efforts, it was not until March 2025 that work on it was started by an Outreachy applicant, Rajesh Patel. As the request grew in priority by April 2025, I decided to briefly context switch from my existing work to implement the support for different Pagure hostnames. While this was reviewed positively by Michal Konecny and Aurelien Bompard, the readability of the introduced code was in question, so that had to be resolved separately and by someone else, to ensure that I did not end up introducing changes that only I could understand.

Wrapper to check / create projects on GitLab using the REST API by Greg Sutcliffe - https://pagure.io/centos-infra/issue/1658

Leading up to the v0.1.4 release of Pagure Exporter, I was helped by Greg, who explored the GitLab API to build a simple Python script that automatically created projects on GitLab under a certain namespace. Pagure Exporter was expected to work in tandem with the said script to migrate repository contents and issue tickets from Pagure as soon as the projects were created on GitLab. We also discussed the possibility of offloading the migration to the GitLab infrastructure to minimize potential network hiccups during the transfer process. Davide Cavalca also joined in to help tailor the approach of the migration proceedings, and Fabian imported the CentOS Board and CentOS Infra namespaces as dry runs while making observations as to how the tool can be used at scale in automation.

Create a repo and FAS group for FRCL by Fabian Arrotin - https://pagure.io/centos-infra/issue/1709

Gifted With Zealous Mentees

While Rajesh's work could not be merged, I did appreciate the effort that he put into understanding the project, and I hope that I was able to provide some learnings. Just like him, we had another enthusiastic Outreachy applicant, Peter, who helped fix the deprecation status of the datetime library. Yashwanth helped out by going around the codebase to update the copyright years across the code headers. The one contributor who was immensely helpful was Shounak, who assisted in moving from absolute imports to relative ones and in renaming identifiers for improved readability, thus addressing the previously stated concerns. Finding external contributors was difficult due to the challenges we faced with the VCR.py library failing inexplicably, but amazing mentees used this as a learning opportunity.

Fix the deprecation status of the datetime library usage by Peter Olamide - https://github.com/fedora-infra/pagure-exporter/pull/157

Patience probably is one of the most defining characteristics for those working on free and open source projects. While I try to keep my turnaround time under a week to address any open issue tickets or pull requests as evidenced by those under the v0.1.4 release, sometimes it could take months to get back to a certain work as evidenced by the codebase changes for improving readability. As I have been taking on more work after my promotion to Senior Software Engineer, I have also begun to include open source artificial intelligence tooling like Ramalama, Ollama and Cursor in my workflow for reviewing external codebase changes and finding alternative performance optimizations - all to ensure that the quality of my work remains high while I context switch from one task to another in momentum.

Rename identifiers for improved readability by Shounak Dey - https://github.com/fedora-infra/pagure-exporter/pull/191

The AI Scraper Attack

While I wrote in the previous section about how including open source artificial intelligence technologies in my workflow helped make me productive, this section is more about how external AI scrapers hindered the progress of the v0.1.4 release of Pagure Exporter. Pagure has been receiving unreasonable amounts of traffic from various AI scrapers for a while now, but things seemed to worsen in the second half of June 2025 when the bombardment of millions of heavy requests led to the service becoming inaccessible to legitimate users. As the project relied on making actual HTTPS Git requests (but masqueraded HTTPS REST requests) for testing purposes, we could not reliably verify the correctness of the codebase changes, thus negatively affecting the initiative of moving CentOS repos to GitLab.

Trigger CI to run on push or pull_request towards main by Shounak Dey - https://github.com/fedora-infra/pagure-exporter/pull/210

Even though I run a bunch of selfhosted applications and services on my homelab infrastructure, I am by no means a system administrator, so I had to rely on Kevin Fenzi to block out the offending IP addresses. I have had my fair share of problems from AI scrapers on my testing deployment of Forgejo, which I had to keep behind Cloudflare verification, so I understood just how difficult it must have been for him to keep the unreasonable requestors at bay. Learning from the deployment of Codeberg, I have been looking into Anubis to understand just how we can leverage it to protect the upstream resources from the AI scrapers. Given that the Fedora Infrastructure was undergoing a datacenter move as of the first week of July 2025, the experimentation (or implementation) of this solution has to wait for later.

Fedora Infrastructure status page as of 02nd July 2025 - https://status.fedoraproject.org/

Unreliable Libraries For Testing

Imagine something pissing me off so much that I had to write about my experience with it in its own dedicated section! I want to preface the section by saying that for whatever trouble VCR.py had given me since the beginning of 2025, it had been immensely helpful in ensuring that I did not have to make a bunch of requests to an actual server. For some reason, the tests involving VCR.py used to work just fine during development but fail inexplicably on GitHub Actions - and the error messages would be of no help, especially when they were related to failing matchers, existing cassettes, non-existent cassettes, count mismatches and so on. There happened to be a bunch of pull requests lined up to address the mentioned concerns, but they were not actively looked into - so I decided that it was about time for me to move away.

Move away from VCR.py to responses by Akashdeep Dhar - https://github.com/fedora-infra/pagure-exporter/pull/200

And move away I did - to Responses. It was more than a methodology switch though, as it included a shift in philosophy: unlike VCR.py, which records real HTTP requests and replays them, Responses mocks the HTTP calls entirely. With the increasing roster of over 90 testcases that ensured a stellar 100% codebase coverage, converting the cassettes to Responses would have been a chore. In came my trustworthy AMD Radeon RX6800XT and Ramalama to the rescue: I was able to parse through the VCR.py cassettes to obtain Response Definition objects during the testing runtime. The solution was great, even if I say so myself, as I saved approximately ten to fifteen hours of trudging along (and of course, boredom) to painstakingly port the associated recordings to the respective HTTP testcases.

List of pull requests under kevin1024/vcrpy as of 02nd July 2025 - https://github.com/kevin1024/vcrpy/pulls

Changelog

Published on PyPI - Pagure Exporter v0.1.4
Published on Fedora Linux - Pagure Exporter v0.1.4
Published on GitHub - Pagure Exporter v0.1.4

GitHub release of Pagure Exporter v0.1.4 - https://github.com/fedora-infra/pagure-exporter/releases/tag/0.1.4

From maintainers

  • Fixed the deprecation status of the datetime library usage
  • Tailor fitted the filters to remove credentials before recordings are stored locally
  • Updated the Packit configuration to satiate Packit v1.0.0 release
  • Moved away from using absolute imports to using relative imports
  • Introduced support for CentOS Git Server (i.e. https://git.centos.org)
  • Introduced support for Fedora Dist Git (i.e. https://src.fedoraproject.org)
  • Introduced support for different custom Pagure hostnames
  • Updated copyright headers across all the codebase headers
  • Renamed the identifiers for improved codebase readability
  • Moved away from VCR.py to Responses for test caching purposes
  • Made various automated dependency and security updates
  • Marked the first release of Pagure Exporter in 2025

From GitHub

  • Automated dependency updates by @renovate in #90
  • Automated dependency updates by @renovate in #91
  • Attempt to not mess up the repository secrets by @gridhead in #155
  • Fix the deprecation status of the datetime library usage by @olamidepeterojo in #157
  • Update dependency black to v25 by @renovate in #159
  • Update dependency ruff to ^0.0.285 || ^0.1.0 || ^0.2.0 || ^0.3.0 || ^0.4.0 || ^0.5.0 || ^0.6.0 || ^0.9.0 by @renovate in #146
  • Update dependency vcrpy to v7 by @renovate in #150
  • Automated dependency updates by @renovate in #149
  • Update Packit config after Packit v1.0.0 release by @gridhead in #160
  • Automated dependency updates by @renovate in #161
  • Automated dependency updates by @renovate in #162
  • Automated dependency updates by @renovate in #169
  • Automated dependency updates by @renovate in #170
  • Automated dependency updates by @renovate in #171
  • Update dependency ruff to ^0.0.285 || ^0.1.0 || ^0.2.0 || ^0.3.0 || ^0.4.0 || ^0.5.0 || ^0.6.0 || ^0.9.0 || ^0.10.0 by @renovate in #173
  • Update dependency ruff to ^0.0.285 || ^0.1.0 || ^0.2.0 || ^0.3.0 || ^0.4.0 || ^0.5.0 || ^0.6.0 || ^0.9.0 || ^0.10.0 || ^0.11.0 by @renovate in #174
  • Update dependency pytest-cov to v6 by @renovate in #147
  • Automated dependency updates by @renovate in #175
  • Automated dependency updates by @renovate in #176
  • Automated dependency updates by @renovate in #177
  • Automated dependency updates by @renovate in #178
  • Automated dependency updates by @renovate in #179
  • Move from using relative imports instead of absolute imports by @sdglitched in #185
  • chore: updated copyright years across all the codebase headers by @iamyaash in #183
  • chore(deps): automated dependency updates by @renovate in #189
  • Introduce support for different Pagure hostnames by @gridhead in #188
  • chore(deps): automated dependency updates by @renovate in #192
  • chore(deps): automated dependency updates by @renovate in #193
  • chore(deps): automated dependency updates by @renovate in #194
  • fix(deps): update dependency requests to v2.32.4 [security] by @renovate in #195
  • chore(deps): automated dependency updates by @renovate in #196
  • Rename identifiers for improved readability by @sdglitched in #191
  • Move away from VCR.py to Responses by @gridhead in #200
  • chore(deps): update dependency ruff to ^0.0.285 || ^0.1.0 || ^0.2.0 || ^0.3.0 || ^0.4.0 || ^0.5.0 || ^0.6.0 || ^0.9.0 || ^0.10.0 || ^0.11.0 || ^0.12.0 by @renovate in #197
  • chore(deps): automated dependency updates by @renovate in #198
  • Version bump from v0.1.3 to v0.1.4 by @gridhead in #202

New contributors

🎲 PHP version 8.3.23RC1 and 8.4.9RC1

Posted by Remi Collet on 2025-06-19 14:03:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.4.9RC1 are available

  • as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.23RC1 are available

  • as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.2 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.4.9RC1 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.0 and EPEL-10.0
  • EL-9 packages are built using RHEL-9.6
  • EL-8 packages are built using RHEL-8.10
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.7 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC version is usually the same as the final version (no change accepted after RC, exception for security fix).
  • versions 8.3.23 and 8.4.9 are planned for July 3rd, in 2 weeks.

Software Collections (php83, php84)

Base packages (php)

Fedora DEI Outreachy Intern – My first month Recap 🎊

Posted by Fedora Community Blog on 2025-07-01 12:00:00 UTC

Hey everyone!

It’s already been a month, I can’t imagine how time flies so fast, busy time?? Flock, Fedora DEI and Documentation workshop?? All in one month.

As a Fedora Outreachy intern, my first month has been packed with learning and contributions. This blog shares what I worked on and how I learned to navigate open source communities.

First, I would like to give a shoutout to my amazing Mentor, Jona Azizaj for all the effort she has put into supporting me. Thank You, Jona!

Highlights from June

Fedora DEI & Docs Workshop

One of the biggest milestones this month was planning and hosting my first Fedora DEI & Docs Workshop. This virtual event introduced new contributors to Fedora documentation, showed them how to submit changes, and gave a live demo of fixing an issue – definitely a learning experience in event organizing!

You can check the Discourse post; all information is in the post itself, including slides and comments.

Flock 2025 recap

I wrote a detailed Flock to Fedora recap article, covering the first two days of talks streamed from Prague. From big announcements about Fedora’s future to deep dives into mentorship, the sessions were both inspiring and practical. Read the blog magazine recap.

Documentation contributions

This month, I have contributed to multiple docs areas, including:

  • DEI team docs – Updated all the broken links in the docs.
  • Outreachy DEI page and Outreachy mentored projects pages (under review) – I updated the content and added examples of past interns and how Outreachy shaped their journeys even beyond the internship.
  • How to Organize events section – Created a step-by-step guide for event planning.
  • Past event section – Documented successful Fedora DEI activities. It serves as an archive for our past events.

Collaboration and learning

The best part? It’s great to work closely with others, and that is something I’m learning in the open source space. I also spent some time working with other teams:

  • Mindshare Committee – Learned how to request funding for events
  • Design team – I had amazing postcards prepared, thanks to the Design team
  • Marketing – Got the Docs workshop promoted on different Fedora social accounts
  • Documentation team – Especially with Petr Bokoc, who shared a detailed guide on how you can easily contribute to the Docs pages.

A great learning experience. One thing I can say about the people in open source (and in Fedora): they’re super amazing and gentle 🙂 Cheers – I’m enjoying my journey.

My role in Join Fedora SIG

Oh, I thought it would be good to mention this as well: I am also part of the Join SIG, which helps newcomers find their place in Fedora. Through it, I’ve been able to understand how the community works, from onboarding to mentorship.

What I’ve learned

  • How to collaborate asynchronously – video calls and chats.
  • How to chair meetings – I chaired two DEI Team meetings this month. The first one was challenging, but in the second I felt confident and even enjoyed it. Before this, I didn’t really know how meetings are held over text 🙂
  • How open source works – From budgeting to marketing, I’m learning how many moving pieces make Fedora possible. 

What’s next

I plan to revisit and revamp the Event checklist, working with my mentor Jona to make it meaningful and useful for future events.

I also plan to continue improving the DEI docs and promoting Fedora’s DEI work.

Last word

This month has already been full of learning and growth. If you’re also interested in helping out with the DEI work, reach out to us in the Matrix room.

Thanks for reading!

Your Friend in Open Source.

The post Fedora DEI Outreachy Intern – My first month Recap 🎊 appeared first on Fedora Community Blog.

Simplifying Fedora Package Submission Progress Report – GSoC ’25

Posted by Fedora Community Blog on 2025-07-01 10:07:58 UTC

Student: Mayank Singh

  • Fedora Account: manky201

About Project

Hi everyone, I’m working on building a service to make it easier for packagers to submit new packages to Fedora, improving upon and staying in line with the current submission process. My main focus is to automate away trivial tasks, provide fast and clear feedback, and tightly integrate with Git-based workflows that developers are familiar with.

This month

I focused on presenting a high-level architecture of the service to the Fedora community and collecting early feedback. These discussions were incredibly helpful in shaping the design of the project. In particular, they helped surface early concerns and identify important edge cases that we will need to support.

The key decision is to go with a monorepo model:
Each new package submission will be a Pull Request to a central repository where contributors submit their spec files and related metadata.

The service will focus on:

  • Running a series of automated checks on the package (e.g., rpmlint; a local example follows this list)
  • Detecting common issues early.
  • Reporting the feedback and results in the same PR thread for fast feedback loops.
  • Keeping the logic abstract and forge-agnostic by reusing packit-service’s code and layering new handlers on top of it.
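
As a rough illustration of the kind of feedback the service aims to automate, a packager can already run the same lint check locally before opening a submission PR (the spec file path below is just a placeholder):

sudo dnf install rpmlint
rpmlint ~/rpmbuild/SPECS/example.spec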

Currently, I am working on setting up the local development environment and testing for the project with packit-service.

What’s Next?

I’ll be working on getting a reliable testing environment ready and writing code for COPR integration for builds and the next series of post-build checks. All the code can be found at avant.

Thanks to my mentor Frantisek Lachman and the community for the great feedback and support.

Looking forward to sharing further updates.

The post Simplifying Fedora Package Submission Progress Report – GSoC ’25 appeared first on Fedora Community Blog.

Datacenter Move outage

Posted by Fedora Infrastructure Status on 2025-06-30 01:00:00 UTC

We will be moving services and applications from our IAD2 datacenter to a new RDU3 one.

End user services such as docs, mirrorlists, DNS, pagure.io, torrent, fedorapeople, the fedoraproject.org website, and the tier0 download server will be unaffected and should continue to work normally through the outage window.

Other services …

2 days to datacenter move!

Posted by Kevin Fenzi on 2025-06-28 18:07:20 UTC
Scrye into the crystal ball

Just a day or two more until the big datacenter move! I'm hopeful that it will go pretty well, but you never know.

Datacenter move

Early last week we were still deploying things in the new datacenter, and installing machines. I ran into all kinds of trouble installing our staging openshift cluster, much of it around versions of images, installer binaries, or kernels. Openshift seems fond of 'latest' as a version, but that's not really helpful all the time, especially when we wanted to install 4.18 instead of the just-released 4.19. I did manage to finally fix all my mistakes and get it going in the end though.

We got our new ipa clusters set up and replicating from the old dc to the new one. We got new rabbitmq clusters (rhel9 instead of rhel8, and a newer rabbitmq) set up and ready.

With that, almost everything is installed (except for a few 'hot spare' type things that we can do after the move, and buildvm's, which I will be deploying this weekend).

On Thursday we moved our staging env and it mostly went pretty well, I think. There are still some applications that need to be deployed or fixed up, but overall it should mostly be functional. We can fix things up as time permits.

We still have an outstanding issue with how our power10's are configured. It turns out we do need a hardware management console to set things up as we had planned. We have ordered one and will be reconfiguring things post-move. For normal ppc64le builds this shouldn't have any impact. Composes that need nested virt will just fail until the week following the move (when we have some power9's on hand to handle this case). So, sorry ppc64le users, expect a few failed rawhide composes in the meantime.

Just a reminder about next week:

  • mirrorlists (dnf updates), docs/websites, downloads, discourse, matrix should all be unaffected

  • YOU SHOULD PLAN NOT TO USE ANY OTHER SERVICES until the go-ahead (Wednesday).

Monday:

  • Around 10:00 UTC services will start going down.

  • We will be moving storage and databases for a while.

  • Once databases and storage are set, we will bring services back up.

  • On Monday koji will be up and you can probably even do builds (but I strongly advise you not to). However, bodhi will be down, so no updates will move forward from builds done in this period.

Tuesday:

  • koji/build pipeline goes down.

  • We will be moving its storage and databases for a while.

  • We will bring things up once those are moved.

Wed:

  • Start fixing outstanding issues and deploy missing/lower-priority services.

  • At this point we can start taking problem reports to fix things (hopefully)

Thursday:

  • More fixing outstanding items.

  • We will be shutting down machines in the old DC.

Friday:

  • Holiday in the US

  • Hopefully things will be in a stable state by this time.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114762414965872142

Infra and RelEng Update – Week 26

Posted by Fedora Community Blog on 2025-06-27 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 23 June – 27 June 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us in the #apps or #admin channels on chat.fedoraproject.org.

The post Infra and RelEng Update – Week 26 appeared first on Fedora Community Blog.

🔧 Unlocking system performance: A practical guide to tuning PCP on Fedora & RHEL

Posted by Fedora Magazine on 2025-06-27 08:00:00 UTC

Performance Co-Pilot (PCP) is a robust framework for collecting, monitoring, and analyzing system performance metrics. Available in the repos for Fedora and RHEL, it allows administrators to gather a wide array of data with minimal configuration. This guide walks you through tuning PCP’s pmlogger service to better fit your needs—whether you’re debugging performance issues or running on constrained hardware.

Is the default setup of PCP right for your use case? Often, it’s not. While PCP’s defaults strike a balance between data granularity and overhead, production workloads vary widely. Later in this article, two scenarios will be used to demonstrate some useful configurations.

✅ Prerequisites: Getting PCP up and running

First, install the PCP packages:

$ sudo dnf install pcp pcp-system-tools

Then enable and start the core services:

$ sudo systemctl enable --now pmcd.service
$ sudo systemctl enable --now pmlogger.service

Verify both services are running:

$ systemctl status pmcd pmlogger
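
PCP also ships a handy summary command; its output normally lists the pmcd version, the loaded agents, and the primary pmlogger instance along with the archive it is writing, which makes it a quick sanity check that both services are actually collecting data:

$ pcp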

🔍 Understanding pmlogger and its configuration

PCP consists of two main components:

  • pmcd: collects live performance metrics from various agents.
  • pmlogger: archives these metrics over time for analysis.

The behavior of pmlogger is controlled by files in /etc/pcp/pmlogger/control.d/. The most relevant is local, which contains command-line options for how logging should behave.

Sample configuration:

$ cat /etc/pcp/pmlogger/control.d/local

You’ll see a line like:

localhost y y /usr/bin/pmlogger -h localhost ... -t 10s -m note

The -t 10s flag defines the logging interval—every 10 seconds in this case.

🔧 Scenario 1: High-frequency monitoring for deep analysis

Use case: Debugging a transient issue on a production server.
Goal: Change the logging interval from 10 seconds to 1 second.

Edit the file (the nano editor is used in the examples; please use your editor of choice):

$ sudo nano /etc/pcp/pmlogger/control.d/local

Change -t 10s to -t 1s.

Restart the logger:

$ sudo systemctl restart pmlogger.service

Verify:

$ ps aux | grep '[p]mlogger -h localhost'
$ pminfo -f

Expected output snippet:

records: 10, interval: 0:00:01.000

🪶 Scenario 2: Lightweight monitoring for constrained systems

Use case: Monitoring on a small VM or IoT device.
Goal: Change the logging interval to once every 60 seconds.

Edit the same file:

$ sudo nano /etc/pcp/pmlogger/control.d/local

Change -t 10s to -t 60s.

Restart the logger:

$ sudo systemctl restart pmlogger.service

Confirm:

$ ps aux | grep '[p]mlogger -h localhost'
$ pminfo -f

Expected output:

records: 3, interval: 0:01:00.000

🧹 Managing data retention: logs, size, and cleanup

PCP archives are rotated daily by a cron-like service. Configuration lives in:

$ cat /etc/sysconfig/pmlogger

Default values:

PCP_MAX_LOG_SIZE=100
PCP_MAX_LOG_VERSIONS=14

  • PCP_MAX_LOG_SIZE: total archive size (in MB).
  • PCP_MAX_LOG_VERSIONS: number of daily logs to keep.

Goal: Keep logs for 30 days.

Edit the file:

$ sudo nano /etc/sysconfig/pmlogger

Change:

PCP_MAX_LOG_VERSIONS=30

No service restart is required. Changes apply during the next cleanup cycle.
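
To see how much space the rotated archives currently occupy, check the default archive directory (on Fedora and RHEL this is normally /var/log/pcp/pmlogger/<hostname>):

$ ls /var/log/pcp/pmlogger/$(hostname)
$ du -sh /var/log/pcp/pmlogger/$(hostname)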

🏁 Final thoughts

PCP is a flexible powerhouse. With just a few changes, you can transform it from a general-purpose monitor into a specialized tool tailored to your workload. Whether you need precision diagnostics or long-term resource tracking, tuning pmlogger gives you control and confidence.

So go ahead—open that config file and start customizing your system’s performance story.

Note: This article is dedicated to my wife, Rupali Suraj Patil, who inspires me every day.

some ai thoughts around ansible lightspeed

Posted by Kevin Fenzi on 2025-06-27 01:30:58 UTC

To take a short break from datacenter work: I have been meaning to look into ansible lightspeed for a long time, so I finally sat down, took an introductory course, and now have some thoughts about how we might use it on fedora's ansible setup.

The official name of the product is: "Red Hat Ansible Lightspeed with IBM watsonx Code Assistant", which is a bit... verbose, so I will just use 'Lightspeed' here.

This is one of the very first AI products Red Hat produced, so it's been around a few years. Some of that history is probably why it's specifically using watsonx instead of some other LLMs on the backend.

First a list of things I really like about it:

  • It's actually trained on real, correct, good ansible content. It's not a 'general' LLM trained on the internet; it uses some ansible galaxy content (you can opt out if you prefer) as well as a bunch of curated content from real ansible. This always struck me as one of the very best ways to leverage LLMs, instead of the general 'hoover in any data and use it' approach. In this case it really helps make the suggestions and content more trustworthy and less hallucinated.

  • Depending on the watsonx subscription you have, you may train it on _your_ ansible content. Perhaps you have different standards than others or particular ways you do things. You can train it on those and actually get output that follows them.

  • Having something that can generate boilerplate for you that you can review and fix up is also a really great use for LLMs, IMHO.

And some things I'm not crazy about:

  • It requires AAP (Ansible Automation Platform) and watsonx licenses (mostly; see below). It would be cool if it could leverage a local model or Red Hat AI in openshift instead of watsonx, but as noted above it's likely tied to that for historical reasons.

  • It uses a vscode plugin. I'm much more a vim-type old sysadmin, and for making a small ansible playbook that's just a text file, vscode seems like... overkill. I can of course see why they chose to implement things this way.

And something I sure didn't know: there's an ansible code bot on github. It can scan your ansible git repo and file a PR to bring it in line with best practices. Pretty cool. We have a mirror of our pagure ansible repo on github; however, it seems it is not currently mirroring. I want to sort that out and then enable the bot to see how it does. :)

Staging Datacenter Move outage

Posted by Fedora Infrastructure Status on 2025-06-26 10:00:00 UTC

We will be moving staging services and applications from our IAD2 datacenter to a new RDU3 one.

This only affects staging, end user services are unaffected.

Contributors who need to use staging are advised to wait until after the outage window to resume work and report issues with services.