Loadouts for Genshin Impact v0.1.9 is OUT NOW with the addition of support for recently released characters like Skirk and Dahlia and for recently released weapons like Azurelight from Genshin Impact v5.7 Phase 1. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.
Automated dependency updates for GI Loadouts by @renovate in #342
Automated dependency updates for GI Loadouts by @renovate in #344
Automated dependency updates for GI Loadouts by @renovate in #345
Automated dependency updates for GI Loadouts by @renovate in #346
Automated dependency updates for GI Loadouts by @renovate in #347
Add the recently added character Dahlia to the GI Loadouts roster by @sdglitched in #348
Add the recently added character Skirk to the GI Loadouts roster by @sdglitched in #349
Add the recently added weapon Azurelight to the GI Loadouts roster by @sdglitched in #351
Stage the release v0.1.9 for Genshin Impact v5.7 Phase 1 by @sdglitched in #352
Update dependency ruff to ^0.2.0 || ^0.3.0 || ^0.6.0 || ^0.7.0 || ^0.11.0 || ^0.12.0 by @renovate in #353
Automated dependency updates for GI Loadouts by @renovate in #354
Automated dependency updates for GI Loadouts by @renovate in #355
Update dependency pillow to v11.3.0 [SECURITY] by @renovate in #356
Characters
Skirk
Skirk is a sword-wielding Cryo character of five-star quality.
Dahlia
Dahlia is a catalyst-wielding Hydro character of four-star quality.
Weapons
Azurelight
Appeal
While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.
Disclaimer
With an extensive suite of over 1428 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.
The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.
All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.
Last week, I was in Nürnberg for the openSUSE conference. The project turned 20 years old this year, and I was there right from the beginning (and even before that, if we also count the S.u.S.E. years). There were many great talks, including a syslog-ng talk from me, and even a birthday party… :-)
This year marks not just 20 years of openSUSE but also a major new SLES and openSUSE Leap release: version 16.0. There were many talks about what is coming and how things are changing. I already have a test version running on my laptop, and you should too, if you want to help to make version 16 the best release ever! :-) Slowroll also had a dedicated talk. It is a new openSUSE variant, offering a rolling Linux distribution with a bit more stability. So it is positioned somewhere between Leap and Tumbleweed, but of course it is a bit closer to the latter.
That said, I also had a couple of uncomfortable moments. I ended up working in open-source, because it’s normally a place without real-world politics. In other words, people from all walks of life can work together on open-source software, regardless of whether they are religious or atheist, LGBTQ+ allies or conservatives, or come from the east or the west. And even though I agree that we are in a geopolitical situation in which European software companies are needed to ensure our digital sovereignty, it’s not a topic I was eager to hear at an open-source event. I enjoy the technology and spirit of open-source, but I’m not keen on the politics surrounding it, especially at this time of geopolitical tensions.
syslog-ng logo
As usual, I delivered a talk on log management, specifically about message parsing. While my configuration examples came from syslog-ng, I tried to make sure that anything I said could be applied to other log management applications as well. I also introduced my audience to sequence, which allows you to create parsing rules to parse free-form text messages: https://github.com/ccin2p3/sequence-RTG In the coming weeks, I plan to package it for openSUSE.
Happy birthday to openSUSE, and here’s to another successful 20 years!
Since Fedora's Server SIG has decided to promote using Ansible, I've decided to package a number of roles I find interesting. Packaging solves two problems in my opinion:
This allows users to get roles and playbooks without having to learn how to get them from Ansible Galaxy
It allows us to patch the roles to work properly on Fedora systems
I've started submitting rpms to Fedora but I thought having a copr in the meantime that includes all my ansible rpms would make it easier for people to install and test them.
Activating the COPR on a Fedora system:
You can run the command "dnf copr enable eseyman/ansible" on a F42 or rawhide system. From there, you'll be able to "dnf search" or "dnf install" any of the packages in the copr. On that system, you'll be able to run a playbook that uses the role on any host you can ssh to.
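For reference, a complete session on an F42 machine might look roughly like this (the role package and playbook names are only placeholders, not actual packages from the copr):

$ sudo dnf copr enable eseyman/ansible
$ dnf search ansible-role                     # browse the packaged roles
$ sudo dnf install ansible-role-example       # hypothetical package name
$ ansible-playbook -i inventory example.yml   # playbook using the installed role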
Ben Balter recently announced a new tool he created: AI Community Moderator. This project, written by an AI coding assistant at Balter’s direction, takes moderation action in GitHub repositories. Using any AI model supported by GitHub, it automatically enforces a project’s code of conduct and contribution guidelines. Should you use it for your project?
For the sake of this post, I’m assuming that you’re open to using large language model tools in certain contexts. If you’re not, then there’s nothing to discuss.
Why not to use AI moderation tools
Moderating community interactions is a key part of leading an open source project. Good moderation creates a safe and welcoming community where people can do their best work. Bad moderation drives people away — either because toxic members are allowed to run roughshod over others or because good-faith interactions are given heavy-handed punishment. Moderation is one of the most important factors in creating a sustainable community — people have to want to be there.
Moderation is hard — and often thankless — work. It requires emotional energy in addition to time. I understand the appeal of offloading that work to AI. AI models don’t get emotionally invested. They can’t feel burnout. They’re available around the clock.
But they also don’t understand a community’s culture. They can’t build relationships with contributors. They’re not human. Communities are ultimately a human endeavor. Don’t take the humanity out of maintaining your community.
Why you might use AI moderation tools
Having said the above, there are cases where AI moderation tools can help. In a multilingual community, moderators may not have fluency in all of the languages people use. Anyone who has used AI translations knows they can sometimes be hilariously wrong, but they’re (usually) better than nothing.
AI tools are also ever-vigilant. They don’t need sleep or vacations and they don’t get pulled away by their day job, family obligations, or other hobbies. This is particularly valuable when a community spans many time zones and the moderation team does not.
Making a decision for your project
“AI” is a broad term, so you shouldn’t write off everything that has that label. Machine learning algorithms can be very helpful in detecting spam and other forms of antisocial behavior. The people who I’ve heard express moral or ethical objections to large language models seem to generally be okay with machine learning models in appropriate contexts.
Using spam filters and other abuse detection tools to support human moderators is a good thing. It’s reasonable to allow them to take basic reversible actions, like hiding a post until a human has had the chance to review it. However, I don’t recommend using AI models to take more permanent actions or to interact with people who have potentially violated your project’s code of conduct. It’s hard, but you need to keep the humanity in your community.
The first and second quarters of 2025 were the time when a bunch of free and open source software communities seemed to be actively moving away from Pagure to either GitLab (in the case of the CentOS Project and the openSUSE Project) or Forgejo (in the case of the Fedora Project). Having written Pagure Exporter a couple of years back and being deeply involved in the Fedora To Forgejo initiative, I found myself in the middle of all the Git forge migration craziness. With a bunch of feature requests reaching the doors of the project, I wanted to make the best use of my time to deliver the first release of 2025 for Pagure Exporter using the effective workflows and community personnel at my disposal. In this article, I will cover my experiences with the efforts that made this release possible.
Leading up to the v0.1.4 release of Pagure Exporter, I was helped by Greg, who explored the GitLab API himself to build a simple Python script that automatically created projects on GitLab under a certain namespace. Pagure Exporter was expected to work in tandem with the said script to migrate repository contents and issue tickets from Pagure as soon as the projects were created on GitLab. We also discussed the possibility of offloading the migration to the GitLab infrastructure to minimize potential network hiccups during the transfer process. Davide Cavalca also joined in to help tailor the approach to the migration proceedings, and Fabian imported the CentOS Board and CentOS Infra namespaces as dry runs while making observations as to how the tool could be used at scale in automation.
Patience is probably one of the most defining characteristics of those working on free and open source projects. While I try to keep my turnaround time under a week for addressing open issue tickets or pull requests, as evidenced by those under the v0.1.4 release, sometimes it can take months to get back to a certain piece of work, as evidenced by the codebase changes for improving readability. As I have been taking on more work after my promotion to Senior Software Engineer, I have also begun to include open source artificial intelligence tooling like Ramalama, Ollama and Cursor in my workflow for reviewing external codebase changes and finding alternative performance optimizations - all to ensure that the quality of my work remains high while I context switch from one task to another.
While I wrote in the previous section about how including open source artificial intelligence technologies in my workflow helped make me productive, this section is more about how external AI scrapers hindered the progress of the v0.1.4 release of Pagure Exporter. Pagure has been receiving unreasonable amounts of traffic from various AI scrapers for a while now, but things seemed to worsen in the second half of June 2025, when the bombardment of millions of heavy requests led to the service becoming inaccessible to legitimate users. As the project relied on making actual HTTPS Git requests (and masqueraded HTTPS REST requests) for testing purposes, we could not reliably verify the correctness of the codebase changes, thus negatively affecting the initiative of moving CentOS repos to GitLab.
Even though I run a bunch of selfhosted applications and services on my homelab infrastructure, I am by no means a system administrator, so I had to rely on Kevin Fenzi to block out the offending IP addresses. I have had my fair share of problems from AI scrapers on my testing deployment of Forgejo, which I had to keep behind Cloudflare verification, so I understood just how difficult it must have been for him to keep the unreasonable requestors at bay. Learning from the deployment of Codeberg, I have been looking into Anubis to understand just how we can leverage it to protect the upstream resources from the AI scrapers. Given that the Fedora Infrastructure was undergoing a datacenter move as of the first week of July 2025, the experimentation with (or implementation of) this solution will have to wait for later.
Imagine something pissing me off so much that I had to write about my experience with it in its own dedicated section! I want to preface the section by saying that for whatever trouble VCR.py had given me since the beginning of 2025, it had been immensely helpful in ensuring that I do not have to make a bunch of requests to an actual server. For some reason, the tests involving VCR.py used to work just fine during development but fail inexplicably on GitHub Actions - and the error messages would be of no help, especially when they were related to failing matchers, existing cassettes, non-existent cassettes, count mismatches etc. There happened to be a bunch of pull requests lined up to address the mentioned concerns, but they were not actively looked into - so I decided that it was about time for me to move away.
And move away I did - to Responses. It was more than a methodology switch though, as it included a shift in philosophy: unlike VCR.py, which used to record real HTTP requests and replay them, Responses mocks the HTTP call entirely. With the increasing roster of over 90 testcases that ensured a stellar 100% codebase coverage, converting the cassettes to Responses would have been a chore. In came my trustworthy AMD Radeon RX6800XT and Ramalama to the rescue: I was able to parse through the VCR.py cassettes to obtain Responses definition objects during the testing runtime. The solution was great, even if I say so myself, as I saved approximately ten to fifteen hours of trudging along (and of course, boredom) to painstakingly port the associated recordings to the respective HTTP testcases.
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.
RPMs of PHP version 8.4.9RC1 are available
as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≥ 8
as SCL in remi-test repository
RPMs of PHP version 8.3.23RC1 are available
as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≥ 8
as SCL in remi-test repository
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.2 is now in security mode only, so no more RC will be released.
It’s already been a month; I can’t believe how fast time flies. A busy time? Flock, Fedora DEI, and a Documentation workshop, all in one month.
As a Fedora Outreachy intern, my first month has been packed with learning and contributions. This blog shares what I worked on and how I learned to navigate open source communities.
First, I would like to give a shoutout to my amazing Mentor, Jona Azizaj for all the effort she has put into supporting me. Thank You, Jona!
Highlights from June
Fedora DEI & Docs Workshop
One of the biggest milestones this month was planning and hosting my first Fedora DEI & Docs Workshop. This virtual event introduced new contributors to Fedora documentation, showed them how to submit changes, and gave a live demo of fixing an issue – definitely a learning experience in event organizing!
You can check the Discourse post; all information is in the post itself, including slides and comments.
Flock 2025 recap
I wrote a detailed Flock to Fedora recap article, covering the first two days of talks streamed from Prague. From big announcements about Fedora’s future to deep dives into mentorship, the sessions were both inspiring and practical. Read the blog magazine recap.
Documentation contributions
This month, I have contributed to multiple docs areas, including:
DEI team docs – Updated all the broken links in the docs.
Outreachy DEI page and Outreachy mentored projects pages (under review) – I updated content and added examples of past interns and how Outreachy shaped their journeys even beyond the internship.
Past event section – Documented successful Fedora DEI activities. It serves as an archive for our past events.
Collaboration and learning
The good part? It’s great to work closely with others, and I’m learning this in the open source space. I spent some time working with other teams as well:
Mindshare Committee – Learned how to request funding for events
Design team – I had amazing postcards prepared, thanks to the Design team
Marketing – Got the Docs workshop promoted to different Fedora social accounts
Documentation team – Especially with Petr Bokoc, who shared a detailed guide on how you can easily contribute to the Docs pages.
A great learning experience. One thing I can say about people in open source (in Fedora): they’re super amazing and gentle. Cheers – I’m enjoying my journey.
My role in Join Fedora SIG
Oh, I thought it would be good to mention this as well: I am also part of the Join SIG, which helps newcomers find their place in Fedora. Through it, I’ve been able to understand how the community works, from onboarding to mentorship.
What I’ve learned
How to collaborate asynchronously – video calls and chats.
How to chair meetings – I chaired two DEI Team meetings this month. The first one was challenging, but by the second I felt confident and even enjoyed it. I will admit I didn’t know before how meetings are held over text.
How open source works – From budgeting to marketing, I’m learning how many moving pieces make Fedora possible.
What’s next
I plan to revisit the Event checklist and revamp it, work with my mentor Jona and make it meaningful and useful for future events.
I also plan to continue improving the DEI docs and promoting Fedora’s DEI work.
Last word
This month has already been full of learning and growth. If you’re also interested in helping out with the DEI work, reach out to us in the Matrix room.
Hi everyone, I’m working on building a service to make it easier for packagers to submit new packages to Fedora, improving upon and staying in line with the current submission process. My main focus is to automate away trivial tasks, provide fast and clear feedback, and tightly integrate with Git-based workflows that developers are familiar with.
This month
I focused on presenting a high-level architecture of the service to the Fedora community and collecting early feedback. These discussions were incredibly helpful in shaping the design of the project. In particular, they helped surface early concerns and identify important edge cases that we will need to support.
The key decision is to go with a monorepo model: Each new package submission will be a Pull Request to a central repository where contributors submit their spec files and related metadata.
The service will focus on:
Running a series of automated checks on the package (e.g., rpmlint).
Detecting common issues early.
Reporting results in the same PR thread for fast feedback loops.
Keeping the logic abstract and forge-agnostic, reusing packit-service’s code and layering new handlers on top of it.
Currently, I am working on setting up the local development environment and testing for the project with packit-service.
What’s next?
I’ll be working on getting a reliable testing environment ready and writing code for COPR integration for builds and the next series of post-build checks. All the code can be found at avant.
Thanks to my mentor Frantisek Lachman and the community for the great feedback and support.
We will be moving services and applications from our IAD2 datacenter to a new RDU3 one.
End user services such as docs, mirrorlists, dns, pagure.io, torrent, fedorapeople, the fedoraproject.org website, and the tier0 download server will be unaffected and should continue to work normally through the outage window.
Just a day or two more until the big datacenter move!
I'm hopeful that it will go pretty well, but you never know.
Datacenter move
Early last week we were still deploying things in the new datacenter,
and installing machines. I ran into all kinds of trouble installing
our staging openshift cluster. Much of it around versions of images
or installer binaries or kernels. Openshift seems fond of 'latest'
as a version, but that's not really helpful all the time. Especially
when we wanted to install 4.18 instead of the just released 4.19.
I did manage to finally fix all my mistakes and get it going in the end though.
We got our new ipa clusters setup and replicating from the old dc to new.
We got new rabbitmq clusters (rhel9 instead of rhel8 and newer rabbitmq) setup
and ready.
With that, almost everything is installed (except for a few 'hot spare' type things
that we can do after the move, and buildvm's, which I will be deploying
this weekend).
On Thursday we moved our staging env and it mostly went pretty well, I think.
There's still some applications that need to be deployed or fixed up, but
overall it should mostly be functional. We can fix things up as time
permits.
We still have an outstanding issue with how our power10's are configured.
Turns out we do need a hardware management console to set things up as
we had planned. We have ordered this and will be reconfiguring things
post move. For normal ppc64le builds this shouldn't have any impact.
For composes that need nested virt, they will just fail until the week
following the move (when we have some power9's on hand to handle this case).
So, sorry ppc64le users: there will likely be a few failed rawhide composes in the meantime.
Just a reminder about next week:
mirrorlists (dnf updates), docs/websites, downloads, discourse, matrix should all be unaffected
YOU SHOULD PLAN NOT TO USE ANY OTHER SERVICES until the go-ahead (Wednesday).
Monday:
Around 10:00 UTC services will start going down.
We will be moving storage and databases for a while.
Once databases and storage are set, we will bring services back up.
On Monday koji will be up and you can probably even do builds (but I strongly
advise you not to). However, bodhi will be down, so no updates will move forward
from builds done in this period.
Tuesday:
koji/build pipeline goes down.
We will be moving its storage and databases for a while.
We will bring things up once those are moved.
Wed:
Start fixing outstanding issues, deploy missing/lower pri services
At this point we can start taking problem reports to fix things (hopefully)
Thursday:
More fixing outstanding items.
Will be shutting down machines in old DC
Friday:
Holiday in the US
Hopefully things will be in a stable state by this time.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 23 June – 27 June 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
Performance Co-Pilot (PCP) is a robust framework for collecting, monitoring, and analyzing system performance metrics. Available in the repos for Fedora and RHEL, it allows administrators to gather a wide array of data with minimal configuration. This guide walks you through tuning PCP’s pmlogger service to better fit your needs—whether you’re debugging performance issues or running on constrained hardware.
Is the default setup of PCP right for your use case? Often, it’s not. While PCP’s defaults strike a balance between data granularity and overhead, production workloads vary widely. Later in this article, two scenarios will be used to demonstrate some useful configurations.
PCP's two core components are:
pmcd: collects live performance metrics from various agents.
pmlogger: archives these metrics over time for analysis.
The behavior of pmlogger is controlled by files in /etc/pcp/pmlogger/control.d/. The most relevant is local, which contains command-line options for how logging should behave.
Sample configuration:
$ cat /etc/pcp/pmlogger/control.d/local
You’ll see a line like:
localhost y y /usr/bin/pmlogger -h localhost ... -t 10s -m note
The -t 10s flag defines the logging interval—every 10 seconds in this case.
Scenario 1: High-frequency monitoring for deep analysis
Use case: Debugging a transient issue on a production server. Goal: Change the logging interval from 10 seconds to 1 second.
Edit the file (nano editor used in the examples, please use your editor of choice):
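A minimal sketch of that edit, assuming the stock control file shown above and the systemd-managed pmlogger service (exact restart command may differ on your setup):

$ sudo nano /etc/pcp/pmlogger/control.d/local
  # on the localhost line, change the interval option:
  #   -t 10s   ->   -t 1s
$ sudo systemctl restart pmlogger
  # subsequent archives now record a sample every second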
Scenario 2: Longer archive retention
PCP archives are rotated daily by a cron-like service. Configuration lives in:
$ cat /etc/sysconfig/pmlogger
Default values:
PCP_MAX_LOG_SIZE=100
PCP_MAX_LOG_VERSIONS=14
PCP_MAX_LOG_SIZE: total archive size (in MB).
PCP_MAX_LOG_VERSIONS: number of daily logs to keep.
Goal: Keep logs for 30 days.
Edit the file:
$ sudo nano /etc/sysconfig/pmlogger
Change:
PCP_MAX_LOG_VERSIONS=30
No service restart is required. Changes apply during the next cleanup cycle.
Final thoughts
PCP is a flexible powerhouse. With just a few changes, you can transform it from a general-purpose monitor into a specialized tool tailored to your workload. Whether you need precision diagnostics or long-term resource tracking, tuning pmlogger gives you control and confidence.
So go ahead—open that config file and start customizing your system’s performance story.
Note: This article is dedicated to my wife, Rupali Suraj Patil, who inspires me every day.
To take a short break from datacenter work, I have been meaning
to look into ansible lightspeed for a long time, so I finally
sat down and took an introductory course and have some thoughts
about how we might use it on Fedora's ansible setup.
The official name of the product is: "Red Hat Ansible Lightspeed
with IBM watsonx Code Assistant", which is a bit... verbose, so I will
just use 'Lightspeed' here.
This is one of the very first AI products Red Hat produced, so
it's been around a few years. Some of that history is probably why
it's specifically using watsonx instead of some other LLMs on the
backend.
First a list of things I really like about it:
It's actually trained on real, correct, good ansible content.
It's not a 'general' LLM trained on the internet, it's using some
ansible galaxy content (you can opt out if you prefer) as well
as a bunch of curated content from real ansible. This always
struck me as one of the very best ways to leverage LLMs, instead
of the general 'hoover in any data and use it' approach. In this case it really
helps make the suggestions and content more trustable and less
hallucinated.
Depending on the watsonx subscription you have, you may train
it on _your_ ansible content. Perhaps you have different standards
than others or particular ways you do things. You can train it
on them and actually get it to give you output that uses that.
Having something be able to generate a boilerplate for you that
you can review and fix up is also a really great use for llms, IMHO.
And some things I'm not crazy about:
It requires AAP (ansible automation platform) and watsonx licenses.
(mostly, see below). It would be cool if it could leverage a local
model or Red Hat AI in openshift instead of watsonx, but as noted above
it's likely tied to that for historical reasons.
It uses a vscode plugin. I'm much more a vim type old sysadmin,
and the idea of making a small ansible playbook that's just a text
file seems like vscode is... overkill. I can of course see why they
chose to implement things this way.
And something I sure didn't know: There's an ansible code bot on github.
It can scan your ansible git repo and file a PR to bring it in line with
best practices. Pretty cool. We have a mirror of our pagure ansible repo
on github, however, it seems it is not mirroring. I want to sort that out
and then enable the bot to see how it does. :)
There are less than four weeks until the Fedora 43 mass rebuild.
While mass rebuilds inevitably result in some breakage, the following issues appear likely to be common causes of
FTBFS or FTI:
Missing sysusers.d configs
RPM now generates virtual dependencies on user(USERNAME) and/or group(GROUPNAME) under various conditions (e.g. a file in %files is listed with such a user/group ownership).
In order to resolve these dependencies, packages must now provide sysusers.d config files which are automatically parsed to generate similar virtual provides.
Without these configs, a newly rebuilt package will FTI, and anything which buildrequires it will fail to build for being unable to install its dependencies.
See the Change documentation for more details.
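As a rough illustration only, a package that ships files owned by a dedicated account might add a sysusers.d snippet along these lines (the "foo" user, paths, and file names are made up; see the Change page and sysusers.d(5) for the actual requirements):

# foo.sysusers, installed as /usr/lib/sysusers.d/foo.conf
u foo - "Foo daemon account" /var/lib/foo /usr/sbin/nologin

# in the spec file (illustrative):
install -Dpm 0644 %{SOURCE1} %{buildroot}%{_sysusersdir}/foo.conf
%files
%{_sysusersdir}/foo.conf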
Python 3.14
While the Python team has already rebuilt many Python-dependent packages for 3.14, some packages still need to be fixed for changes in the new version
(see What’s New in Python 3.14).
Furthermore, packages which use Python only at build-time (e.g. for build scripts, or running tests, etc.) have yet to be rebuilt, and may need to be fixed as well.
Aclocal macros moved in gettext 0.25
In previous versions of gettext-devel, its various aclocal macros were installed in the default macro search path, and therefore were generally found by autoreconf without any effort, or even when they shouldn’t have been.
However, that actually caused conflict with autopoint when trying to pin an older macros version with AM_GNU_GETTEXT_VERSION.
With gettext 0.25, these macros are now located in a private path, and while autopoint still works as designed, other (technically unsupported) use cases have broken as a result, such as the use of AM_ICONV by itself.
The workaround for such cases is to export ACLOCAL_PATH=/usr/share/gettext/m4/ before autoreconf.
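In a spec file, the workaround might look roughly like this (assuming the package regenerates its build system during %build):

%build
export ACLOCAL_PATH=/usr/share/gettext/m4/
autoreconf -fiv
%configure
%make_build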
X11 disablement in mutter breaks xwfb-run
As part of the ongoing deprecation of the X11 servers, many packages previously switched from xvfb-run to xwfb-run (part of xwayland-run), the latter requiring a Wayland compositor to be specified.
The next step of that for F43 was disabling the X11 session support in the GNOME desktop, which included disabling the X11 window management support in mutter.
Many GNOME packages, and all such packages in the RHEL/ELN set, are using mutter as the compositor.
(Packages outside of the GNOME and RHEL sets may use weston instead and are not impacted.)
Unfortunately, due to a bug in mutter before 49.alpha, disabling X11 also mistakenly disabled the --no-x11 option which (counterintuitively) is not the opposite of the now-disabled --x11 argument (to run as an X11 window manager) but disabled the launching of an XWayland server.
With the option disabled but XWayland support enabled, that means mutter is always trying to launch XWayland even where it shouldn’t (such as in a mock chroot, where it doesn’t work).
xwayland-run relied on this option to avoid that, but with that not working, any use of xwfb-run -c mutter (or the like) is currently failing.
This should be fixed as part of the 49~alpha.0 update which is currently in testing but blocked by additional packages needing to be updated in tandem.
Hopefully the Workstation WG can get this fixed in time for the F43 mass rebuild.
glibc 2.42 changes
The development version of glibc now in rawhide/F43 includes a few potentially breaking changes:
The ancient <termio.h> and struct termio interfaces have been removed. Using <sys/ioctl.h> and/or <termios.h>, and struct termios, instead should cover the common use cases.
lockf now has a warn_unused_result attribute (commit), which will generate warnings (or promoted errors) in code where the result is ignored.
GCC 15
While many of the changes in GCC 15 were handled in the aftermath of the previous F42 mass rebuild, there are still some packages which have yet to be rebuilt.
Particularly, mingw-gcc 15 did not land until after the mass rebuild, so some mingw packages may see failures. (For those with corresponding native packages, hopefully the fix is already available.)
Also, some later changes in GCC 15 (after the mass rebuild) may also cause failures (one example).
Flaky tests
All too often, builds fail in %check due to flaky or architecture-dependent tests. Rerunning an entire build (sometimes multiple times!) just in order to get tests to pass is a waste of time and resources.
Instead, please consider moving tests out of the spec file and into CI instead.
Note:
This list was generated by AI – just kidding!
Actually, this is the result of an early mass rebuild in ELN, which was done with the express purpose of finding such issues in advance, and get a head start on fixing them.
Hopefully this information, and the fixes provided along the way (linked in the tracker) will help maintainers to have a smoother mass rebuild experience.
One of the realities of creating open source software is that people will come along and say you must do something. Often, this happens to be a something that’s very valuable to them. If you’re lucky, they’ll help you do it. Much of the time, though, that’s not the case. But no matter what users or best practices say, the “O” in “FOSS” still does not stand for “obligation”. Unless you’ve committed to doing something, you don’t have to do it.
One good example is having a process for people to privately report security bugs. This is widely accepted as a best practice because it allows for vulnerabilities to be fixed before they’re broadly known. Although this doesn’t entirely eliminate the possibility of a bad actor taking advantage of the vulnerability, it reduces the risk. But that process adds overhead for maintainers, and it puts them in a position to make a fix by a particular deadline. For volunteer maintainers in particular, this can overwhelm the rest of the project work.
This is the conclusion that Nick Wellnhofer, the (sole) maintainer of libxml2, recently reached. He decided to no longer treat security reports as special. Instead, they will be treated just like any other bug report.
The reactions that I’ve seen are almost universally supportive. This is the right approach, especially considering that libxml2 “never had the quality to be used in mainstream browsers or operating systems to begin with.” Just because big companies decided to start using a project, that doesn’t mean the maintainers suddenly have to produce production-quality software. Wellnhofer set the expectations clearly, and that’s the only obligation that he has to meet. If that’s not acceptable to downstreams, they are free to use another library or to make their own.
This article will describe the content and structure of the sosreport output. The aim is to improve its usefulness through a better understanding of its contents.
What is sosreport?
sosreport is a powerful command-line utility available on Fedora, Red Hat Enterprise Linux (RHEL), CentOS, and other RHEL-based systems to collect a comprehensive snapshot of the system’s configuration, logs, services, and state. The primary use is for diagnosing issues, especially during support cases with Red Hat or other vendors.
When executed, sosreport runs a series of modular plugins that collect relevant data from various subsystems like networking, storage, SELinux, Docker, and more. The resulting report is packaged into a compressed tarball, which can be securely shared with support teams to expedite troubleshooting.
In essence, sosreport acts as a black box recorder for Linux — capturing everything from system logs and kernel messages to active configurations and command outputs — helping support engineers trace problems without needing direct access to the system.
How to Generate a sosreport
To use sosreport on Fedora, RHEL, or CentOS, run the following command as root or with sudo:
sudo sosreport
This command collects system configuration, logs, and command outputs using various plugins. After a few minutes, it generates a compressed tarball in /var/tmp/ (or a similar location), typically named like:
sosreport-hostname-20250623-123456.tar.xz
You may be prompted to enter a case ID or other metadata, depending on your system configuration or support workflow.
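If you would rather skip the prompts entirely, recent sos versions accept non-interactive flags, roughly like this (check sos report --help on your system, as options vary between versions):

sudo sos report --batch --case-id 01234567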
The sosreport generated tarball contains a detailed snapshot of the system’s health and configuration. It has a well-organized structure which reflects the data collected from the myriad Linux subsystems.
Exploring sosreport output is challenging due to the sheer volume of logs, configuration files, and system command outputs it contains. However, understanding its layout is key for support engineers and sysadmins to quickly locate and interpret crucial diagnostic information.
sosreport directory layout
When the tarball is unpacked, the directory structure typically resembles this:
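A representative (and heavily abridged) layout might look like the following; the exact directories depend on the sos version and on which plugins ran:

sosreport-hostname-20250623-123456/
    sos_commands/          (per-plugin directories holding command output)
    sos_logs/              (logs from the sos run itself)
    sos_reports/           (indexes and summaries of the collection)
    sos_strings/           (large extracted data such as journals)
    etc/                   (copies of configuration files)
    var/log/               (copies of system log files)
    proc/  sys/            (selected kernel and hardware state)
    free, hostname, installed-rpms, uptime, ...  (top-level convenience files)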
sos_commands/
Each file name matches the Linux command used, with all options. The contents are the actual command output, making the plugin behavior transparent.
sos_reports/
This directory contains multiple formats that index and summarize the entire sosreport:
sos.json: A machine-readable index of all collected files and commands.
manifest.json: Describes how sosreport executed – timestamps, plugins used, obfuscation done, errors, etc.
HTML output for easy browsing via browser.
sos_logs/
Contains logs from the execution of sosreport itself.
sos.log: Primary log file that highlights any errors or issues during data collection.
sos_strings/
Contains journal logs for up to 30 days, extracted using journalctl
Can be quite large, especially on heavily used systems
Structured into subdirectories like logs/ or networkmanager/
EXTRAS/
This is not a default part of an sosreport. It is created by the sos_extras plugin and used to collect any custom user-defined files.
Why this layout matters
Speed: Logical grouping of directories helps engineers drill down without manually parsing gigabytes of log files.
Traceability: Knowing where each file came from and what command produced it enhances reproducibility.
Automation: Tools like soscleaner or sos-analyzer rely on this structure for automated diagnostics.
Final thoughts
While sosreport is a powerful diagnostic tool, its effectiveness hinges on understanding its structure. With familiarity, engineers can isolate root causes of failures, uncover misconfigurations, and collaborate more efficiently with support teams. If you haven’t yet opened one up manually, try it — there’s a lot to learn from the insides!
This is my first Fedora Magazine article, dedicated to my wife Rupali Suraj Patil — my constant source of inspiration.
JP was puzzled that using podman run --memory=2G … would not result in the 2G limit being visible inside the container.
While we were able to identify this as a visualization problem — tools like free(1) only look at /proc/meminfo and that is not virtualized inside a container, you'd have to look at /sys/fs/cgroup/memory.max and friends instead — I couldn't leave it at that.
And then I remembered there is actually something that can provide a virtual (cgroup-aware) /proc for containers: LXCFS!
But does it work with Podman?!
I always used it with LXC, but there is technically no reason why it wouldn't work with a different container solution — cgroups are cgroups after all.
As we all know: there is only one way to find out!
Take a fresh Debian 12 VM, install podman and verify things behave as expected:
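Something along these lines demonstrates the problem (package and image choices are illustrative):

$ sudo apt install -y podman
$ sudo podman run -it --rm --memory=2G docker.io/library/debian:12 \
      grep MemTotal /proc/meminfo
# prints the host's total RAM, not the 2G limit we asked for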
And after installing (and starting) lxcfs, we can use the virtual /proc/meminfo it generates by bind-mounting it into the container (LXC does that part automatically for us):
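With Podman the bind-mount has to be passed explicitly; a sketch, assuming the lxcfs package mounts its virtual files under /var/lib/lxcfs as usual:

$ sudo apt install -y lxcfs
$ sudo podman run -it --rm --memory=2G \
      -v /var/lib/lxcfs/proc/meminfo:/proc/meminfo:ro \
      docker.io/library/debian:12 grep MemTotal /proc/meminfo
# MemTotal now reflects the 2G cgroup limit instead of the host's RAM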
The same of course works with all the other proc entries lxcfs provides (cpuinfo, diskstats, loadavg, meminfo, slabinfo, stat, swaps, and uptime here), just bind-mount them.
And yes, free(1) now works too!
bash-5.1# free -m
               total        used        free      shared  buff/cache   available
Mem:            2048           3        1976           0          67        2044
Swap:              0           0           0
Just don't blindly mount the whole /var/lib/lxcfs/proc over the container's /proc.
It did work (as in: "bash and free didn't crash") for me, but with /proc/$PID etc missing, I bet things will go south pretty quickly.
Large Language Models (LLMs) are powerful tools, but their unchecked proliferation carries technical, ethical, and professional consequences for software developers and the industry at large.
Patch review is an important and useful part of the kernel development process, but it is also a time-consuming part. To see if I could save some human reviewer time I've been pushing kernel patch-series to a branch on github, creating a pull-request for the branch and then assigning it to Copilot for review. The idea being that I would fix any issues Copilot catches before posting the series upstream, saving a human reviewer from having to catch the issues.
I've done this for 5 patch-series: one, two, three, four, five, totalling 53 patches. Click the number to see the pull-request and Copilot's reviews.
Unfortunately the results are not great. On 53 patches Copilot had 4 low-confidence comments which were not useful and 3 normal comments. 2 of the normal comments were on the power-supply fwnode series: one was about spelling degrees Celcius as degrees Celsius instead, which is the single valid remark. The other remark was about re-assigning a variable without freeing it first, but Copilot missed that the re-assignment was to another variable since this happened in a different scope. The third normal comment (here) was about as useless as they can come.
To be fair these were all patch-series written by me and then already self-reviewed and deemed ready for upstream posting before I asked Copilot to review them.
As another experiment I did one final pull-request with a couple of WIP patches to add USBIO support from Intel. Copilot generated 3 normal comments here, all 3 of which are valid and one of them catches a real bug. Still, given the WIP state of this case and the fact that my own review has found a whole lot more than just this, including the need for a bunch of refactoring, the results of this Copilot review are also disappointing IMHO.
Copilot also automatically generates summaries of the changes in the pull-requests. At a first look these look useful for e.g. a cover-letter for a patch-set, but they are often full of half-truths, so at a minimum these need some very careful editing / correcting before they can be used.
My personal conclusion is that running patch-sets through Copilot before posting them on the list is not worth the effort.
In this fifth article of the “System insights with command-line tools” series we explore free and vmstat, two small utilities that reveal a surprising amount about your Linux system’s health. free gives you an instant snapshot of how RAM and swap are being used. vmstat (the virtual memory statistics reporter) reports a real-time view of memory, CPU, and I/O activity.
By the end of this article you will be able to translate buffers and cache into “breathing room”, read the mysterious available column with confidence, and spot memory leaks or I/O saturation.
A quick tour of free
Basic usage
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            23Gi        14Gi       575Mi       3,3Gi        12Gi       8,8Gi
Swap:          8,0Gi       6,6Gi       1,4Gi
free parses /proc/meminfo and prints totals for physical memory and swap, along with kernel buffers and cache. Use -h for human-readable units, -s 1 to refresh every second, and -c N to stop after N samples which is handy to get a trend when doing something in parallel. For example, free -s 60 -c 1440 gives a 24-hour CSV-friendly record without installing extra monitoring daemons.
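One way to run such a capture in the background might be the following (the log path is arbitrary):

$ free -h -s 60 -c 1440 > /tmp/memory-trend.log &
# one sample per minute for 24 hours; review later with: less /tmp/memory-trend.log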
Free memory refers to RAM that is entirely unoccupied. It isn’t being used by any process or for caching. On server systems, I tend to view this as wasted since unused memory isn’t contributing to performance. Ideally, after a system has been running for some time, this number should remain low.
Available memory, on the other hand, represents an estimate of how much memory can be used by new or running processes without resorting to swap. It includes free memory plus parts of the cache and buffers that the system can reclaim quickly if needed.
In essence, the distinction in Linux lies here: free memory is idle and unused, while available memory includes both truly free space and memory that can be readily freed up to keep the system responsive without swapping. It is not a problem to have low free memory; available memory is usually what you should be concerned about.
A healthy system might even show used ≈ total yet available remains large; that mostly reflects cache at work. Fedora’s kernel will automatically drop clean cache pages whenever an application needs the space, so cached memory is not wasted. Think of it as a working set that just hasn’t been reassigned yet.
Spotting problems with free
Rapidly shrinking available combined with rising swap used indicates real memory pressure.
Large swap-in/out spikes point to thrashing workloads or runaway memory consumers.
vmstat – Report virtual memory statistics
vmstat (virtual memory statistics) displays processes, memory, paging, block-I/O, interrupts, context switches, and CPU utilization in a single line. Run it with an interval and count to watch trends, for example vmstat 5 3 to take three samples five seconds apart (the output shown below has been split into sections for better readability):
---swap-- -----io----
  si  so    bi    bo
   8  21   130   724
   0   0     0     0
   0   0     8    48

-system-- -------cpu-------
   in   cs us sy id wa st gu
 2851   19 15  7 77  0  0  0
 5779 7246 14 10 77  0  0  0
 5141 6525 12  9 79  0  0  0
Anatomy of the output
From the vmstat(8) manpage:
Procs
  r: The number of runnable processes (running or waiting for run time).
  b: The number of processes blocked waiting for I/O to complete.

Memory (affected by the --unit option)
  swpd: the amount of swap memory used.
  free: the amount of idle memory.
  buff: the amount of memory used as buffers.
  cache: the amount of memory used as cache.
  inact: the amount of inactive memory. (-a option)
  active: the amount of active memory. (-a option)

Swap (affected by the --unit option)
  si: Amount of memory swapped in from disk (/s).
  so: Amount of memory swapped to disk (/s).

IO
  bi: Kibibyte received from a block device (KiB/s).
  bo: Kibibyte sent to a block device (KiB/s).

System
  in: The number of interrupts per second, including the clock.
  cs: The number of context switches per second.

CPU (percentages of total CPU time)
  us: Time spent running non-kernel code. (user time, including nice time)
  sy: Time spent running kernel code. (system time)
  id: Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
  wa: Time spent waiting for IO. Prior to Linux 2.5.41, included in idle.
  st: Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.
  gu: Time spent running KVM guest code (guest time, including guest nice).
Practical diagnostics
Section   Key fields                    What to watch
Procs     r (run-queue), b (blocked)    r > CPU cores = contention
Memory    swpd, free, buff, cache       Rising swpd with falling free = pressure
Swap      si, so                        Non-zero so means the kernel is swapping out
IO        bi, bo                        High bo + high wa hints at write-heavy workloads
System    in, cs                        Sudden spikes may indicate interrupt storms
CPU       us, sy, id, wa, st            High wa (I/O wait) = storage bottleneck
Catching a memory leak
Run vmstat 500 in one terminal while your suspect application runs in another. If free keeps falling and si/so climb over successive samples, physical RAM is being exhausted and the kernel starts swapping, which is classic leak behavior.
Finding I/O saturation
When wa (CPU wait) and bo (blocks out) soar while r remains modest, the CPU is idle but stuck waiting for the disk. Consider adding faster storage or tuning I/O scheduler parameters.
Detecting CPU over-commit
A sustained r that is double the number of logical cores with low wa and plenty of free means CPU is the bottleneck, not memory or I/O. Use top or htop to locate the busiest processes, or scale out workloads accordingly.
Conclusion
Mastering free and vmstat gives you a lens into memory usage, swap activity, I/O latency, and CPU load. For everyday debugging: start with free to check if your system is truly out of memory, then use vmstat to reveal the reason, whether it’s memory leaks, disk bottlenecks, or CPU saturation.
Stay tuned for the next piece in our “System insights with command-line tools” series and happy Fedora troubleshooting!
Have you ever asked ChatGPT or other online AI services to review and correct
your emails or posts for you? Have you ever pondered over what the service or
company, such as OpenAI, does with the text you provide them “for free”? What
are the potential risks of sharing private content, possibly leading to
copyright headaches?
I am going to demonstrate how I use Ramalama and local models instead of
ChatGPT for English corrections, on a moderately powerful laptop. The latest
container-based tooling surrounding the Ramalama project makes running Large
Language Models (LLMs) on Fedora effortless and free of those risks.
Installation and starting
Ramalama is packaged for Fedora! Just go with:
dnf install -y ramalama
It is a fairly swift action, I assure you. Do not be misled though. You will
need to download gigabytes of data later on, during the Ramalama runtime :-)
I take privacy seriously; the more isolation the better. Even though
Ramalama claims it isolates the model, I’d like to isolate Ramalama itself.
Therefore, I create a new user called “ramalama” with the command:
sudo useradd ramalama && sudo su - ramalama
This satisfies me, as I trust the Ramalama/Podman teams not to
escalate privileges (at least I don’t see any setuid bits, etc.).
Experimenting with models
$ ramalama run
ramalama run: error: the following arguments are required: MODEL
The tool does a lot for you, but you still need to research which model is most
likely the one you require. I haven’t done extensive research myself,
but I’ve heard rumors that LLama3 or Mistral are relatively good options for
English on laptops. Lemme try:
$ ramalama run llama3
Downloading ollama://llama3:latest ...
Trying to pull ollama://llama3:latest ...
99% |█████████████████████████████████████████████████████████████████████████ | 4.33 GB/ 4.34 GB 128.84 MB/s
Getting image source signatures
Copying blob 97024d81bb02 done |_
Copying blob 97971704aef2 done |_
Copying blob 2b2bdebbc339 done |_
Copying config d13a3de051 done |_
Writing manifest to image destination
🦭 > Hello, who is there?
Hello! I'm an AI, not a human, so I'm not "there" in the classical sense. I exist solely as a digital entity,
designed to understand and respond to natural language inputs. I'm here to help answer your questions, provide
information, and chat with you about various topics. What's on your mind?
Be prepared for that large download (6GB in my case, not just the 4.3GB model;
I’m including the Ramalama Podman image as well).
The command-line sucks here, server helps
Very early on, you’ll find that the command-line prompt doesn’t make it easy for
you to type new lines; therefore, asking the model for help with composing
multi-line emails isn’t straightforward. Yes, you can use Python’s multi\nline
strings\nlike this one, but for this you’ll at least need a conversion tool.
I want to have a UI similar to ChatGPT’s, and it is possible!
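Ramalama’s server mode is what provides that; something like the following should work (model name as pulled earlier, listening port and UI details depend on your Ramalama version):

$ ramalama serve llama3
# then open the printed local URL in a browser for a chat UI,
# or talk to the OpenAI-compatible API under /v1/ on the same port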
...
srv log_server_r: request: POST /v1/chat/completions 192.168.0.4 500
srv log_server_r: request: GET / 192.168.0.4 200
srv log_server_r: request: GET /props 192.168.0.4 200
Admittedly, that’s just too much. I don’t need a full prompt; I’d still prefer
a simple command-line interface that would let me provide multiline strings and
respond with the model’s answer. Nah, we need to package python-openai into
Fedora (but it is not yet there).
Performance concerns
Both llama3 and Mistral respond surprisingly quickly. The reply starts
immediately, and they print approximately 30 tokens per second. By contrast,
Deepseek takes much longer to respond, approximately a minute, but the token
rate is roughly equivalent.
I was surprised to find that while the GPU was fully utilized, NVTOP did not
report any additional GPU memory consumption (before, during or after model
provided the answer). Does anyone have any ideas as to why this might be the
case?
Summary
The mentioned models perform exceptionally well for my use-case. My
interactions with the model look like:
fix english:
the multi-line text
that I want to correct
and the outputs are noticeably superior to the inputs :-).
More experimentation is possible with different models, temperature settings,
RAG, and more. Refer to ramalama run --help for details.
However, I have been encountering some hardware issues with my Radeon 780M.
If I run my laptop for an extended period, starting the prompt with a lengthy
question can trigger a black screen situation, leaving no other interactions
possible (reboot needed). If you have any suggestions on how to debug these
issues, please let me know.
Super busy recently, focused on the datacenter move that's happening
in just 10 days! (I hope).
datacenter move
Just 10 days left. We are not really where I was hoping to be at this
point, but hopefully we can still make things work.
We got our power10 boxes installed and setup and... we have an issue.
Some of our compose process uses vm's in builder guests, but the way
we have the power10 setup with one big linux hypervisor and guests on that
doesn't allow those guests to have working nested virt. Only two levels
are supported. So, we are looking at options for early next week and
hopefully we can get something working in time for the move. Options
include getting a vHMC to carve out lpars, moving an existing power9
machine in place at least for the move for those needs and a few more.
I'm hoping we can get something working in time.
We are having problems with our arm boxes too. First there were
strange errors on the addon 25G cards. That turned out to be a transceiver
problem and was fixed Thursday. Then the addon network
cards in them don't seem to be able to network boot, which makes installing
them annoying. We have plans for workarounds there too for early next week:
either connecting the onboard 1G nics, or some reprogramming of the cards
to get them working, or some installs with virtual media. I'm pretty sure
we can get this working one way or another.
On the plus side, tons of things are deployed in the new datacenter already
and should be ready. Early next week we should have ipa clusters replicating.
Also soon we should have staging openshift cluster in place.
Monday, networking is going to do a resilience test on the networking setup
there. This will have them take down one 'side' of the switches and confirm
all our machines are correctly balancing over their two network cards.
Tuesday we have a 'go/no-go' meeting with IT folks. Hopefully we can be go
and get this move done.
Next wed, I am planning to move all of our staging env over to the new
datacenter. This will allow us to have a good 'dry run' at the production
move and also reduce the number of things that we need to move the following
week. If you are one of the very small number of folks that uses our
staging env to test things, make a note that things will be down on wed.
Then more prep work and last minute issues and on into switcharoo week.
Early monday of that week, things will be shutdown so we can move storage,
then storage moves, we sync other data over and bring things up. Tuesday
will be the same for the build system side. I strongly advise contributors
to just go do other things Monday and Tuesday. Lots of things will be in
a state of flux. Starting Wednesday morning we can start looking at issues and
fixing them up.
Thanks for everyone's patience during this busy time!
misc other stuff
I've been of course doing other regular things, but my focus has been on datacenter
moving. Just one other thing to call out:
Finally we have our updated openh264 packages released for updates in stable
fedora releases. It was a long sad road, but hopefully now we can get things
done much much quicker. The entire thing wasn't just one thing going wrong or
blocking stuff, it was a long series of things, one after another. We are
in a much better state now moving forward though.
Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.
I was invited to give a popular lecture at the University departments' open day, which is a part of the festival. This is the second time in a row that I have been invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology caused by the fall of the economy during the 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/114762414965872142