/rss20.xml">

Fedora People

Fedora DEI Outreachy Intern – My first month Recap 🎊

Posted by Fedora Community Blog on 2025-07-01 12:00:00 UTC

Hey everyone!

It’s already been a month, and I can’t believe how fast time has flown. A busy month it was: Flock, the Fedora DEI and Documentation workshop, all in one!

As a Fedora Outreachy intern, my first month has been packed with learning and contributions. This blog shares what I worked on and how I learned to navigate open source communities.

First, I would like to give a shoutout to my amazing mentor, Jona Azizaj, for all the effort she has put into supporting me. Thank you, Jona!

Highlights from June

Fedora DEI & Docs Workshop

One of the biggest milestones this month was planning and hosting my first Fedora DEI & Docs Workshop. This virtual event introduced new contributors to Fedora documentation, showed them how to submit changes, and gave a live demo of fixing an issue – definitely a learning experience in event organizing!

You can check the Discourse post; it has all the information, including the slides and comments.

Flock 2025 recap

I wrote a detailed Flock to Fedora recap article, covering the first two days of talks streamed from Prague. From big announcements about Fedora’s future to deep dives into mentorship, the sessions were both inspiring and practical. Read the full recap on the blog.

Documentation contributions

This month, I have contributed to multiple docs areas, including:

  • DEI team docs – Fixed all the broken links in the docs.
  • Outreachy DEI page and Outreachy mentored projects pages (under review) – Updated content and added examples of past interns and how Outreachy shaped their journeys, even beyond the internship.
  • How to Organize events section – Created a step-by-step guide for event planning.
  • Past events section – Documented successful Fedora DEI activities; it serves as an archive of our past events.

Collaboration and learning

The best part? It’s great to work closely with others, and that’s something I’m learning in the open source space. I spent some time working with other teams as well:

  • Mindshare Committee – Learned how to request funding for events
  • Design team – Had amazing postcards prepared, thanks to the Design team
  • Marketing – Got the Docs workshop promoted across Fedora’s social accounts
  • Documentation team – Worked especially with Petr Bokoc, who shared a detailed guide on how to easily contribute to the Docs pages

A great learning experience. One thing I can say about people in open source (in Fedora): they’re super amazing and gentle 🙂 Cheers – I’m enjoying my journey.

My role in Join Fedora SIG

I thought it’s worth mentioning this as well: I am also part of the Join SIG, which helps newcomers find their place in Fedora. Through it, I’ve come to understand how the community works, from onboarding to mentorship.

What I’ve learned

  • How to collaborate asynchronously – through video calls and chats.
  • How to chair meetings – I chaired two DEI Team meetings this month. The first one was challenging, but by the second I felt confident and even enjoyed it. I’ll admit I didn’t know meetings could be held over text 🙂
  • How open source works – From budgeting to marketing, I’m learning how many moving pieces make Fedora possible. 

What’s next

I plan to revisit and revamp the Event checklist, working with my mentor Jona to make it meaningful and useful for future events.

I also plan to continue improving the DEI docs and promoting Fedora’s DEI work.

Last word

This month has already been full of learning and growth. If you’re also interested in helping out with the DEI work, reach out to us in the Matrix room.

Thanks for reading!

Your Friend in Open Source.

The post Fedora DEI Outreachy Intern – My first month Recap 🎊 appeared first on Fedora Community Blog.

Simplifying Fedora Package Submission Progress Report – GSoC ’25

Posted by Fedora Community Blog on 2025-07-01 10:07:58 UTC

Student: Mayank Singh

  • Fedora Account: manky201

About Project

Hi everyone, I’m working on building a service to make it easier for packagers to submit new packages to Fedora, improving upon and staying in line with the current submission process. My main focus is to automate away trivial tasks, provide fast and clear feedback, and tightly integrate with Git-based workflows that developers are familiar with.

This month

I focused on presenting a high-level architecture of the service to the Fedora community and collecting early feedback. These discussions were incredibly helpful in shaping the design of the project. In particular, they helped surface early concerns and identify important edge cases that we will need to support.

The key decision is to go with a monorepo model:
Each new package submission will be a Pull Request to a central repository where contributors submit their spec files and related metadata.

The service will focus on:

  • Running a series of automated checks on the package (e.g., rpmlint); a minimal sketch of such a check follows after this list.
  • Detecting common issues early.
  • Reporting feedback and results in the same PR thread for fast feedback loops.
  • Keeping the logic abstract and forge-agnostic by reusing packit-service’s code and layering new handlers on top of it.
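To make the first point concrete, here is a minimal sketch of what the automated-check step could look like as a shell script. The branch name and results file are hypothetical, and the real service will run this logic through packit-service handlers rather than a standalone script:

#!/bin/bash
# Hypothetical sketch: run rpmlint on every spec file touched by a PR
# and collect the output so it can be posted back to the PR thread.
set -euo pipefail

RESULTS=$(mktemp)

# Spec files changed between the PR branch and the target branch.
for spec in $(git diff --name-only origin/main...HEAD -- '*.spec'); do
    echo "== rpmlint ${spec} ==" >> "${RESULTS}"
    # rpmlint exits non-zero on findings; keep going so all specs get checked.
    rpmlint "${spec}" >> "${RESULTS}" 2>&1 || true
done

cat "${RESULTS}"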

Currently, I am working on setting up the local development environment and testing for the project with packit-service.

What’s Next?

I’ll be working on getting a reliable testing environment ready and writing code for COPR integration for builds and the next series of post-build checks. All the code can be found at avant.

Thanks to my mentor Frantisek Lachman and the community for the great feedback and support.

Looking forward to sharing further updates.

The post Simplifying Fedora Package Submission Progress Report – GSoC ’25 appeared first on Fedora Community Blog.

Datacenter Move outage

Posted by Fedora Infrastructure Status on 2025-06-30 01:00:00 UTC

We will be moving services and applications from our IAD2 datacenter to a new RDU3 one.

End user services such as docs, mirrorlists, dns, pagure.io, torrent, fedorapeople, the fedoraproject.org website, and the tier0 download server will be unaffected and should continue to work normally through the outage window.

Other services …

2 days to datacenter move!

Posted by Kevin Fenzi on 2025-06-28 18:07:20 UTC
Scrye into the crystal ball

Just a day or two more until the big datacenter move! I'm hopeful that it will go pretty well, but you never know.

Datacenter move

Early last week we were still deploying things in the new datacenter, and installing machines. I ran into all kinds of trouble installing our staging openshift cluster, much of it around versions of images or installer binaries or kernels. Openshift seems fond of 'latest' as a version, but that's not really helpful all the time, especially when we wanted to install 4.18 instead of the just-released 4.19. I did manage to finally fix all my mistakes and get it going in the end though.

We got our new ipa clusters set up and replicating from the old dc to the new one. We got new rabbitmq clusters (rhel9 instead of rhel8, and newer rabbitmq) set up and ready.

With that, almost everything is installed (except for a few 'hot spare' type things that we can do after the move, and buildvm's... which I will be deploying this weekend).

On Thursday we moved our staging env and it mostly went pretty well, I think. There are still some applications that need to be deployed or fixed up, but overall it should mostly be functional. We can fix things up as time permits.

We still have an outstanding issue with how our power10's are configured. Turns out we do need a hardware management console to set things up as we had planned. We have ordered this and will be reconfiguring things post move. For normal ppc64le builds this shouldn't have any impact. Composes that need nested virt will just fail until the week following the move (when we have some power9's on hand to handle this case). So, ppc64le users, expect a few failed rawhide composes; sorry about that.

Just a reminder about next week:

  • mirrorlists (dnf updates), docs/websites, downloads, discourse, matrix should all be unaffected

  • YOU SHOULD PLAN TO NOT USE ANY OTHER SERVICES until the go-ahead (Wed).

Monday:

  • Around 10:00 UTC services will start going down.

  • We will be moving storage and databases for a while.

  • Once databases and storage are set we will bring services back up

  • On Monday koji will be up and you can probably even do builds (but I strongly advise you not to). However, bodhi will be down, so no updates will move forward from builds done in this period.

Tuesday:

  • koji/build pipeline goes down.

  • We will be moving its storage and databases for a while.

  • We will bring things up once those are moved.

Wed:

  • Start fixing outstanding issues, deploy missing/lower pri services

  • At this point we can start taking problem reports to fix things (hopefully)

Thursday:

  • More fixing outstanding items.

  • Will be shutting down machines in old DC

Friday:

  • Holiday in the US

  • Hopefully things will be in a stable state by this time.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114762414965872142

Infra and RelEng Update – Week 26

Posted by Fedora Community Blog on 2025-06-27 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 23 June – 27 June 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us on #apps or #admin channel on chat.fedoraproject.org.

The post Infra and RelEng Update – Week 26 appeared first on Fedora Community Blog.

🔧 Unlocking system performance: A practical guide to tuning PCP on Fedora & RHEL

Posted by Fedora Magazine on 2025-06-27 08:00:00 UTC

Performance Co-Pilot (PCP) is a robust framework for collecting, monitoring, and analyzing system performance metrics. Available in the repos for Fedora and RHEL, it allows administrators to gather a wide array of data with minimal configuration. This guide walks you through tuning PCP’s pmlogger service to better fit your needs—whether you’re debugging performance issues or running on constrained hardware.

Is the default setup of PCP right for your use case? Often, it’s not. While PCP’s defaults strike a balance between data granularity and overhead, production workloads vary widely. Later in this article, two scenarios will be used to demonstrate some useful configurations.

✅ Prerequisites: Getting PCP up and running

First, install the PCP packages:

$ sudo dnf install pcp pcp-system-tools

Then enable and start the core services:

$ sudo systemctl enable --now pmcd.service
$ sudo systemctl enable --now pmlogger.service

Verify both services are running:

$ systemctl status pmcd pmlogger

🔍 Understanding pmlogger and its configuration

PCP consists of two main components:

  • pmcd: collects live performance metrics from various agents.
  • pmlogger: archives these metrics over time for analysis.

The behavior of pmlogger is controlled by files in /etc/pcp/pmlogger/control.d/. The most relevant is local, which contains command-line options for how logging should behave.

Sample configuration:

$ cat /etc/pcp/pmlogger/control.d/local

You’ll see a line like:

localhost y y /usr/bin/pmlogger -h localhost ... -t 10s -m note

The -t 10s flag defines the logging interval—every 10 seconds in this case.

🔧 Scenario 1: High-frequency monitoring for deep analysis

Use case: Debugging a transient issue on a production server.
Goal: Change the logging interval from 10 seconds to 1 second.

Edit the file (nano is used in the examples; please use your editor of choice):

$ sudo nano /etc/pcp/pmlogger/control.d/local

Change -t 10s to -t 1s.
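If you prefer a non-interactive edit, a one-line sed can make the same change; this is a minimal sketch assuming the line still contains the default -t 10s option:

$ sudo sed -i 's/-t 10s/-t 1s/' /etc/pcp/pmlogger/control.d/local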

Restart the logger:

$ sudo systemctl restart pmlogger.service

Verify:

$ ps aux | grep '[p]mlogger -h localhost'
$ pminfo -f

Expected output snippet:

records: 10, interval: 0:00:01.000

🪶 Scenario 2: Lightweight monitoring for constrained systems

Use case: Monitoring on a small VM or IoT device.
Goal: Change the logging interval to once every 60 seconds.

Edit the same file:

$ sudo nano /etc/pcp/pmlogger/control.d/local

Change -t 10s to -t 60s.

Restart the logger:

$ sudo systemctl restart pmlogger.service

Confirm:

$ ps aux | grep '[p]mlogger -h localhost'
$ pminfo -f

Expected output:

records: 3, interval: 0:01:00.000

🧹 Managing data retention: logs, size, and cleanup

PCP archives are rotated daily by a cron-like service. Configuration lives in:

$ cat /etc/sysconfig/pmlogger

Default values:

PCP_MAX_LOG_SIZE=100
PCP_MAX_LOG_VERSIONS=14
  • PCP_MAX_LOG_SIZE: total archive size (in MB).
  • PCP_MAX_LOG_VERSIONS: number of daily logs to keep.

Goal: Keep logs for 30 days.

Edit the file:

$ sudo nano /etc/sysconfig/pmlogger

Change:

PCP_MAX_LOG_VERSIONS=30

No service restart is required. Changes apply during the next cleanup cycle.

🏁 Final thoughts

PCP is a flexible powerhouse. With just a few changes, you can transform it from a general-purpose monitor into a specialized tool tailored to your workload. Whether you need precision diagnostics or long-term resource tracking, tuning pmlogger gives you control and confidence.

So go ahead—open that config file and start customizing your system’s performance story.

Note: This article is dedicated to my wife, Rupali Suraj Patil, who inspires me every day.

some ai thoughts around ansible lightspeed

Posted by Kevin Fenzi on 2025-06-27 01:30:58 UTC

To take a short break from datacenter work, I have been meaning to look into ansible lightspeed for a long time, so I finally sat down and took an introductory course and have some thoughts about how we might use it in on fedora's ansible setup.

The official name of the product is: "Red Hat Ansible Lightspeed with IBM watsonx Code Assistant", which is a bit... verbose, so I will just use 'Lightspeed' here.

This is one of the very first AI products Red Hat produced, so its been around a few years. Some of that history is probibly why it's specifically using watsonx instead of some other LLMs on the backend.

First a list of things I really like about it:

  • It's actually trained on real, correct, good ansible content. It's not a 'general' LLM trained on the internet, it's using some ansible galaxy content (you can opt out if you prefer) as well as a bunch of curated content from real ansible. This always struck me as one of the very best ways to leverage LLMs instead of general hoover in any data and use it. In this case it really helps make the suggestions and content more trustable and less hallucinated.

  • Depending on the watsonx subscription you have, you may train it on _your_ ansible content. Perhaps you have different standards than others or particular ways you do things. You can train it on them and actually get it to give you output that uses that.

  • Having something be able to generate a biolerplate for you that you can review and fix up is also a really great use for llms, IMHO.

And some things I'm not crazy about:

  • It requires AAP (ansible automation platform) and watsonx licenses. (mostly, see below). It would be cool if it could leverage a local model or Red Hat AI in openshift instead of watsonx, but as noted above it's likely tied to that for historical reasons.

  • It uses a vscode plugin. I'm much more a vim type old sysadmin, and the idea of making a small ansible playbook thats just a text file seems like vscode is... overkill. I can of course see why they choose to implement things this way.

And something I sure didn't know: There's an ansible code bot on github. It can scan your ansible git repo and file a PR to bring it in line with best practices. Pretty cool. We have a mirror of our pagure ansible repo on gitub, however, it seems to be not mirroring. I want to sort that out and then enable the bot to see how it does. :)

Staging Datacenter Move outage

Posted by Fedora Infrastructure Status on 2025-06-26 10:00:00 UTC

We will be moving staging services and applications from our IAD2 datacenter to a new RDU3 one.

This only affects staging, end user services are unaffected.

Contributors who need to use staging are advised to wait until after the outage window to resume work and report issues with services.

Potential failures in the F43 mass rebuild

Posted by Yaakov Selkowitz on 2025-06-26 00:00:00 UTC

There are less than four weeks until the Fedora 43 mass rebuild. While mass rebuilds inevitably result in some breakage, the following issues appear likely to be common causes of FTBFS or FTI:

Missing sysusers.d configs

RPM now generates virtual dependencies on user(USERNAME) and/or group(GROUPNAME) under various conditions (e.g. a file in %files is listed with such a user/group ownership). In order to resolve these dependencies, packages must now provide sysusers.d config files which are automatically parsed to generate similar virtual provides. Without these configs, a newly rebuilt package will FTI, and anything which buildrequires it will fail to build for being unable to install its dependencies. See the Change documentation for more details.
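For illustration, a minimal sysusers.d config looks something like the following; the package, user, and path names here are hypothetical:

# /usr/lib/sysusers.d/myservice.conf (hypothetical)
# Type  Name       ID  GECOS                Home                 Shell
u       myservice  -   "My Service daemon"  /var/lib/myservice   /usr/sbin/nologin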

Python 3.14

While the Python team has already rebuilt many Python-dependent packages for 3.14, some packages still need to be fixed for changes in the new version (see What’s New in Python 3.14). Furthermore, packages which use Python only at build-time (e.g. for build scripts, or running tests, etc.) have yet to be rebuilt, and may need to be fixed as well.

Aclocal macros moved in gettext 0.25

In previous versions of gettext-devel, its various aclocal macros were installed in the default macro search path, and therefore were generally found by autoreconf without any effort, or even when they shouldn’t have been. However, that actually caused conflict with autopoint when trying to pin an older macros version with AM_GNU_GETTEXT_VERSION. With gettext 0.25, these macros are now located in a private path, and while autopoint still works as designed, other (technically unsupported) use cases have broken as a result, such as the use of AM_ICONV by itself. The workaround for such cases is to export ACLOCAL_PATH=/usr/share/gettext/m4/ before autoreconf.
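Expressed as commands, the workaround looks like this (the exact autoreconf flags depend on the package):

$ export ACLOCAL_PATH=/usr/share/gettext/m4/
$ autoreconf -fiv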

X11 disablement in mutter breaks xwfb-run

As part of the ongoing deprecation of the X11 servers, many packages previously switched from xvfb-run to xwfb-run (part of xwayland-run), the latter requiring a Wayland compositor to be specified. The next step of that for F43 was disabling the X11 session support in the GNOME desktop, which included disabling the X11 window management support in mutter. Many GNOME packages, and all such packages in the RHEL/ELN set, are using mutter as the compositor. (Packages outside of the GNOME and RHEL sets may use weston instead and are not impacted.)

Unfortunately, due to a bug in mutter before 49.alpha, disabling X11 also mistakenly disabled the --no-x11 option, which (counterintuitively) is not the opposite of the now-disabled --x11 argument (to run as an X11 window manager) but disables the launching of an XWayland server. With the option disabled but XWayland support enabled, mutter always tries to launch XWayland even where it shouldn’t (such as in a mock chroot, where it doesn’t work). xwayland-run relied on this option to avoid that, but with it not working, any use of xwfb-run -c mutter (or the like) is currently failing.

This should be fixed as part of the 49~alpha.0 update which is currently in testing but blocked by additional packages needing to be updated in tandem. Hopefully the Workstation WG can get this fixed in time for the F43 mass rebuild.

glibc 2.42 changes

The development version of glibc now in rawhide/F43 includes a few potentially breaking changes:

  • The ancient <termio.h> and struct termio interfaces have been removed. Using <sys/ioctl.h> and/or <termios.h>, and struct termios, instead should cover the common use cases.
  • lockf now has a warn_unused_result attribute (commit), which will generate warnings (or promoted errors) in code where the result is ignored.

GCC 15

While many of the changes in GCC 15 were handled in the aftermath of the previous F42 mass rebuild, there are still some packages which have yet to be rebuilt. In particular, mingw-gcc 15 did not land until after the mass rebuild, so some mingw packages may see failures. (For those with corresponding native packages, hopefully the fix is already available.) Also, some later changes in GCC 15 (after the mass rebuild) may cause failures (one example).

Flaky tests

All too often, builds fail in %check due to flaky or architecture-dependent tests. Rerunning an entire build (sometimes multiple times!) just in order to get tests to pass is a waste of time and resources. Please consider moving tests out of the spec file and into CI instead.

Note:

This list was generated by AI – just kidding! Actually, this is the result of an early mass rebuild in ELN, which was done with the express purpose of finding such issues in advance and getting a head start on fixing them. Hopefully this information, and the fixes provided along the way (linked in the tracker), will help maintainers have a smoother mass rebuild experience.

Your only obligations are the promises you make

Posted by Ben Cotton on 2025-06-25 12:00:00 UTC

One of the realities of creating open source software is that people will come along and say you must do something. Often, this happens to be a something that’s very valuable to them. If you’re lucky, they’ll help you do it. Much of the time, though, that’s not the case. But no matter what users or best practices say, the “O” in “FOSS” still does not stand for “obligation”. Unless you’ve committed to doing something, you don’t have to do it.

One good example is having a process for people to privately report security bugs. This is widely accepted as a best practice because it allows for vulnerabilities to be fixed before they’re broadly known. Although this doesn’t entirely eliminate the possibility of a bad actor taking advantage of the vulnerability, it reduces the risk. But that process adds overhead for maintainers, and it puts them in a position to make a fix by a particular deadline. For volunteer maintainers in particular, this can overwhelm the rest of the project work.

This is the conclusion that Nick Wellnhofer, the (sole) maintainer of libxml2, recently reached. He decided to no longer treat security reports as special. Instead, they will be treated just like any other bug report.

The reactions that I’ve seen are almost universally supportive. This is the right approach, especially considering that libxml2 “never had the quality to be used in mainstream browsers or operating systems to begin with.” Just because big companies decided to start using a project, that doesn’t mean the maintainers suddenly have to produce production-quality software. Wellnhofer set the expectations clearly, and that’s the only obligation that he has to meet. If that’s not acceptable to downstreams, they are free to use another library or to make their own.

This post’s featured photo by Andrew Petrov on Unsplash.

The post Your only obligations are the promises you make appeared first on Duck Alignment Academy.

🔧 Deep dive into sosreport: understanding the data pack layout in Fedora & RHEL

Posted by Fedora Magazine on 2025-06-25 08:00:00 UTC

This article will describe the content and structure of the sosreport output. The aim is to improve its usefullness through a better understanding of its contents.

🧰 What is sosreport?

sosreport is a powerful command-line utility available on Fedora, Red Hat Enterprise Linux (RHEL), CentOS, and other RHEL-based systems to collect a comprehensive snapshot of the system’s configuration, logs, services, and state. The primary use is for diagnosing issues, especially during support cases with Red Hat or other vendors.

When executed, sosreport runs a series of modular plugins that collect relevant data from various subsystems like networking, storage, SELinux, Docker, and more. The resulting report is packaged into a compressed tarball, which can be securely shared with support teams to expedite troubleshooting.

In essence, sosreport acts as a black box recorder for Linux — capturing everything from system logs and kernel messages to active configurations and command outputs — helping support engineers trace problems without needing direct access to the system.

🛠 How to Generate a sosreport

To use sosreport on Fedora, RHEL, or CentOS, run the following command as root or with sudo:

sudo sosreport

This command collects system configuration, logs, and command outputs using various plugins. After a few minutes, it generates a compressed tarball in /var/tmp/ (or a similar location), typically named like:

sosreport-hostname-20250623-123456.tar.xz

You may be prompted to enter a case ID or other metadata, depending on your system configuration or support workflow.

The sosreport-generated tarball contains a detailed snapshot of the system’s health and configuration. It has a well-organized structure which reflects the data collected from the myriad Linux subsystems.

Exploring sosreport output is challenging due to the sheer volume of logs, configuration files, and system command outputs it contains. However, understanding its layout is key for support engineers and sysadmins to quickly locate and interpret crucial diagnostic information.
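For a first orientation, unpack the archive and check the collector’s own log; the archive name below is the example from above, and the unpacked directory name usually matches it:

$ tar xf sosreport-hostname-20250623-123456.tar.xz
$ cd sosreport-hostname-20250623-123456/
$ less sos_logs/sos.log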

📁 sosreport directory layout

When the tarball is unpacked, the directory structure typically resembles this:

.
├── ./boot
├── ./etc
├── ./lib -> usr/lib
├── ./opt
├── ./proc
├── ./run
├── ./sos_commands
├── ./sos_logs
├── ./sos_reports
├── ./sos_strings
├── ./sys
├── ./usr
├── ./var
└── ./EXTRAS

CORE Breakdown:

  • Most directories mimic a standard Linux root filesystem and primarily contain configuration files.
  • The directories that don’t appear in a regular root filesystem include:
    • sos_commands
    • sos_logs
    • sos_reports
    • sos_strings
    • EXTRAS

🔍 Key directories in detail

🔊 sos_commands/

This contains output from commands executed by each plugin. Its structure is plugin-specific:

./sos_commands/
├── apparmor/
├── docker/
├── memory/
├── networkmanager/
├── process/
│ ├── lsof_M_-n_-l_-c
│ ├── pidstat_-p_ALL_-rudvwsRU_--human_-h
│ ├── ps_auxwwwm
│ └── pstree_-lp

Each file name matches the Linux command used, with all options. The contents are the actual command output, making the plugin behavior transparent.

📊 sos_reports/

This directory contains multiple formats that index and summarize the entire sosreport:

  • sos.json: A machine-readable index of all collected files and commands.
  • manifest.json: Describes how sosreport executed – timestamps, plugins used, obfuscation done, errors, etc.
  • HTML output for easy browsing via browser.

📃 sos_logs/

Contains logs from the execution of sosreport itself.

  • sos.log: Primary log file that highlights any errors or issues during data collection.

📰 sos_strings/

  • Contains journal logs for up to 30 days, extracted using journalctl
  • Can be quite large, especially on heavily used systems
  • Structured into subdirectories like logs/ or networkmanager/

🔒 EXTRAS/

This is not a default part of a sosreport. It is created by the sos_extras plugin and is used to collect custom user-defined files.

🚀 Why this layout matters

  • Speed: Logical grouping of directories helps engineers drill down without manually parsing gigabytes of log files.
  • Traceability: Knowing where each file came from and what command produced it enhances reproducibility.
  • Automation: Tools like soscleaner or sos-analyzer rely on this structure for automated diagnostics.

✅ Final thoughts

While sosreport is a powerful diagnostic tool, its effectiveness hinges on understanding its structure. With familiarity, engineers can isolate root causes of failures, uncover misconfigurations, and collaborate more efficiently with support teams. If you haven’t yet opened one up manually, try it — there’s a lot to learn from the insides!

This is my first Fedora Magazine article, dedicated to my wife Rupali Suraj Patil — my constant source of inspiration.

Using LXCFS together with Podman

Posted by Evgeni Golov on 2025-06-24 19:46:34 UTC

JP was puzzled that using podman run --memory=2G … would not result in the 2G limit being visible inside the container. While we were able to identify this as a visualization problem — tools like free(1) only look at /proc/meminfo and that is not virtualized inside a container, you'd have to look at /sys/fs/cgroup/memory.max and friends instead — I couldn't leave it at that. And then I remembered there is actually something that can provide a virtual (cgroup-aware) /proc for containers: LXCFS!

But does it work with Podman?! I always used it with LXC, but there is technically no reason why it wouldn't work with a different container solution — cgroups are cgroups after all.

As we all know: there is only one way to find out!

Take a fresh Debian 12 VM, install podman and verify things behave as expected:

user@debian12:~$ podman run -ti --rm --memory=2G centos:stream9
bash-5.1# grep MemTotal /proc/meminfo
MemTotal:        6067396 kB
bash-5.1# cat /sys/fs/cgroup/memory.max
2147483648

And after installing (and starting) lxcfs, we can use the virtual /proc/meminfo it generates by bind-mounting it into the container (LXC does that part automatically for us):

user@debian12:~$ podman run -ti --rm --memory=2G --mount=type=bind,source=/var/lib/lxcfs/proc/meminfo,destination=/proc/meminfo centos:stream9
bash-5.1# grep MemTotal /proc/meminfo
MemTotal:        2097152 kB
bash-5.1# cat /sys/fs/cgroup/memory.max
2147483648

The same of course works with all the other proc entries lxcfs provides (cpuinfo, diskstats, loadavg, meminfo, slabinfo, stat, swaps, and uptime here), just bind-mount them.
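For example, here is the same command extended to mount a few more of the provided entries alongside meminfo (the same pattern, just more --mount flags):

user@debian12:~$ podman run -ti --rm --memory=2G \
    --mount=type=bind,source=/var/lib/lxcfs/proc/meminfo,destination=/proc/meminfo \
    --mount=type=bind,source=/var/lib/lxcfs/proc/cpuinfo,destination=/proc/cpuinfo \
    --mount=type=bind,source=/var/lib/lxcfs/proc/uptime,destination=/proc/uptime \
    centos:stream9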

And yes, free(1) now works too!

bash-5.1# free -m
               total        used        free      shared  buff/cache   available
Mem:            2048           3        1976           0          67        2044
Swap:              0           0           0

Just don't blindly mount the whole /var/lib/lxcfs/proc over the container's /proc. It did work (as in: "bash and free didn't crash") for me, but with /proc/$PID etc missing, I bet things will go south pretty quickly.

The Negative Consequences of LLMs

Posted by Tony Asleson on 2025-06-24 14:49:26 UTC
Large Language Models (LLMs) are powerful tools, but their unchecked proliferation carries technical, ethical, and professional consequences for software developers and the industry at large.

Is Copilot useful for kernel patch review?

Posted by Hans de Goede on 2025-06-23 13:46:22 UTC
Patch review is an important and useful part of the kernel development process, but it is also a time-consuming part. To see if I could save some human reviewer time I've been pushing kernel patch-series to a branch on github, creating a pull-request for the branch and then assigning it to Copilot for review. The idea being that I would fix any issues Copilot catches before posting the series upstream, saving a human reviewer from having to catch them.

I've done this for 5 patch-series: one, two, three, four, five, 53 patches in total. Click the number to see the pull-request and Copilot's reviews.

Unfortunately the results are not great: on 53 patches Copilot had 4 low-confidence comments, which were not useful, and 3 normal comments. 2 of the normal comments were on the power-supply fwnode series; one was about spelling degrees Celcius as degrees Celsius instead, which is the single valid remark. The other remark was about re-assigning a variable without freeing it first, but Copilot missed that the re-assignment was to another variable, since this happened in a different scope. The third normal comment (here) was about as useless as they come.

To be fair these were all patch-series written by me and then already self-reviewed and deemed ready for upstream posting before I asked Copilot to review them.

As another experiment I did one final pull-request with a couple of WIP patches to add USBIO support from Intel. Copilot generated 3 normal comments here, all 3 of which are valid, and one of them catches a real bug. Still, given the WIP state of this case and the fact that my own review has found a whole lot more than just this, including the need for a bunch of refactoring, the results of this Copilot review are also disappointing IMHO.

Copilot also automatically generates summaries of the changes in the pull-requests. At first look these seem useful for e.g. a cover-letter for a patch-set, but they are often full of half-truths, so at a minimum they need some very careful editing / correcting before they can be used.

My personal conclusion is that running patch-sets through Copilot before posting them on the list is not worth the effort.


System insights with command-line tools: free and vmstat

Posted by Fedora Magazine on 2025-06-23 08:00:00 UTC

In this fifth article of the “System insights with command-line tools” series we explore free and vmstat, two small utilities that reveal a surprising amount about your Linux system’s health. free gives you an instant snapshot of how RAM and swap are being used. vmstat (the virtual memory statistics reporter) reports a real-time view of memory, CPU, and I/O activity.

By the end of this article you will be able to translate buffers and cache into “breathing room”, read the mysterious available column with confidence, and spot memory leaks or I/O saturation.

A quick tour of free

Basic usage

$ free -h
               total        used        free      shared  buff/cache   available
Mem:            23Gi        14Gi       575Mi       3,3Gi        12Gi       8,8Gi
Swap:          8,0Gi       6,6Gi       1,4Gi

free parses /proc/meminfo and prints totals for physical memory and swap, along with kernel buffers and cache. Use -h for human-readable units, -s 1 to refresh every second, and -c N to stop after N samples, which is handy for capturing a trend while doing something in parallel. For example, free -s 60 -c 1440 gives a 24-hour CSV-friendly record without installing extra monitoring daemons.
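If you want timestamps on each sample, a minimal shell loop using only standard tools does the trick; the log file name is just an example:

$ while true; do
    # one line per minute: ISO timestamp, used and available MiB
    echo "$(date -Is) $(free -m | awk '/^Mem:/ {print "used=" $3, "available=" $7}')"
    sleep 60
done >> memory-trend.log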

Free memory refers to RAM that is entirely unoccupied. It isn’t being used by any process or for caching. On server systems, I tend to view this as wasted since unused memory isn’t contributing to performance. Ideally, after a system has been running for some time, this number should remain low.

Available memory, on the other hand, represents an estimate of how much memory can be used by new or running processes without resorting to swap. It includes free memory plus parts of the cache and buffers that the system can reclaim quickly if needed.

In essence, the distinction in Linux lies here: free memory is idle and unused, while available memory includes both truly free space and memory that can be readily freed up to keep the system responsive without swapping. Low free memory is not a problem in itself; available memory is usually what to be concerned about.

A healthy system might even show used ≈ total yet available remains large; that mostly reflects cache at work. Fedora’s kernel will automatically drop clean cache pages whenever an application needs the space, so cached memory is not wasted. Think of it as a working set that just hasn’t been reassigned yet.

Spotting problems with free

  • Rapidly shrinking available combined with rising swap used indicates real memory pressure.
  • Large swap-in/out spikes point to thrashing workloads or runaway memory consumers.

vmstat – Report virtual memory statistics

vmstat (virtual memory statistics) displays processes, memory, paging, block-I/O, interrupts, context switches, and CPU utilization in a single line. Run it with an interval and count to watch trends (output shown below has been split into three sections for better readability):

$ vmstat 1 3
procs -----------memory----------
 r  b    swpd    free  buff    cache
 2  0 7102404 1392528    36 12335148
 0  0 7102404 1392560    36 12335188
 0  0 7102404 1373640    36 12349928

---swap-- -----io----
 si  so   bi   bo
  8  21  130  724
  0   0    0    0
  0   0    8   48

-system-- -------cpu-------
   in   cs us sy id wa st gu
 2851   19 15  7 77  0  0  0
 5779 7246 14 10 77  0  0  0
 5141 6525 12  9 79  0  0  0

Anatomy of the output

From the vmstat(8) manpage:

Procs
r: The number of runnable processes (running or waiting
for run time).
b: The number of processes blocked waiting for I/O to
complete.

Memory
These are affected by the --unit option.
swpd: the amount of swap memory used.
free: the amount of idle memory.
buff: the amount of memory used as buffers.
cache: the amount of memory used as cache.
inact: the amount of inactive memory. (-a option)
active: the amount of active memory. (-a option)

Swap
These are affected by the --unit option.
si: Amount of memory swapped in from disk (/s).
so: Amount of memory swapped to disk (/s).

IO
bi: Kibibyte received from a block device (KiB/s).
bo: Kibibyte sent to a block device (KiB/s).

System
in: The number of interrupts per second, including
the clock.
cs: The number of context switches per second.

CPU
These are percentages of total CPU time.
us: Time spent running non-kernel code. (user time,
including nice time)
sy: Time spent running kernel code. (system time)
id: Time spent idle. Prior to Linux 2.5.41, this
includes IO-wait time.
wa: Time spent waiting for IO. Prior to Linux 2.5.41,
included in idle.
st: Time stolen from a virtual machine. Prior to
Linux 2.6.11, unknown.
gu: Time spent running KVM guest code (guest time,
including guest nice).

Practical diagnostics

Section   Key Fields                   What to watch
Procs     r (run-queue), b (blocked)   r > CPU cores = contention
Memory    swpd, free, buff, cache      Rising swpd with falling free = pressure
Swap      si, so                       Non-zero so means the kernel is swapping out
IO        bi, bo                       High bo + high wa hints at write-heavy workloads
System    in, cs                       Sudden spikes may indicate interrupt storms
CPU       us, sy, id, wa, st           High wa (I/O wait) = storage bottleneck


Catching a memory leak

Run vmstat 500 in one terminal while your suspect application runs in another. If free keeps falling and si/so climb over successive samples, physical RAM is being exhausted and the kernel starts swapping, which is classic leak behavior.

Finding I/O saturation

When wa (CPU wait) and bo (blocks out) soar while r remains modest, the CPU is idle but stuck waiting for the disk. Consider adding faster storage or tuning I/O scheduler parameters.

Detecting CPU over-commit

A sustained r that is double the number of logical cores with low wa and plenty of free means CPU is the bottleneck, not memory or I/O. Use top or htop to locate the busiest processes, or scale out workloads accordingly.
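As a quick check, here is a one-liner comparing the run queue against the core count; the 2x threshold mirrors the rule of thumb above and is purely illustrative:

$ vmstat 5 4 | tail -n +3 | awk -v c="$(nproc)" '$1 > 2*c {print "run queue", $1, "exceeds 2x", c, "cores"}'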

Conclusion

Mastering free and vmstat gives you a lens into memory usage, swap activity, I/O latency, and CPU load. For everyday debugging: start with free to check if your system is truly out of memory, then use vmstat to reveal the reason, whether it’s memory leaks, disk bottlenecks, or CPU saturation.

Stay tuned for the next piece in our “System insights with command-line tools” series and happy Fedora troubleshooting!

Empower your writing with Ramalama

Posted by Pavel Raiskup on 2025-06-23 00:00:00 UTC

Have you ever asked ChatGPT or other online AI services to review and correct your emails or posts for you? Have you ever pondered over what the service or company, such as OpenAI, does with the text you provide them “for free”? What are the potential risks of sharing private content, possibly leading to copyright headaches?

I am going to demonstrate how I use Ramalama and local models instead of ChatGPT for English corrections, on a moderately powerful laptop. The latest container-based tooling surrounding the Ramalama project makes running Large Language Models (LLMs) on Fedora effortless and free of those risks.

Installation and starting

Ramalama is packaged for Fedora! Just go with:

dnf install -y ramalama

It is a fairly swift action, I assure you. Do not be misled though. You will need to download gigabytes of data later on, during the Ramalama runtime :-)

I take privacy seriously; the more isolation the better. Even though Ramalama claims it isolates the model, I’d like to isolate Ramalama itself. Therefore, I create a new user called “ramalama” with the command:

sudo useradd ramalama && sudo su - ramalama

This satisfies me, as I trust the Ramalama/Podman teams not to escalate privileges (at least I don’t see any setuid bits, etc.).

Experimenting with models

ramalama run: error: the following arguments are required: MODEL

The tool does a lot for you, but you still need to research which model is most likely the one you require. I haven’t done extensive research myself, but I’ve heard rumors that Llama3 or Mistral are relatively good options for English on laptops. Lemme try:

$ ramalama run llama3
Downloading ollama://llama3:latest ...
Trying to pull ollama://llama3:latest ...
 99% |█████████████████████████████████████████████████████████████████████████ |    4.33 GB/   4.34 GB 128.84 MB/s
Getting image source signatures
Copying blob 97024d81bb02 done   |_
Copying blob 97971704aef2 done   |_
Copying blob 2b2bdebbc339 done   |_
Copying config d13a3de051 done   |_
Writing manifest to image destination
🦭 > Hello, who is there?
Hello! I'm an AI, not a human, so I'm not "there" in the classical sense. I exist solely as a digital entity,
designed to understand and respond to natural language inputs. I'm here to help answer your questions, provide
information, and chat with you about various topics. What's on your mind?

Be prepared for that large download (6GB in my case, not just the 4.3GB model; I’m including the Ramalama Podman image as well).

The command-line sucks here, server helps

Very early on, you’ll find that the command-line prompt doesn’t make it easy for you to type new lines; therefore, asking the model for help with composing multi-line emails isn’t straightforward. Yes, you can use Python’s multi\nline strings\nlike this one, but for this you’ll at least need a conversion tool.

I want a UI similar to ChatGPT’s, and it is possible!

$ ramalama serve llama3
OpenAI RESTAPI: http://localhost:8080
... starting ...

Getting this in web browser:

Model starting

But after a while:

...
srv  log_server_r: request: POST /v1/chat/completions 192.168.0.4 500
srv  log_server_r: request: GET / 192.168.0.4 200
srv  log_server_r: request: GET /props 192.168.0.4 200

 

The UI prompt in web browser

Admittedly, that’s just too much. I don’t need a full prompt UI; I’d still prefer a simple command-line interface that would let me provide multiline strings and get the model’s answer back. Nah, we need to package python-openai into Fedora (but it is not yet there).
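Until then, the OpenAI-compatible endpoint that ramalama serve exposes can be scripted directly. Here is a minimal sketch with curl; the port and endpoint match the serve example above, while the exact JSON fields accepted may vary by server version:

$ curl -s http://localhost:8080/v1/chat/completions \
    -H 'Content-Type: application/json' \
    -d '{"model": "llama3",
         "messages": [{"role": "user",
                       "content": "fix english:\nthe multi-line text\nthat I want to correct"}]}'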

Performance concerns

Both llama3 and Mistral respond surprisingly quickly. The reply starts immediately, and they print approximately 30 tokens per second. By contrast, Deepseek takes much longer to respond, approximately a minute, but the token rate is roughly equivalent.

I was surprised to find that while the GPU was fully utilized, NVTOP did not report any additional GPU memory consumption (before, during, or after the model provided the answer). Does anyone have any ideas as to why this might be the case?

NVTOP not reporting memory consumption

Summary

The mentioned models perform exceptionally well for my use-case. My interactions with the model look like:

fix english:
the multi-line text
that I want to correct

and the outputs are noticeably superior to the inputs :-).

More experimentation is possible with different models, temperature settings, RAG, and more. Refer to ramalama run --help for details.

However, I have been encountering some hardware issues with my Radeon 780M. If I run my laptop for an extended period, starting the prompt with a lengthy question can trigger a black screen situation, leaving no other interactions possible (reboot needed). If you have any suggestions on how to debug these issues, please let me know.

10 days to datacenter move

Posted by Kevin Fenzi on 2025-06-21 16:14:57 UTC
Scrye into the crystal ball

Super busy recently, focused on the datacenter move that's happening in just 10 days! (I hope).

datacenter move

Just 10 days left. We are not really where I was hoping to be at this point, but hopefully we can still make things work.

We got our power10 boxes installed and set up and... we have an issue. Some of our compose process uses vm's in builder guests, but the way we have the power10 set up, with one big linux hypervisor and guests on that, doesn't allow those guests to have working nested virt. Only two levels are supported. So, we are looking at options for early next week and hopefully we can get something working in time for the move. Options include getting a vHMC to carve out lpars, moving an existing power9 machine into place (at least for the move) for those needs, and a few more. I'm hoping we can get something working in time.

We are having problems with our arm boxes too. First there were strange errors on the addon 25G cards. That turned out to be a transceiver problem and was fixed Thursday. Then the addon network cards in them don't seem to be able to network boot, which makes installing them annoying. We have plans for workarounds there too for early next week: either connecting the onboard 1G nics, or some reprogramming of the cards to get them working, or some installs with virtual media. I'm pretty sure we can get this working one way or another.

On the plus side, tons of things are deployed in the new datacenter already and should be ready. Early next week we should have ipa clusters replicating. Soon we should also have the staging openshift cluster in place.

Monday, networking is going to do a resilience test on the networking setup there. This will have them take down one 'side' of the switches and confirm all our machines are correctly balancing over their two network cards.

Tuesday we have a 'go/no-go' meeting with IT folks. Hopefully we can be go and get this move done.

Next Wednesday, I am planning to move all of our staging env over to the new datacenter. This will allow us to have a good 'dry run' at the production move and also reduce the number of things that we need to move the following week. If you are one of the very small number of folks that use our staging env to test things, make a note that things will be down on Wednesday.

Then more prep work and last minute issues and on into switcharoo week.

Early Monday of that week, things will be shut down so we can move storage; then storage moves, we sync other data over, and bring things up. Tuesday will be the same for the build system side. I strongly advise contributors to just go do other things Monday and Tuesday. Lots of things will be in a state of flux. Starting Wednesday morning we can start looking at issues and fixing them up.

Thanks for everyone's patience during this busy time!

misc other stuff

I've been of course doing other regular things, but my focus has been on the datacenter move. Just one other thing to call out:

Finally we have our updated openh264 packages released for updates in stable fedora releases. It was a long sad road, but hopefully now we can get things done much much quicker. The entire thing wasn't just one thing going wrong or blocking stuff, it was a long series of things, one after another. We are in a much better state now moving forward though.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114722324993121031

Browser wars

Posted by Vedran Miletić on 2025-06-20 12:47:32 UTC


Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.

I was invited to give a popular lecture at the University departments open day, which is a part of the festival. This is the second time in a row that I got invited to give popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology caused by the fall of economy during 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.

The follow-up

Posted by Vedran Miletić on 2025-06-20 12:47:32 UTC


Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.

Open-source magic all around the world

Posted by Vedran Miletić on 2025-06-20 12:47:32 UTC


Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).

Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2025-06-20 12:47:32 UTC


Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012 University of Rijeka became NVIDIA GPU Education Center (back then it was called CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphical processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, finding paths in graphs, and other mathematical operations.

What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2025-06-20 12:47:32 UTC


Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of the topic, but instead, present the author's view on the state-of-the-art of a particular field.

The first of the two articles advocates open source and open data. It describes the Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second article questions the open-source software development practice, advocating the usage and development of proprietary software instead. I will dissect and counter some of the key points from the second article below.

On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2025-06-20 12:47:32 UTC


Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended the 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced an acronyms page, among other things. During a coffee break, there was a discussion on the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop which somehow remained untold, and I wanted to tell it at some point. One of the attendants, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, the reported bugs get fixed quicker, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.

AMD and the open-source community are writing history

Posted by Vedran Miletić on 2025-06-20 12:47:32 UTC


Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards having fully open source drivers on Linux. AMD did not walk alone; they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told on the Freenode channel #radeon that this is not the case and found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement hardly comes as a surprise: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, the public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.