
Fedora People

Discussing RTO in my Genesi t-shirt...

Posted by Peter Czanik on 2026-04-16 07:58:48 UTC

This Monday I talked to a couple of friends about work while wearing my Genesi t-shirt: a teacher going back to school after Spring break, and an IT guy describing the looming threat of RTO. I love coincidences :-) Why do I say that?

Genesi t-shirt

As I wrote a few years ago about working from home: “After graduating from university, I worked from home for a small US-based company. I never met my boss while working there and met only one of my colleagues at a conference in Brussels. I eventually met my boss some seven years later, when I gave a talk at a conference in Washington, D.C.” The company was Genesi, and that was the work culture which defines me. I received the t-shirt in the photo during my visit to Washington, D.C. Luckily, I’m still living mostly this way, visiting the office 1-2 times a week: working hybrid.

Imagine the contrast I felt when I realized that I was talking to someone who works on a very strict, fixed schedule. For a teacher, vacation is only possible when there is no school, like during Spring break in Hungary last week; the schedule is fixed all year round. Compare that to my Genesi years: no regular meetings, communicating by e-mail & chat, and working when it was the right time for me: sometimes in the morning, other days during the night. It was fantastic, especially with small kids. I have been working flexible hours ever since, limited only by meetings.

COVID made remote work less of a niche, sometimes even mandatory. Many people in IT started to work remotely. Most of our work does not require a fixed place or time. Online meetings became the norm, and teams are often not location-based anymore but scattered around the globe. As long as you have an Internet connection and a noise-canceling microphone, you can join a meeting from anywhere, even from the top of a mountain. It is easy to get used to this flexibility and very difficult to give it up.

RTO became a periodic threat. It’s a lot cheaper to announce RTO and let people leave voluntarily than to lay them off. Quite a few friends write me every once in a while that they have to return to the office in a few weeks’ time. Then, a few weeks later, they happily share that they got an exemption, so the company does not want them to leave…

Wearing my Genesi t-shirt, all these problems feel so distant. I hope that it stays this way!

Using LLM for Code Review in storaged Projects

Posted by Vojtěch Trefný on 2026-04-16 07:13:00 UTC

For a couple of months now, we have been experimenting with using AI/LLM tools for code review for our projects under the storaged.org umbrella. We originally started with testing various tools for pull request reviews available on GitHub. For that, we are currently using CodeRabbit.

But the newly added code is not the only code that should be reviewed. Our projects have a long history: the first version of libblockdev was released in 2014, and the history of UDisks goes back to 2008. The code, of course, went through many changes and rewrites since then, and many people reviewed it. And we have tests and use static analysis and other tools to ensure code quality. But that doesn’t mean there are no bugs or other issues. We know there are: our users report them somewhat regularly.

Libblockdev and UDisks together contain about 100,000 lines of C code. Doing a full review of all that code would be a huge undertaking. For a human. A machine can do the review in a matter of minutes.

This is a good place to stop and address some questions and concerns you might have. Yes, LLMs, or what most people call AI these days, are not perfect: they make mistakes, and results can be unpredictable or completely hallucinated. These are valid concerns, but for this specific use case – reviewing existing code written by humans – they are less of a problem. Each issue found was first reviewed by a developer to make sure it is actually real, and the proposed fix was also reviewed before applying it. The resulting change then went through the standard process before inclusion in the project: a pull request with a review by another developer.

We are also well aware that the code having been reviewed by these tools doesn’t mean there aren’t any issues anymore. But the issues that were found and fixed were real issues and fixing even one issue in any software is always a net positive regardless of whether the issue was found by tests, static analysis, a quality engineer or in this case an LLM/AI tool.

With this out of the way, let’s move to the actual review process, and more importantly, the findings.

The review

The review process itself is pretty simple. We are currently using Anthropic’s Claude with the Opus 4.6 model. We did experiment with other tools and models as well, but so far Opus 4.6 seems to produce the best results (as of March 2026).

We used some basic phrases as prompts, without asking for any specific areas to review. A basic variant of “do code review of the existing code base” was enough for our use case (more about prompts and some unexpected results later). The produced review is usually a formatted numbered list of issues sorted by priority and well documented, with arguments supporting the findings. The short description of the issue was usually good enough for deciding whether the issue is real or a false positive.

Crypto: Wrong strerror_l usage for ioctl errors in OPAL

File: src/plugins/crypto.c:3911

ioctl returns -1 and sets errno, but the code uses strerror_l(-ret, ...) which gives strerror_l(1, ...) (always EPERM).

Fix: Use strerror_l(errno, c_locale).

Example of an issue found in libblockdev
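The bug class in this finding is easy to reproduce in isolation. The sketch below is not the actual crypto.c code (and uses plain strerror instead of strerror_l, to avoid locale setup): because ioctl-style calls return -1 and report the real error in errno, formatting the negated return value always produces the message for error code 1, EPERM.

```c
#include <errno.h>
#include <string.h>

/* Minimal sketch of the bug class, NOT the actual libblockdev code.
 * ioctl () returns -1 on failure and sets errno, so -ret is always 1. */

/* buggy pattern: formats -ret, which is 1 (EPERM) for any failure */
static const char *
report_buggy (int ret)
{
  return strerror (-ret);
}

/* fixed pattern: formats the errno value saved right after the call */
static const char *
report_fixed (int saved_errno)
{
  return strerror (saved_errno);
}
```

Whatever actually failed, the buggy variant yields the EPERM message; only the saved errno identifies the real error.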

One thing we noticed quite quickly was that one review is not enough. The model seems to stop after a certain number of issues, and after fixing these, running the prompt again produces a new set of issues. We did a number of runs for each project (both with and without a new context), and by the last run, the number of real issues found was close to zero. As the number of real issues in the code decreased, false positives and “nitpicky” issues began to outnumber genuine new findings in the reports. This is understandable and kind of expected behavior, and we’ve definitely seen human reviewers behave similarly when reviewing code that didn’t have any glaring issues to point out.

We also experimented with limiting the review scope in the prompt: either to one specific module or plugin, or even to just one file. This approach seems to produce slightly better results within the limited scope, but also produces more false-positive reports as the model doesn’t typically recognize relationships and implications at a global scale. But even with this caveat, this is still very well worth the effort.

Findings

Now, let’s talk about the issues that were found and fixed. In total, in all of the “review runs”, 235 issues were reported: 110 in libblockdev and 125 in UDisks. (We didn’t stop at these projects, our other projects went through the same process, but for the sake of simplicity, we’ll focus only on these two here.)

Out of these, the largest group, 41 in total, was related to improper resource handling, mostly memory leaks, closely followed by various error handling issues. And because none of the developers working on these projects are native speakers, third place belongs to typo, grammar, and documentation fixes, with 27 issues.

The “winners” here show where most of the existing issues in our code are hidden: in various error paths (upon closer inspection, most of the memory leaks were located in the error paths as well). This makes sense. These paths are hard to cover in automated tests, so finding issues in them mostly relies on manual testing, static analysis and, of course, code review.

Fixing grammar and typos might not seem that important, but these also include public API documentation where a misunderstanding about usage or memory ownership can lead to bugs in other projects using this API. And we even found a few nonsensical error messages that would definitely confuse users when shown.

Crypto: Stale strerror_l(0) produces “Success” in error message

File: src/plugins/crypto.c:2255

ret is 0 (from successful crypt_load), so the error message reads “Label can be set only on LUKS 2 devices: Success”.

Fix: Remove the strerror_l call from this error message (it is a type-check error, not a syscall error).

Again in libblockdev. This very much resembles the famous "Task failed successfully" error message.
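The pattern is again simple to demonstrate outside the project. In the hypothetical sketch below (not the real crypto.c code, and using plain strerror instead of strerror_l), ret is 0 because the preceding call succeeded, so appending strerror (-ret) tacks the “no error” message (on glibc, “Success”) onto a genuine error message:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the "Task failed successfully" bug above,
 * NOT the actual crypto.c code. */

/* buggy: appends strerror even though no syscall failed (ret == 0) */
static void
format_buggy (char *buf, size_t len, int ret)
{
  snprintf (buf, len, "Label can be set only on LUKS 2 devices: %s",
            strerror (-ret));
}

/* fixed: a type-check error carries no errno, so no strerror suffix */
static void
format_fixed (char *buf, size_t len)
{
  snprintf (buf, len, "Label can be set only on LUKS 2 devices");
}
```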

And what about CVEs?

Memory management, error paths and grammar. Important issues and it’s good these are being fixed, but what about something more serious?

During our experiments with AI review, we didn’t find any serious flaws or security issues. That, of course, doesn’t mean there are none in our code, but if we knew about issues like these, we wouldn’t be doing this exercise. What we can do is test the tools on a security issue we know exists. Or to be more precise, a security issue that existed.

In February, two related security issues were reported against UDisks – CVE-2026-26103 and CVE-2026-26104. Both were caused by missing authorization checks in some UDisks public functions. UDisks is a daemon that runs with root privileges and allows unprivileged users to perform certain operations when some conditions (based on the operation, the type of device, the type of session the user is logged in to, etc.) are met. We use Polkit for this. And in these two cases, the authorization check was missing from the code. This went through review by two developers, and neither of us noticed. Now the question is: would AI notice?

The short answer is: yes, it would:

Missing PolicyKit Authorization on LUKS Header Backup/Restore

Files:

  • src/udiskslinuxencrypted.c:1356-1442 (handle_header_backup)
  • src/udiskslinuxblock.c:4238-4312 (handle_restore_encrypted_header)

Both D-Bus method handlers perform privileged operations without any PolicyKit authorization check. Compare with other handlers in the same file (lines 524, 784, 974, 1133, 1288) which all call udisks_daemon_util_check_authorization_sync(). There are also no PolicyKit action IDs defined in data/org.freedesktop.UDisks2.policy.in for these operations.

Severity: CRITICAL

Both CVEs clearly pointed out in the review
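Stripped of all the D-Bus and Polkit machinery, the bug class behind both CVEs is the authorize-before-act pattern sketched below. All names here are illustrative, not the real UDisks API; in the actual fix, the gate is a call to udisks_daemon_util_check_authorization_sync() with a Polkit action ID.

```c
#include <stdbool.h>

/* Illustrative sketch only -- not the real UDisks API. A privileged
 * method handler must consult the authorization layer before acting. */

static bool caller_is_authorized;   /* stands in for the Polkit decision */

/* buggy: performs the privileged operation unconditionally */
static int
handle_header_backup_buggy (void)
{
  return 0;     /* "success" even for unauthorized callers */
}

/* fixed: rejects unauthorized callers before any privileged work */
static int
handle_header_backup_fixed (void)
{
  if (!caller_is_authorized)
    return -1;
  return 0;
}
```

The reviewer's job (human or AI) is spotting that one handler in a file skips the check its siblings all perform.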

But even if this result looks impressive, it was not that easy. As I already mentioned, running the reviews multiple times and with slightly different prompts can change the results a lot. And the same happened here. The first two general “do code review” prompts did not uncover these issues. For the third run (with a new context), I explicitly asked to focus on potential security issues, and the tool happily obliged:

I’ll conduct a security-focused code review of the UDisks codebase. Since this is a privileged system daemon, security issues are critical. Let me explore several attack surface areas in parallel.

and one of the spawned agents explicitly did “review privilege and auth issues” in order to finally find this.

So even AI isn’t infallible and doesn’t just spot everything every time. On the other hand, neither did two senior software engineers working on the project when reviewing the pull request introducing this, so maybe we shouldn’t start throwing stones either.

What is “code review” and what exactly is “code”?

As mentioned multiple times already, the exact prompt wording matters. We have found that what we simply refer to as “code review” can lead to different results based on the wording and context.

Asking simply to “Do a code review on this project” usually leads to a high-level description of the project with a few key findings. These might include some specific issues and bugs, but more often the result is just a summary of the overall “code health” of the project. In our case, it praised our projects for clean and readable code and good test coverage. This is nice to read, but doesn’t help much.

Overview

UDisks is a well-maintained ~160-file C project (v2.12.0) implementing a root-running storage management daemon using GLib/GObject and D-Bus. The codebase is in active maintenance with recent work focused on stability and safety hardening rather than new features.

Security (Strong)

For a daemon running as root with direct hardware access, the security posture is excellent:

Code Quality Observations

Positives:

  • Consistent coding style (GNU C, spaces-only, proper emacs modelines)
  • Naming conventions (UDisks prefix, snake_case functions) applied uniformly

Technical Debt:

  • ~80+ TODO/FIXME/XXX comments scattered across the codebase. Notable clusters:

Summary

This is a mature, security-conscious codebase. The main areas for improvement are clearing the TODO backlog and tightening a few intermediate NULL checks in getter chains. No critical issues found.

Code review of UDisks. Some TODOs and FIXMEs, but other than that it's perfect, right?

But if we ask to “Do a code review of the existing code base and report any issues or bugs you find” instead, it actually goes through the code more thoroughly and reports individual issues. And suddenly, our great project now has 16 issues, three of them high priority.

Summary

16 issues found across the daemon core, modules, client library, and CLI tool.

Severity  Count  Key areas
High      3      NULL derefs in partition ops (daemon crash), use-after-free in LVM2, double-close in LSM
Medium    9      Dead code in flag checks, empty DM name, insecure passphrase handling, memory leaks, wrong D-Bus completion
Low       4      Missing bounds checks, minor memory leaks in CLI tool, TOCTTOU race

16 new issues found. Not that perfect after all.

Another interesting thing we found is that “code” in a code review often doesn’t include tests. This might be a problem specific to our projects, where the tests are somewhat separated: for both libblockdev and UDisks the test suite is part of the same repository, but it is written in Python (in contrast to the C code), and for UDisks the biggest part of the test suite uses the D-Bus API, making it even more distinct. But that doesn’t change the fact that without explicitly asking for a “code review of the test suite”, the tests were completely ignored. When included, issues in tests actually form the second biggest group of issues found in this experiment. Looks like we might need tests for our tests. And that’s only for libblockdev; the UDisks “test suite code review” is still in progress.

And the same “tests are not code” issue exists for the other non-code parts of the project: documentation, man pages, and various YAML config files. This shows that a simple one-line prompt is not enough. For a one-time review of the entire project as we did here, writing the prompts manually one by one and trying different tactics and approaches might be a good enough solution, but in the future more detailed instructions will surely be needed.

The obvious next step is to prepare a skill for Claude that would include all these instructions for future code reviews. We don’t intend to do a thorough code review of the entire existing code base for every change (that’s where the AI-assisted code review on pull requests takes over), but we definitely intend to continue with this in the future and are looking forward to new and better models.

Conclusion

Even though this started just as an experiment with the tools currently available to us, using Claude and Opus 4.6 for code review showed some really interesting results. We were able to quickly fix more than two hundred issues in our code (and that’s counting only libblockdev and UDisks), and even though these were not critical or security bugs, fixing them will definitely improve the projects. We will continue working with AI/LLM tools and hopefully eliminate even more issues, and also speed up bringing new features – both by directly implementing them with the help of Claude and simply by offloading some of the other work to it.

Browser wars

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Browser wars


brown fox on snow field

Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.

I was invited to give a popular lecture at the University departments open day, which is a part of the festival. This is the second time in a row that I was invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology caused by the fall of the economy during the 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.

The follow-up

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

The follow-up


people watching concert

Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.

Open-source magic all around the world

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Open-source magic all around the world


woman blowing sprinkle in her hand

Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).

Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Joys and pains of interdisciplinary research


white and black coffee maker

Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012 University of Rijeka became NVIDIA GPU Education Center (back then it was called CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphical processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, finding paths in graphs, and other mathematical operations.

What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

What is the price of open-source fear, uncertainty, and doubt?


turned on red open LED signage

Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of the topic, but instead, present the author's view on the state-of-the-art of a particular field.

The first of the two articles advocates open source and open data. The article describes the Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second of the two articles questions the open-source software development practice, advocating the usage and development of proprietary software. I will dissect and counter some of the key points from the second article below.

On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

On having leverage and using it for pushing open-source software adoption


Open 24 Hours neon signage

Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended the 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced an acronyms page, among other things. During a coffee break, there was a discussion on the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop which somehow remained untold, and I wanted to tell it at some point. One of the attendants, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, the reported bugs get fixed quicker, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.

AMD and the open-source community are writing history

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

AMD and the open-source community are writing history


a close up of a cpu chip on top of a motherboard

Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards having fully open source drivers on Linux. AMD did not walk alone, they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told on Freenode channel #radeon this is not the case and found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement hardly comes as a surprise: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, the public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.

I am still not buying the new-open-source-friendly-Microsoft narrative

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

I am still not buying the new-open-source-friendly-Microsoft narrative


black framed window

Photo source: Patrick Bellot (@pbellot) | Unsplash


This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.

Even though the open sourcing of a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company who believe that free and open source is the way to go, but it still looks like a change just on the periphery.

All the projects they have open-sourced so far are not the core of their business. Their latest version of Windows is no more friendly to alternative operating systems than any version of Windows before it, and one could argue it is even less friendly due to more Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.

Free to know: Open access and open source

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Free to know: Open access and open source


yellow and black come in we're open sign

Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash


!!! info Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.

Q&A with Vedran Miletić

In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.

The movements that promote and cultivate this vibrancy are probably most frequently associated with terms "Open access" and "open source". In order to learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science -- researcher, developer and teacher, currently working in computational chemistry, and a free and open source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation and education.

The academic and the free software community ideals

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

The academic and the free software community ideals


book lot on black wooden shelf

Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash


Today I vaguely remembered there was one occasion in 2006 or 2007 when some guy from academia doing something with Java and Unicode posted on some mailing list related to free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, and he had filed a patent on the algorithm.

Celebrating Graphics and Compute Freedom Day

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Celebrating Graphics and Compute Freedom Day


stack of white and brown ceramic plates

Photo source: Elena Mozhvilo (@miracleday) | Unsplash


Hobbyists, activists, geeks, designers, engineers, etc. have always tinkered with technologies for their own purposes (in early personal computing, for example). And social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies, and freely sharing the know-how through the Internet and more recently through social media. Open-source software and more recently hardware is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2


an old padlock on a wooden door

Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash


Inf2 is a web server at University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.

HTTPS had been enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt coming into public beta, we decided to join the movement to HTTPS.

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials


open book lot

Photo source: Patrick Tomasso (@impatrickt) | Unsplash


Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.

While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.

Fly away, little bird

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Fly away, little bird


macro-photography blue, brown, and white sparrow on branch

Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash


The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.

Mirroring free and open-source software matters

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Mirroring free and open-source software matters


gold and silver steel wall decor

Photo source: Tuva Mathilde Løland (@tuvaloland) | Unsplash


Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website that's used to speed up access for the users residing in the area geographically close to it and reduce the load on the original website. Content distribution networks (CDNs), which are a newer concept and perhaps more familiar to younger readers, serve the same purpose, but do it in a way that's transparent to the user; when using a mirror, the user will see explicitly which mirror is being used because the domain will be different from the original website, while, in case of CDNs, the domain will remain the same, and the DNS resolution (which is invisible to the user) will select a different server.

Free and open-source software was distributed via (FTP) mirrors, usually residing in the universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon after mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of the software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today.

Markdown vs reStructuredText for teaching materials

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Markdown vs reStructuredText for teaching materials


blue wooden door surrounded by book covered wall

Photo source: Eugenio Mazzone (@eugi1492) | Unsplash


Back in summer 2017, I wrote an article explaining why we used Sphinx and reStructuredText to produce teaching materials and not a wiki. In addition to recommending Sphinx as the solution to use, it was general praise for generating static HTML files from Markdown or reStructuredText.

This summer I made the conversion of teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted, so I ended up cooking my own Python script that converted the specific dialect of reStructuredText used for writing the contents of the group website and fixed a myriad of inconsistencies in the writing style that had accumulated over the years.

Don't use RAR

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Don't use RAR


a large white tank

Photo source: Tim Mossholder (@ctimmossholder) | Unsplash


I sometimes joke with my TA Milan Petrović that his usage of RAR does not imply that he will be driving a rari. After all, he is not Devito rapping^Wsinging Uh 😤. Jokes aside, if you search for "should I use RAR" or a similar phrase on your favorite search engine, you'll see articles like 2007 Don't Use ZIP, Use RAR and 2011 Why RAR Is Better Than ZIP & The Best RAR Software Available.

Should I do a Ph.D.?

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Should I do a Ph.D.?


a bike is parked in front of a building

Photo source: Santeri Liukkonen (@iamsanteri) | Unsplash


Tough question, and one that has been asked and answered over and over. The simplest answer is, of course: it depends on many factors.

As I started blogging at the end of my journey as a doctoral student, the topic of how I selected the field and ultimately decided to enroll in postgraduate studies never really came up. In the following paragraphs, I will give a personal perspective on my Ph.D. endeavor. Just like other perspectives from doctors who are "not that kind of doctor," it is specific to the person and the situation, but parts of it might apply more broadly.

Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years


a fountain in the middle of a town square

Photo source: Jahanzeb Ahsan (@jahan_photobox) | Unsplash


This month we had Alumni Meeting 2023 at the Heidelberg Institute for Theoretical Studies, or HITS for short. I was very glad to attend this whole-day event and reconnect with my former colleagues as well as researchers currently working in the area of computational biochemistry at HITS. After all, this is the place and the institution where I worked for more than half of my time as a postdoc, where I started regularly contributing code to the GROMACS molecular dynamics simulator, and where I published some of my best papers.

My perspective after two years as a research and teaching assistant at FIDIT

Posted by Vedran Miletić on 2026-04-15 20:50:48 UTC

My perspective after two years as a research and teaching assistant at FIDIT


human statues near white building

Photo source: Darran Shen (@darranshen) | Unsplash


My employment as a research and teaching assistant at the Faculty of Informatics and Digital Technologies (FIDIT for short), University of Rijeka (UniRi), ended last month with the expiration of my time-limited contract. That moment marked almost two full years spent at this institution, and I think this is a good time to look back at everything that happened during that time. Inspired by the recent posts by the PI of my group, I decided to write my perspective on a period that I hope is just the beginning of my academic career.

GDB source-tracking breakpoints

Posted by Fedora Magazine on 2026-04-15 16:33:47 UTC

One of the main abilities of a debugger is setting breakpoints.
GDB: The GNU Project Debugger now introduces an experimental feature
called source-tracking breakpoints that tracks the source line a breakpoint
was set to.

Introduction

Imagine you are debugging: you set breakpoints on a bunch of
source lines, inspect some values, and get ideas about how to change your
code. You edit the source and recompile, but keep your GDB session running
and type run to reload the newly compiled executable. Because you changed
the source, the breakpoint line numbers shifted. Right now, you have to
disable the existing breakpoints and set new ones.

GDB source-tracking breakpoints change this situation. With the feature
enabled, when you set a breakpoint using file:line notation, GDB captures a
small window of the surrounding source code. When you recompile
and reload the executable, GDB adjusts any breakpoints whose lines shifted
due to source changes. This is especially helpful in ad-hoc debug sessions
where you want to keep debugging without manually resetting breakpoints
after each edit-compile cycle.

Setting a source-tracking breakpoint

To enable the source-tracking feature, run:

(gdb) set breakpoint source-tracking enabled on

Set a breakpoint using file:line notation:

(gdb) break myfile.c:42
Breakpoint 1 at 0x401234: file myfile.c, line 42.

GDB now tracks the source around this line. The info breakpoints command
shows whether a breakpoint is tracked:

(gdb) info breakpoints
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0x0000000000401234 in calculate at myfile.c:42
        source-tracking enabled (tracking 3 lines around line 42)

Now edit the source — say a few lines are added above the breakpoint,
shifting it from line 42 to line 45. After recompiling and reloading the
executable with run, GDB resets the breakpoint to the new line and displays:

Breakpoint 1 adjusted from line 42 to line 45.

Run info breakpoints again to confirm the new location:

(gdb) info breakpoints
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0x0000000000401256 in calculate at myfile.c:45
        source-tracking enabled (tracking 3 lines around line 45)

As you can see, GDB updated the breakpoint line to match the new location.

Limitations

The matching algorithm requires an exact string match of the captured source
lines. Whitespace-only changes or trivial reformatting of the tracked lines
will confuse the matcher and may cause the breakpoint not to be found.

GDB only searches within a 12-line window around the original location. If
the code shifted by more than that — for example, because a large block was
inserted above — the breakpoint will not be found. GDB will keep the
original location and print a warning:

warning: Breakpoint 1 source code not found after reload, keeping original
location.

Source context cannot be captured when a breakpoint is created pending
(e.g., with set breakpoint pending on), because no symbol table is available
yet. When the breakpoint later resolves to a location, it will not be
source-tracked.

Source tracking is not supported for ranged breakpoints (set with
break-range).

Breakpoints on inline functions that expand to multiple locations are not
source-tracked, as each location may have moved differently.

How to try this experimental feature

This feature is not yet available in a stable GDB release. There are two
ways to try it.

Install from COPR (for Fedora users)

A pre-built package is available through a COPR repository. Enable it and
install:

sudo dnf copr enable ahajkova/GDB-source-tracking-breakpoints
sudo dnf upgrade gdb

To disable the repository again after testing:

sudo dnf copr disable ahajkova/GDB-source-tracking-breakpoints

The COPR project page is at:
https://copr.fedorainfracloud.org/coprs/ahajkova/GDB-source-tracking-breakpoints/

Build from source

  1. Clone the GDB repository:
    git clone git://sourceware.org/git/binutils-gdb.git
    cd binutils-gdb
  2. Download and apply the patch from the upstream mailing list:
    https://sourceware.org/pipermail/gdb-patches/2026-April/226349.html
  3. Build GDB:
    mkdir build && cd build
    ../configure --prefix=/usr/local
    make -j$(nproc) all-gdb
  4. Run the newly built GDB:
    ./gdb/gdb

Conclusion

GDB source-tracking breakpoints are an experimental feature currently under
upstream review and not yet available in a stable GDB release. The GDB
documentation at
https://sourceware.org/gdb/current/onlinedocs/gdb.html/Set-Breaks.html
covers all available breakpoint commands. If you try this feature out and
hit any kind of unexpected behavior, feedback is very welcome: you can
follow and respond to the upstream patch discussion on the GDB mailing list
at https://sourceware.org/pipermail/gdb-patches/2026-April/226349.html

Stop building agents, start harnessing Goose

Posted by Adam Miller on 2026-04-15 14:00:00 UTC

Stop building agents, start harnessing Goose

There's a disconnect in the AI Engineering space right now, and I think the open source community has already risen to the occasion to bridge the gap, but I don't see any signal that this is well understood or widely adopted. The industry is overwhelmingly focused on building agents from scratch via custom frameworks, bespoke orchestration layers, hand-rolled tool-calling loops, and so on, when many of the hard problems have already been solved in that layer of the stack. The building block exists. It's open source. It's called goose.

I think for over 90% of use cases, if you're spending your time implementing an agent from scratch, you're already behind or potentially have already lost the race. My hypothesis is that Goose is the building block. It's the small, composable thing that becomes powerful when you wrap it in what the industry is rapidly agreeing is called the Harness.

The composable agent you didn't know you needed

Most people hear "goose" and think either "another AI coding assistant" or "another AI chatbot" (depending on how they came across goose and how they use it). That misunderstanding is the problem. Goose is not a coding assistant. It is not a chatbot. It is not a Claude Code competitor, though it can be configured to act as all of those things. At its core, goose is a small, configurable agent runtime with an extension-based architecture that can be composed into virtually anything.

It operates on three components:

  • Interface: Desktop app or CLI/TUI that collects user input and displays output.

  • Agent: The core logic engine that manages the interactive loop: sending requests to LLM providers, orchestrating tool calls, and handling context revision.

  • Extensions: Pluggable components built on the Model Context Protocol (MCP) that provide specific tools and capabilities.

A small core with a lot of power, delivered through native extensions, external plugins, and configuration options. The agent core itself is minimal: an interactive loop plus context management. That's it. All capabilities come through the extension system.

You can strip goose down to nothing. No external capabilities. No tool calling. No skills. No plugins. You can even configure it so it cannot access the internet, only the inference service to talk to the model (which can be local). At that point, it's a plain chatbot with no agency whatsoever.

Or you can go the other direction entirely.

From zero to everything

Configure goose with the Developer extension, Computer Controller, Memory, and a handful of MCP servers and you have a working replacement for Claude Code, Codex, Gemini CLI, OpenCode, or any other similar tool. Same capabilities, no vendor lock-in, and you choose your own inference provider from over 25 options (at the time of this writing), including Anthropic, OpenAI, Google Gemini, Groq, Mistral, and more. You can run fully local inference via goose's native inference provider, or offload to Ollama, Ramalama, LM Studio, or Docker Model Runner. The full list of providers is in the goose documentation.

Put this together and you're well on your way to unlocking the full potential, but you're just getting started.

Recipes: reproducible, composable workflows

Where goose gets interesting is its composition model. Goose Recipes are reusable, shareable workflow definitions that package together instructions, extensions, parameters, provider settings, retry logic, and structured response schemas. A recipe can be as simple as a single prompt with a specific extension configuration. Alternatively it can be sophisticated, composed of subrecipes where each subrecipe is effectively another goose agent with its own configuration: its own extensions, plugins, inference provider, system prompt, and skills.

Subrecipes run in isolated sessions with no shared conversation history, memory, or state. The main recipe's agent decides when to invoke them, can run them sequentially or in parallel, and chains their outputs through conversation context. Compositional agent orchestration without writing a single line of framework code.

You're not writing an orchestration layer. You're not building a DAG executor. You're not implementing tool-calling logic. You're writing YAML that describes what you want done and goose handles the how.
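The two invocation modes described above, sequential chaining and parallel fan-out of isolated sub-agents, can be illustrated with a toy model (this is a generic sketch of the pattern, not goose's actual API; the agent functions below are stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor

def research(topic: str) -> str:
    # Stand-in for a subrecipe running in its own isolated session.
    return f"notes on {topic}"

def summarize(notes: str) -> str:
    # Stand-in for a second subrecipe; it sees only the previous output,
    # never the other subrecipe's internal state or history.
    return f"summary of ({notes})"

def run_sequential(steps, arg):
    """Chain subrecipes: each step's output becomes the next step's input."""
    for step in steps:
        arg = step(arg)
    return arg

def run_parallel(steps, arg):
    """Run independent subrecipes concurrently; they share no state."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(step, arg) for step in steps]
        return [f.result() for f in futures]
```

For example, `run_sequential([research, summarize], "mirrors")` produces `"summary of (notes on mirrors)"`; in goose, the same chaining is declared in a recipe rather than coded by hand.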

Goosetown: multi-agent orchestration, no framework required

You can take this all the way to the extreme of a fully autonomous software factory like the one Steve Yegge outlines in his now-infamous blog post, "Welcome to Gas Town", and implements via his Gastown project. Gastown is a multi-agent workspace manager for orchestrating Claude Code, GitHub Copilot, Codex, Gemini, and other AI agents with persistent work tracking. It's a Go application with concepts like Mayors, Rigs, Polecats, Hooks, Convoys, and Beads. It's a real engineering effort to coordinate 20-30 agents on a codebase.

You can do exactly that by using goose as the building block. The open source community did it. They looked at Gastown and re-implemented its core concepts using goose's native capabilities. The result is Goosetown. Goosetown is a multi-agent coordination system that orchestrates "flocks" of AI agents (researchers, writers, workers, reviewers) to decompose and execute complex tasks. Goosetown uses goose's subagent delegation, skills system for role-based specialization, inter-agent communication via a broadcast channel called the "Town Wall," and multi-model support for adversarial cross-reviews where different LLMs review each other's work.

If you look at the code, it's just a few flat files, some shell scripts, some skills markdown, and some agent definitions.

All of this built on top of goose. Not alongside it. Not wrapping it. On it. Using the primitives goose already provides: skills, subagents, extensions, and recipes.

Goose as a service

Goose also runs as a daemon, exposing itself to other applications via the Agent Client Protocol (ACP) (a standardized JSON-RPC protocol developed by Zed Industries). ACP does for AI agents what LSP did for language servers. ACP decouples agents from editors and frontends, so goose can be embedded directly into Zed, JetBrains, Neovim, or any ACP-compatible environment.

The composability runs in both directions. Goose can also consume other ACP agents as providers, routing its LLM calls through Claude Code, Codex, or Gemini while keeping its own extension ecosystem and UI. As Adrian Cole wrote in his blog post "How to Break Up with Your Agent":

"Pick the UI you like. Pick the agent you like. They don't have to be the same thing."

This bidirectional composability — goose as a component and goose as an orchestrator — is what separates it from other agent tools.

Open governance, no vendor lock-in

Goose is fully open source under the leadership of the Agentic AI Foundation (AAIF), which provides vendor-neutral governance under the umbrella of the Linux Foundation. AAIF also hosts the Model Context Protocol (MCP) itself, so the standards goose builds on are governed with the same neutrality.

This matters. When you build your workflows on goose, you're building on a foundation governed by a neutral body with a Governing Board, a Technical Committee, and a transparent contribution model. This is the same open, collaborative, and neutral model that made Linux and Kubernetes into reliable core components of the entire software industry, and it's the same reason I think it's worth investing time and energy into.

It's no secret I'm an open source nerd, and goose checks all the boxes.

The harness is the thing

We've collectively been on a journey. First it was Prompt Engineering, crafting the right words to get the right output. Then it was Context Engineering, making sure the model has the right information at the right time. Now, it seems we've arrived at the next turn in this adventure we all find ourselves in: Harness Engineering.

Ralph Bean nails this in his blog post "What Even Is the Harness?". The harness is the enablement layer. It's everything you add to the agent runtime that gives you control over your outcomes:

"Harness — the enablement layer. AGENTS.md files, skills, custom tools, hand-crafted linters, system prompts for task-oriented agents. These are the things you engineer, iteratively, to increase the chances the agent gets things right. This is what Birgitta Böckeler calls the user harness and is where Mitchell Hashimoto's attention lives."

—Ralph Bean

Read that again. The harness is not the agent. The harness is what you add to the agent. The AGENTS.md files. The skills. The custom MCP tools. The hand-crafted linters. The system prompts. The recipes and subrecipes. The extension configurations. The provider choices. The permission policies.

This is where your engineering effort belongs. Not in building the interactive loop, or implementing tool-calling JSON parsing, or writing context window management, or building MCP client libraries. Goose already does all of that and does so with the full backing of the AAIF, the Linux Foundation, and a vibrant open source community.

In most cases, and I'd argue almost all cases, your job is to build the harness.

The 90% argument

I think for over 90% of use cases where someone is building an agent today, goose is a better starting point than a blank text editor or a vibe coding session (are we calling it Agentic Engineering yet?).

If you need a coding assistant, goose does that. If you need a research agent, configure goose with web scraping extensions and a research-focused recipe or skill. If you need a CI/CD bot, run goose in daemon mode with ACP or orchestrate it with scripts/recipes in your CI job runner of choice. If you need multi-agent orchestration, compose goose instances with subrecipes or build a Goosetown-style flock. If you need local-only, air-gapped inference, point goose at Ollama, Ramalama, LM Studio, or its native inference provider. If you need to integrate with your existing editor, goose speaks ACP natively or you can set GOOSE_PROMPT_EDITOR and run the whole flow from inside your editor of choice. If you need vendor-neutral governance, it's under the Linux Foundation umbrella via AAIF.

The remaining 10%? Those are the genuinely novel agent architectures, the research projects pushing boundaries, the use cases where you do need to control every byte of the agent loop. For those, build from scratch. For everything else, build the harness. I'm not saying you can't build agents from scratch. I'm simply suggesting that you probably don't need to.

A call to action

If you're a professional technologist or an aspiring AI Engineer, I'd encourage you to shift your mental model. Stop thinking about building agents. Start thinking about harnessing them. At this point in the AI hype cycle, the agent is mature enough to be the commodity. The harness is your competitive advantage.

Install goose. Strip it down to nothing and build it back up. Write a recipe. Compose some subrecipes. Add skills. Configure extensions. Point it at different providers. Run it as a daemon. Embed it in your editor. Build a flock. Engineer the harness.

Go forth and harness your agents.

Happy hacking. <3

Streaming syslog-ng data to your lakehouse using OpenTelemetry

Posted by Peter Czanik on 2026-04-15 12:12:22 UTC

Version 4.11.0 of syslog-ng contains contributions from Databricks related to OAuth2 authentication. Recently, they published a blog post about how this enables their customers to send logs to their data lake using syslog-ng and the OpenTelemetry protocol.

The syslog-ng project received two contributions from Databricks in the last weeks of 2025. The first one turned the already existing OAuth2 support generic and extensible, so it can be used anywhere, not just with Microsoft Azure (but of course, Azure compatibility was preserved). The next pull request was built on the first one and enabled OAuth2 support for gRPC-based destinations, like OpenTelemetry, Loki, BigQuery, PubSub, ClickHouse, etc. These changes were released as part of the syslog-ng 4.11.0 release. You can read more about these in the release notes at https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.11.0

Besides an excellent overview of syslog-ng, the related Databricks blog post also provides step-by-step instructions on how to use syslog-ng with their product. You can read it at: https://community.databricks.com/t5/technical-blog/streaming-syslog-ng-data-to-your-lakehouse-powered-by-zerobus/ba-p/153979

syslog-ng logo

Originally published at https://www.syslog-ng.com/community/b/blog/posts/streaming-syslog-ng-data-to-your-lakehouse-using-opentelemetry

The best way to setup a kanban board

Posted by Ben Cotton on 2026-04-15 12:00:00 UTC

The best way to set up a kanban board is…whatever way works best for you. I have at least two distinct styles of board setup across three tools (yes, I have a problem) because that’s what works best for me. How you set up your boards is a matter of style. The most important thing is to set it up in such a way that you’ll actually use it — an unused board is full of lies and can cause confusion with your collaborators. Since I often get asked for help on this, it seems worth writing down some considerations.

Direction of flow

Most tools I’ve used assume a left-to-right flow. I recently switched to using a right-to-left flow after reading Philippe Bourgau’s post on the topic. Using right-to-left means you’re starting with the stuff that’s in progress instead of the backlog. You “pull” cards instead of “pushing” them. I made the suggestion at work, and folks seem to generally like the change. The main problem is that a lot of tools I’ve used treat the leftmost column as the default starting point, so the card creation experience involves an extra click or two.

Number of columns

This is one that I’ve often seen people overthink. In the simplest configuration, you have “to do”, “doing”, and “done.” That’s often enough. Simplicity is good. On the other end, one of the boards I use at work has something like a dozen columns. Not every card flows through every column, so it’s not as wild as it sounds. Much of the column sprawl is because the tool we use doesn’t support swimlanes. I would normally not suggest double-digit columns, but it works well in this case.

A good rule for deciding whether a column is necessary: if it never has any cards, or cards only live in it briefly, it’s not worth keeping.

Examples

Here are a few examples of how I have different boards set up.

The meal planning board my wife and I use to plan the week’s dinner has three columns: ideas, need ingredients, and planned. “Ideas” is the backlog, “need ingredients” means we intend to cook it but first we have to go grocery shopping, and “planned” is ready to cook.

The board I use to track laundry has five columns: dirty (the backlog), ready (sorted), washing (duh), drying (also duh), and folding (triple duh). The folding column can probably go away, but sometimes the basket sits for a few hours or until the next day.

My board for Duck Alignment Academy has five columns (plus one). “New” is for ideas that I haven’t really fleshed out yet. “Ready” is for posts that I could sit down and write. “In progress” is for posts that I am currently writing. “Scheduled” is for completed posts waiting to publish (I wish this had more cards in it). “Done” is for posts that have published. There’s an extra “archived” column that I move cards to after I send that month’s Duck Alignment Academy newsletter. (I have that column because the tool doesn’t support archiving cards directly.)

Extra fields/metadata

Most tools let you apply labels, add due dates, create custom fields, and so on. I’ve certainly made use of that when setting up boards, but I find myself not really paying attention to it most of the time. There’s often an urge to design a system so that everything will be perfectly organized. But in the same way that a backup you never test restores from is not reliable, metadata that you only write is not useful. Metadata is easy to overoptimize because making it useful requires everyone using the board to buy in and then to actually have a reason to use it. I wrote a similar piece about issue labels earlier this year.

In my experience, it’s almost always better to ignore metadata when initially creating a board. Only when you can identify a concrete problem that you’re actually experiencing should you add metadata.

A great example of this is in my Duck Alignment Academy board. When I was first creating the board, I was also creating the website. I had cards for tasks like domain registration, hosting setup, page creation, and so on. In the four-plus years since I launched the site, the cards have almost exclusively been blog posts. The “blog post” label that I created doesn’t add a lot of value. What does add value are two custom fields I added: “URL” and “description.” I put the post’s URL and the excerpt (that gets used for social media preview and the like) into those two fields so that later on when I go to add them to the newsletter or share them elsewhere, they’re available and consistent.

Another genuinely useful addition is on my meal planning board: labels for the primary protein in a meal, which help me see at a glance if we’ve planned chicken five days in a row.

Work in progress limits

If your tool supports setting work in progress limits, I recommend that you do. If nothing else, it forces you to be honest about what you’re actually working on and what you’ve set aside for one reason or another. I’ve found that WIP limits seem to work better on single-user boards, but I haven’t used them much on collaborative boards.

Automate and integrate

The more your board does the boring work for you, the more likely you are to keep it up to date. If it supports auto-archiving cards when they’re complete, set that up. If you can tie it in to the tools you’re already using, do that. (The kanban board available in GitHub issues works pretty well in that regard.) Make the board a hub of your work and you’ll get use out of it. Make the board a chore that you have to go update and you won’t.

This post’s featured photo by airfocus on Unsplash.

The post The best way to setup a kanban board appeared first on Duck Alignment Academy.

Matrix server maintenance

Posted by Fedora Infrastructure Status on 2026-04-14 11:15:00 UTC

Element Matrix Services is performing scheduled maintenance on our matrix server (fedora.im).

Affected Services:

  • chat.fedoraproject.org
  • fedora.im
  • matrix services

Nominate Your Fedora Heroes: Mentor and Contributor Recognition 2026

Posted by Fedora Magazine on 2026-04-14 09:30:00 UTC
Mentor and Contributor Recognition 2026 - Generated using Gemini Nano Banana Pro by Akashdeep Dhar

It’s time to show our appreciation for the amazing contributors who help shape the Fedora community.

The Fedora Project thrives through the devotion, guidance, and tireless drive of the contributors who consistently show up. From developing test cases to onboarding contributors, from technical writing to coordinating events, it is these vital champions who ensure that the community flourishes. In coordination with the Fedora Mentor Summit 2026, we will be returning to Flock To Fedora 2026 to announce the winners. The wiki page linked below reflects the deep gratitude and careful thought behind this community recognition program.

As we prepare to spotlight exceptional mentors and contributors across the Fedora Project, we invite you to help us appreciate the amazing contributors who help shape the community. Whether it is a veteran mentor who helped you begin your journey or a contributor whose efforts have truly reshaped the community’s landscape, now is the moment to celebrate them! Discover more about the nomination guidelines and submit your entry using the link provided below:


👉 Find more information here: https://fedoraproject.org/wiki/Contributor_Recognition_Program_2026
👉 Submit your nominations here: https://forms.gle/mBAVKw4qLu14R5YY7

🗓 Deadline: 15th May 2026

Let us appreciate the amazing contributors who help shape the community. Your nomination could be the recognition that might enable them to do more – and a moment of achievement for the entire community.

Rustbucket

Posted by Tony Asleson on 2026-04-14 01:38:18 UTC

Sorting a terabyte of data in the late 1990s meant serious hardware, serious planning, and probably a serious budget approval process. Today you can do it on a workstation before lunch. I wanted to know how fast, so I wrote rustbucket to find out.

It’s a two-phase external sort implemented in Rust, built around io_uring, and named for reasons that should be obvious to anyone who has spent time with either Rust or storage systems.

RHEL 10 (GNOME 47) Accessibility Conformance Report

Posted by Felipe Borges on 2026-04-13 10:24:49 UTC

Red Hat just published the Accessibility Conformance Report (ACR) for Red Hat Enterprise Linux 10.

Accessibility Conformance Reports document how our software measures up against accessibility standards like WCAG and Section 508. Since RHEL 10 is built on GNOME 47, this report gives a good look at how our stack handles everything from screen readers to keyboard navigation.

Getting a desktop environment to meet these requirements is a huge task and it’s only possible because of the work done by our community in projects like: Orca, GTK, Libadwaita, Mutter, GNOME Shell, core apps, etc…

Kudos to everyone in the GNOME project that cares about improving accessibility. We all know there’s a long way to go before desktop computing is fully accessible to everyone, but we are surely working on that.

If you’re curious about the state of accessibility in the 47 release or how these audits work, you can find the full PDF here.

misc fedora bits second week of april 2026

Posted by Kevin Fenzi on 2026-04-12 17:14:00 UTC
Scrye into the crystal ball

Another saturday and... oh wait, it's sunday! I was away almost all day yesterday (morning at https://beaverbarcamp.org/ and afternoon/evening visiting family), so this will be a day late. :)

This week we were still in Fedora 44 final freeze (we canceled the go/nogo on thursday because there were still unaddressed blockers) so there was a lot of catching up on old issues/processing docs and other pull requests and the like.

There were a few things that stood out however:

Matrix bots learn new tricks

Diego wrote up a pull request to adjust our matrix bot to point to forge.fedoraproject.org for things that have moved there from pagure.io ( https://github.com/fedora-infra/maubot-fedora/pull/150 ), so I merged it, figured out how to cut a release there, and figured out how to deploy it first to staging and then to production.

So, now !epel !ticket !releng should all work for those trackers, and !forge org repo should work as a generic pointer to any forge project.

Hopefully this will make meetings and discussion on matrix nicer.

Fixed a websocket proxy issue with openqa

We had to move openqa behind anubis as the scrapers discovered it and were making it unusable. Unfortunately, openqa has a mode where you can update test screens that uses websockets, and those were not correctly passing through anubis, so that functionality was broken.

I was going to go look at the apache docs and see if I could track down what needed to be set to do that, but decided to just ride the AI wave and ask an AI agent about it.

It snarfed in the config, thought about it for a bit, then spewed out a solution. The solution was largely for older apache versions (but I didn't tell it what apache version we were running), but at the end it correctly noted that on newer versions passing "upgrade=websocket" to the proxy commandline would fix it.

It did. It definitely saved me time poking through the apache docs.
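For reference, the directive in question (supported by Apache httpd 2.4.47 and later) looks something like this; the path and backend address here are illustrative placeholders, not our actual configuration:

```apache
# mod_proxy_http can upgrade a proxied connection to a websocket when
# asked to, so a separate mod_proxy_wstunnel rule is not needed.
ProxyPass "/ws/" "http://backend.example.com/ws/" upgrade=websocket
```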

a few new builders soon

We have had a number of machines we moved from our old datacenter that we wanted to repurpose as builders sitting around. It's not been very high priority to get them set up, but what better time than a freeze to get them online.

So, I got 3 of them ready, which involved updating a bunch of firmware on them, installing them, configuring networking, etc.

Will need a small freeze break to add them into ansible and finish them up, but then there should be 3 more buildhw-x86 builders.

Fedora 44 upgrades

Since I was catching up on things, I decided to go ahead and upgrade my main server and its vmhost to Fedora 44 this morning.

Everything went super fast and painlessly, aside from one issue with matrix-synapse (the f43 package is newer than the f44 one, so it would not work with my config/database). For now I just "downgraded" (or is that "sidegraded"?) to the f43 one. It seems to have a pretty nasty tangle of rust crate version changes, so it might not be easy to sort out quickly.

Family Vacation time

The week after next (the week of April 20th) I will be away all week. I might look in on matrix/email some, but don't count on it. I'll be on a family vacation in Hawaii. Please file tickets and be kind to my co-workers, who are perfectly capable of handling anything in my absence. :)

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116392979078747195

HowTo: How to install and pin a specific version with Flatpak

Posted by Rénich Bon Ćirić on 2026-04-10 20:30:00 UTC

Has it ever happened to you that an update of your favorite software comes out and you just can't jump to it? Well, it turns out Bitwig Studio reached version 6 and, well, I was out of work and short on cash for the upgrade.

I use it via Flatpak because, honestly, I started, proposed, and donated the first implementation on Flathub myself. Anyway, I had to stay on version 5.x, and I had never needed to pin a version with Flatpak before.

I asked an LLM how to do it (why lie to you, friend?) and it gave me the answer. It's this easy:

Step by step

Check the log:

First, see which versions are available in the Flathub history.

flatpak remote-info --log flathub com.bitwig.BitwigStudio
Install the desired version:

Copy the hash of the commit you want and install it.

sudo flatpak update --commit=00535d779d2ebead55f129a406ed819064b7d3a28bd638aa25a0c8dda919197e com.bitwig.BitwigStudio
Pin it (mask):

This is the most important step, so the app doesn't get updated by accident the next time you run a general update.

flatpak mask com.bitwig.BitwigStudio

Not hard, right? With this you can rest easy knowing your workflow won't break because of an update you didn't ask for... or, rather, one you can't afford. ;D

On proprietary software

Honestly, I'm not a big fan of this kind of thing. I always recommend staying up to date but, with proprietary software you pay to use, I had no choice. The Bitwig folks don't maintain older versions in any convenient way; you're forced to keep up with them.

Note

Many will consider it a sham, or a betrayal, to use non-free software on Fedora. In my case, I spent several years trying to record my songs with free software. I never could. Not for lack of software but, more than anything, for lack of infrastructure.

To record with pure Open Source like Ardour, which is wonderful, you need a pro studio: mics, a drum kit, a drummer, and everyone with free time. Since I'm terrible at coordinating people, if I don't do things in the moment, they never get done. Bitwig Studio saves my life because it ships with the whole arsenal: samples, instruments, and really high-quality FX.

At least I don't have to use Windows. That's much more than I had in the 2000s with good old Renoise or LMMS. Workflow matters a lot, and although GNU/Linux has incredible plugins (like the Calf ones in LV2), putting together a stable session was an impractical mess.

Conclusion

There you have it, folks. I hope this mini-howto helps someone avoid struggling with Flatpak versions. Keep making music!

Community Update – Week 15 2026

Posted by Fedora Community Blog on 2026-04-10 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora project.

Week: 06 – 10 Apr 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

QE

This team takes care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

  • General release validation and bug reporting work throughout the whole week, in preparation for F44 Final.
  • Automated a new openQA test for basic IPv4 and IPv6 connectivity (https://forge.fedoraproject.org/quality/os-autoinst-distri-fedora/pulls/504)
  • Blockerbugs migration to Forge almost ready for staging deployment
  • Ongoing collaboration with OSCI team to improve and rationalize generic test pipelines
  • Contributed Kiwi and Koji logging improvements to Pungi as part of compose-critical script work

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

UX

This team works on improving user experience, providing artwork, usability,
and general design services to the Fedora project.

  • Madeline submitted a proposal for All Things Open

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update – Week 15 2026 appeared first on Fedora Community Blog.

⚙️ PHP version 8.4.20 and 8.5.5

Posted by Remi Collet on 2026-04-10 05:48:00 UTC

RPMs of PHP version 8.5.5 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.20 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.26 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

 

⚙️ PHP version 8.4.19 and 8.5.4

Posted by Remi Collet on 2026-03-13 05:32:00 UTC

RPMs of PHP version 8.5.4 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.19 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.26 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

 

Friday Links 26-12

Posted by Christof Damian on 2026-04-09 22:00:00 UTC
Open sandwich with vegetarian coronation chicken

Two weeks of links. A lot of AI-related articles again. I liked the paper about using chatbots in leadership, and the one about the YAML of the mind. If you want a podcast, check out the one about night trains.

Quote of the Week
The technologies we use to try to “get on top of everything” always fail us, in the end, because they increase the size of the “everything” of which we’re trying to get on top.
Four Thousand Weeks
Oliver Burkeman

Leadership

We-ness: The secret cause of Psychological Safety [Podcast] - very interesting deep dive into how feeling like a team can increase psychological safety.

Securing your servers behind Cloudflare: Authenticated Origin Pulls

Posted by Guillaume Kulakowski on 2026-04-09 18:24:00 UTC
Cloudflare effectively protects your services… unless your server remains directly reachable. In this article, we set up Authenticated Origin Pulls to guarantee that only requests coming from Cloudflare can reach your infrastructure, with two levels of security.

Handling a PR disaster for your project

Posted by Ben Cotton on 2026-04-08 12:00:00 UTC

I want to say up front that the point of this post is not to disparage Trivy or its maintainers. They’ve had a rough few weeks and I feel for them. I only discuss Trivy as a recent example of a bad day at the office.

The saying “there’s no such thing as bad publicity” always felt a little gross to me. There are a lot of reasons you might get noticed that are bad, and it should feel bad to do bad things. But maybe there’s something to it. If you handle a PR disaster well, you might come out ahead (although you’d still probably prefer to not deal with it in the first place).

Bad publicity is good?

As you may know, the Trivy project fell victim to an attack last month. The compromise affected not just Trivy, but its sponsoring company and many downstream projects and companies. It was — and continues to be — a big deal.

Out of curiosity, I decided to look to see if people were switching from Trivy to other projects. I decided to use GitHub stars as a proxy for interest. Yes, GitHub stars are meaningless. But I figured a relative change might indicate interest in alternatives. Much to my surprise, Trivy’s star count increased pretty dramatically post-compromise. Syft and cdxgen, which seem to be the main alternatives for SBOM generation, saw no such bump. Of course, this doesn’t necessarily mean that people aren’t shifting away from Trivy. But I expected to see the opposite of what the star counts show.

Graph of GitHub stars over time for three projects. Trivy shows a steady increase with a sharp uptick in the last few months. Syft and cdxgen show slower-but-still-steady increases with no recent changes.

The past few weeks have been a PR nightmare for Trivy, and I’m sure it’s been entirely unpleasant for the maintainers. I don’t wish this on anyone, but if you find yourself in this kind of situation, take heart. There’s at least some indication that your project can survive this kind of catastrophic event.

It’s worth noting that it may just be too early to tell what the future holds for Trivy. While they’ve received a lot more stars and attention, there haven’t been any commits to main in three weeks. People are still opening issues and pull requests, so it may just be that the maintainers are still focused on cleanup. I hope that the project comes back stronger and more secure, but time will tell.

Disaster recovery

If your project has a PR nightmare, stay calm. If you have the resources to bring in a crisis PR expert, do that and ignore everything else I say after this. Most likely, though, you don’t have the resources to bring in a pro. So here’s my amateur advice:

  • Mitigate the damage. Take whatever technical steps are necessary to keep the problem from getting worse. This may include rotating credentials, disabling CI pipelines, locking issues, or turning off services. This step is the most important because you don’t want the problem getting worse while you’re handling communication.
  • Create an advisory if appropriate. Do this if there’s a vulnerability in your software that you’re addressing. You can skip this for other flavors of disaster. If you’re on GitHub, you can follow the GHSA creation process. Other hosting platforms may have their own system. As a last resort, you can request a CVE at https://cveform.mitre.org/.
  • Communicate honestly and factually. Tell people what happened and, if applicable, how to know if they’re affected and how to address it. Don’t try to hide uncomfortable facts. Be honest, but don’t speculate or guess. You can always update as new facts come to light, but you can’t un-say things.
  • Don’t feel obligated to correct everyone. As the story spreads, people will get things wrong. Perhaps aggressively so. You don’t need to put your energy into correcting every misstatement posted by some rando online. If news outlets or influential people misstate facts, you can give them the correct version. If they draw inferences that you disagree with, it’s probably best to let it go.

This post’s featured photo by Ante Hamersmit on Unsplash.

The post Handling a PR disaster for your project appeared first on Duck Alignment Academy.

🎲 PHP version 8.4.20RC1 and 8.5.5RC1

Posted by Remi Collet on 2026-03-27 06:38:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.5.5RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.4.20RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.5.4RC1 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.1 and EPEL-10.1
  • EL-9 packages are built using RHEL-9.7 and EPEL-9
  • EL-8 packages are built using RHEL-8.10 and EPEL-8
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • The RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).
  • versions 8.4.19 and 8.5.4 are planned for March 12th, in 2 weeks.

Software Collections (php84, php85)

Base packages (php)

Howto: a very nice way of organizing your bash env variables and settings

Posted by Rénich Bon Ćirić on 2026-04-08 01:30:00 UTC

So, I know you have either ~/.bashrc and ~/.bash_profile or ~/.profile in your installation. We all do. And many apps we use on a daily basis use those. Plus, you like your aliases, your own env variables and maybe even one or two bash functions you like to use.

That creates a problem. You have everything in a single file (or two) and you have a mess. It's hard to read, hard to organize, and a single mistake renders the file useless. Well, maybe not useless, but you get the idea. It's a bad idea to have a 500-line config file, don't you think?

So, which solutions are there?

Easy, just npm install .... Yeah, right. Who wants more terrible TypeScript/EcmaScript code in their environment? I mean, really. And it comes in droves! Huge amounts of it everywhere! Come on, we can do better with plain old Bash.

The trick is actually much simpler. Here's how I do it:

Filename: ~/.bashrc

## Load any supplementary scripts from ~/.bashrc.d/
if [[ -d $HOME/.bashrc.d ]]; then
   for f in "$HOME"/.bashrc.d/*.bash; do
      [[ -f "$f" ]] && source "$f"
   done

   unset -v f
fi

Note

This snippet goes into your ~/.bashrc. It checks if the directory ~/.bashrc.d exists and then loops through every file ending in .bash to "source" it. This effectively evaluates those files into your current session.
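If you want to convince yourself of how the loop behaves before touching your real dotfiles, you can dry-run the same loader against a throwaway directory (all paths and file names below are made up for the demo):

```shell
# Simulate the ~/.bashrc.d loader against a temporary directory.
# The directory and snippet names here are hypothetical, for demonstration.
tmp=$(mktemp -d)
mkdir -p "$tmp/.bashrc.d"

# Drop in two snippet files; the glob sorts them lexically,
# so numeric prefixes give you predictable ordering.
echo 'export DEMO_VAR=hello' > "$tmp/.bashrc.d/10-vars.bash"
echo 'export DEMO_VAR="$DEMO_VAR world"' > "$tmp/.bashrc.d/20-more.bash"

# Same loop as in ~/.bashrc, pointed at the temp directory.
if [[ -d "$tmp/.bashrc.d" ]]; then
   for f in "$tmp"/.bashrc.d/*.bash; do
      [[ -f "$f" ]] && source "$f"
   done
   unset -v f
fi

echo "$DEMO_VAR"   # → hello world
rm -rf "$tmp"
```

Since files are sourced in lexical order, numbered prefixes (10-, 20-) let you control precedence, the same convention that system-wide .d directories use.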

Now, you can do the same for your bash profile, which is the preferred place to put things like environment variables and such.

Filename: ~/.bash_profile

## Load any supplementary scripts from ~/.bash_profile.d/
if [[ -d ~/.bash_profile.d ]]; then
   for f in ~/.bash_profile.d/*.bash; do
      [[ -f "$f" ]] && source "$f"
   done

   unset -v f
fi

This is a neat trick, if I may say so. It enables me to create independent files for different things. For example, I like my $GOPATH env variable to point to ~/src/go. Also, I like to add Go's bin directory to my $PATH. Easy enough, right?

But, where do I put it?

Main Differences:
~/.bashrc:
Read every time you open a new interactive terminal. Perfect for aliases and prompts.
~/.bash_profile:
Read only once upon login. Best for environment variables that should be inherited by all child processes.

Tip

If you want to be able to overwrite your $PATH entries and expect them to persist between terminals without re-logging, putting the loading logic in ~/.bashrc is the way to go.

That said, I am putting my go.bash file in ~/.bashrc.d/go.bash:

# go settings
export GOPATH=$HOME/src/go
export PATH=$PATH:$GOPATH/bin

Now, it's as easy as opening a new terminal (I set up my terminal to use a login shell) or I can just source ~/.bash_profile. In Fedora, sourcing ~/.bash_profile will source ~/.bashrc if it exists anyway. ;D

One more customization I really like:

# ~/.bash_profile.d/ls.bash
alias ls='ls --color=auto --group-directories-first'

That one makes my directories appear before the files when using ls. The --color=auto flag is just to make the default colors stay.

Conclusion

Keep your environment clean, dude. Organizing your configs in .d directories makes it much easier to manage and debug. No more messy files!

Referencias

Fedora Code of Conduct Report 2025

Posted by Fedora Community Blog on 2026-04-07 12:00:00 UTC

The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project’s social fabric.

This post covers the year of reports received in the 2025 calendar year. The purpose of publishing the annual Code of Conduct Report is to provide transparency, insight, and awareness into the health signs of the community.

How’d it go in 2025

In 2025, we had a slight uptick in engagement from 2024. 14 reports were opened in 2025, compared to 11 reports in 2024. While we saw some members step down this year, the Fedora Code of Conduct Committee (CoCC) also refreshed its membership with new voices. Jef Spaleta, Chris Idoko, and Ankur Sinha were nominated this year to maintain responsiveness and steer our community standards forward.

The majority of issues reported in the year 2025 were largely handled through “shoulder taps” and formal reach-outs. This is in comparison to disciplinary actions or emergency action requiring bans or long-term suspensions. While reports did increase from 2024 to 2025, the difference is negligible. The Committee expects this number to fluctuate annually, as world events and international conflicts often impact the social dynamics of communities like ours.

You can see the full data from 2025 in the table below.

Community Health Assessment

After six years of reporting, looking back at our journey from the modernization of the Code of Conduct to where we stand today, it is encouraging to see how much we have grown together. Yearly reports indicate that while our community continues to have conflicts (as any healthy community ought to), incident severity has continued to decrease across reports spanning 2020 through 2025. We attribute this consistent reduction in “opened reports” and “CoC interventions” to the maturity of our self-moderation culture.

A significant part of this positive atmosphere is thanks to the refreshed CoC guidelines established by Marie Nordin in 2021, which successfully addressed the peak in incidents that occurred during the COVID-19 pandemic. These guidelines were roadmaps for how we want to treat each other. Seeing them in action in our reports shows that they are working as hoped. We feel the community is in a healthy place at this time, but a healthy committee is one that never stops listening. We would love to hear your thoughts, feedback, and suggestions on how we can continue to help our shared spaces feel safe, inclusive, and welcoming.

Year | Reports Opened | Reports Closed | Warnings Issued | Moderations Issued | Suspensions Issued | Bans Issued
2025 | 14 | 14 | 1 | 2 | 0 | 0
2024 | 11 | 11 | 1 | 0 | 1 | 0
2023 | 17 | 17 | 5 | 3 | 1 | 1
2022 | 21 | 24 | 6 | 3 | 0 | 0
2021 | 23 | 24 | 2 | 1 | 0 | 1
2020 | 20 | 16 | 8 | 4 | 2 | 0

Looking forward to 2026

If you witness or are part of a situation that violates Fedora’s Code of Conduct, please open a private report on the [Code of Conduct repo] or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.

Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn’t okay, and you don’t want to make a big deal… open that report anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.

Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day to day in our community. Keep it up, and keep being awesome Fedora, we <3 you!

About the Committee

The Fedora Project’s Code of Conduct and reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Jef Spaleta; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community-nominated members. Jef Spaleta, Chris Onoja Idoko, and Ankur Sinha were nominated this year.

We’re incredibly grateful to Josh Berkus and Laura Santamaria for stepping up as term-limited members of the Fedora Code of Conduct Committee (CoCC). Their commitment ensured we had consistent coverage through September 30th, 2025, providing vital support until our newest nominees were fully onboarded and trained.

The post Fedora Code of Conduct Report 2025 appeared first on Fedora Community Blog.

One Day

Posted by Justin Wheeler on 2026-04-07 08:00:00 UTC

It has been a minute. If you look at my blog archives, my last post went up in 2024. Recently, I decided it was time for a massive digital renovation: I completely migrated this blog from WordPress to Hugo using my own theme.

Fortunately, I was able to meet my one key requirement. The migration was a complete one-to-one pairing from WordPress to Hugo. Every post I wrote between 2015 and 2024 made the jump intact. Even the images and URL schema! You can go back, browse the archives, and read a decade’s worth of my written word in my new site. Best of all, every old URL for my WordPress blog will seamlessly redirect to the new home here.

But I didn’t just move the content; I also built a custom Hugo theme from the ground up. (Because of course I did.) I began working on the theme over a year ago for my own site (this very one!). Originally I developed the code inside my own website, but eventually I moved the theme code into its own repository in June 2025. In March, I spent a lot of time on the theme, giving it a solid structure for blogging and turning it into something highly functional. I confess that AI was used significantly in improving my Hugo theme. It was my first time ever using an AI agent to do something outside of a browser. For various reasons, I chose to work with Claude AI for this project, and it helped me accomplish milestones that had been clearly defined in my mind for a long time. I wanted to create a theme that was still useful for me, but had the broad appeal of any basic blogging tool or engine out there today. And I believe I achieved that, together with AI assistance, my pedantic review patterns, and an OCD-like obsession with my design vision. My hope is that eventually more people than just me can benefit from it.

Of course, a beautifully optimized, custom-themed blog is still just an empty vessel if you don’t write. And to say a lot has happened in my life since 2024 would be an understatement. The last twenty-three months had much to teach me in holding profound grief and incredible joy at the same time.

The Hardest Goodbyes 🔗

The heaviest reality of this past year was a prolonged season of caregiving that culminated in back-to-back losses. Right before Christmas in December 2023, my mother was diagnosed with cholangiocarcinoma, better known as bile duct cancer. Throughout 2024, my sister and I walked alongside her through her cancer journey, doing everything we could to support her. Alongside this, my maternal grandmother’s health was steadily declining due to the onset of dementia.

The emotional and physical toll of managing both of their needs is why I spent so much time away from work throughout 2025, and why my availability became so unpredictable. Ultimately, we faced an unimaginable timeline: my mother passed away in September 2025, and then one month later, in October 2025, my grandmother also passed.

Toward the end of 2025, after they were both gone, I slowly but steadily began the process of climbing out and getting caught up on everything. Throughout all of this, my sister was my absolute rock. Even now, my sister and I are still dealing with the long-term ripple effects and the heavy administrative burden of navigating probate court and managing an estate. Walking through this long, heavy aftermath as partners with my sister means everything to me. I could not navigate this season of life without her.

Finding “Home” Across an Ocean 🔗

On the opposite end of the emotional spectrum, my life expanded in the best way possible: I married the love of my life and muse of my soul. In November 2025, my wife and I began the next chapter of life together. She is currently living and working in Germany. Most of our time since then has been spent navigating the unique complexities of our union. This includes what is usually a simple question for most married couples: where to live.

Because international immigration is a notoriously slow and complex machine, our life is currently a transatlantic hybrid. Right now, while I permanently reside in Georgia, USA, my time is shuffled between the USA, being with my wife in Germany, and traveling for work.

While we are managing the distance for now, our biggest ongoing project is my official relocation to Germany. The exact timeline is fluid, but our hope & prayer is to celebrate the winter holidays in Germany together this year as residents. I look forward to sharing more about this process as it unfolds. (Including any potential trauma of migrating from temperate, warm Georgia to somewhere much colder most of the year.)

The Weight of Context Switching 🔗

Between the flights, the time zones, and my day job at Red Hat supporting Fedora, my brain is regularly forced into a relentless state of context switching.

The “Execution Mode” I use to navigate probate court, resolve medical bills, and execute an estate actually uses the exact same back-office muscles I use to manage budgets and plan events for Fedora. The hardest part lately has not been a lack of passion, but the sheer volume of threads I am holding. I am constantly shifting gears between my work at Red Hat and Fedora, then to coordinating international immigration, and then to the immediate realities of life, like trying to figure out when a technician can fix the broken outdoor air-conditioning unit at my house in the middle of a workday.

If you have noticed me working odd, irregular, or even borderline unhealthy hours lately, that is why. Work is not necessarily an escape from the grief; it is one engine that keeps me moving. So, that is a part of my coping mechanism. But feeling spread this thin has also been a wake-up call that I need to delegate more, reduce the number of hats I am wearing, and focus on delivering deeper, higher-quality work on fewer things.

The Anchor and The Code 🔗

When I am dropping plates and feeling completely drained, someone might wonder why I keep showing up to work. For me, it was always about Fedora. I do not mean this as a humble brag, because I understand it is not this way for everyone. But for me, Fedora was always more than a paycheck; Fedora is the people and community bonds. Getting to build a free and Open Source operating system that aligns with my values, alongside a community I genuinely love, is what anchors me here.

That same drive to build and organize is the reason I took on this massive blog migration. Admittedly, I have some deep-seated OCD-like tendencies. Creating structure is another way I cope with a world that often feels entirely out of my control. During my mother’s and grandmother’s health declines, the volume of incoming paperwork was overwhelming. It was an endless stream of letters, bills, hospital discharge packets, and insurance statements.

To manage it, I accidentally built a massive, semantic digital library. I ended up purchasing one of the best Linux-compatible HP digital scanners on the market to handle the influx of paper. I became incredibly efficient at scanning stacks of paper, writing rules to sort and filter emails, sorting and categorizing PDFs, and developing strict file-naming patterns so everything was easily searchable. It sounds novel, but keeping the physical paper stacks from taking over my own space gave me a tangible sense of peace. So, organizing the things I can control gives me the confidence to leap in and handle the chaotic, uncontrollable moments when they arrive.
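For illustration only, the kind of strict file-naming pattern described above can be enforced with a few lines of Python. The `YYYY-MM-DD_category_description.pdf` scheme and folder layout here are my own invention, not the author’s actual system:

```python
import re
from pathlib import Path

# Hypothetical naming pattern: YYYY-MM-DD_category_short-description.pdf
NAME_RE = re.compile(r"^(\d{4}-\d{2}-\d{2})_([a-z]+)_([a-z0-9-]+)\.pdf$")

def sort_scans(inbox: Path, library: Path) -> list[Path]:
    """File each well-named scan under library/<category>/<year>/."""
    moved = []
    for pdf in sorted(inbox.glob("*.pdf")):
        m = NAME_RE.match(pdf.name)
        if not m:
            continue  # leave oddly named files for manual review
        date, category, _description = m.groups()
        dest_dir = library / category / date[:4]
        dest_dir.mkdir(parents=True, exist_ok=True)
        dest = dest_dir / pdf.name
        pdf.rename(dest)
        moved.append(dest)
    return moved
```

The point is less the code than the invariant it enforces: anything matching the pattern is filed automatically, and anything that does not match stays visible in the inbox for a human to look at.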

Plus, if I am being completely honest, I am exhausted by the WordPress ecosystem altogether. I had long looked forward to canceling my expensive WordPress hosting service and the various other subscriptions and fees tied to running WordPress. What I did not expect to find while working on this project, however, was a spark of joy for creation that I had not felt in a long time. My childhood and adolescence were filled with a curious desire to make things that were helpful and useful. This is perhaps what nudged me toward computer science and information technology, because these were domains I could understand. I confess feeling mixed emotions that this rediscovery of the joy of creation came with AI assistance. At the same time, this is a project that had been on my list for several years, and it “pays off” a lot of technical debt. I look forward to maintaining and hosting my website here, and rediscovering my writing voice. (And I can use Vim to write blog posts now too, hooray!)

My creative engineering spark is still very much alive.

Taking it One Day at a Time 🔗

It has been twenty-three months of extreme migrations—digital, geographical, and emotional. The dust is not all settled yet, and I am still finding my steady footing. But now that my new blog engine is finally running, I am excited to share more of the journey, the code, and whatever else comes next.

(One more yak shaved.)

The History of Planeta Libre

Posted by Rénich Bon Ćirić on 2026-04-07 06:00:00 UTC

Do you remember Planeta Linux México? Those were the days, my friend. Today I got nostalgic thinking about how that space brought together all of us who were into free software more than two decades ago. Planeta Libre is not just a blog aggregator; honestly, it is the continuation of a community effort that refuses to die.

Origins and Nostalgia

The project was born out of that damned nostalgia for the golden years. During the early 2000s, that space was the essential meeting point. As a member of the Fedora community, I always valued the unique interaction that sprang up between developers, users, and enthusiasts. It was pure fire!

Note

It was on that original Planeta that I had the honor of meeting and spending time with great luminaries of the Mexican scene.

Figures like Gunnar Wolf, whose blog Nice Grey Life has been a constant reference, and many others who shaped what we now understand as the free software ecosystem in our country. There were several others too. Honestly, we owe them a lot.

The Mission to Carry On

As the years go by, platforms change and spaces are sometimes sadly lost. But the need for that community "pulse" is still there, very much alive. Planeta Libre emerged as a personal attempt to recover that spark, to keep alive the conversation we started so many years ago. Don't you think we need more of that these days?

Technical Evolution and Community Support

Initially, the project began its technical journey under the planeta.libre domain, using the OpenNIC network. It was a statement of principles: a free space on a free network. Badass!

Shortly afterward, we received an awesome boost, something crazy: Octavio Álvarez (alvarezp), an undisputed pillar of our community (known for his work in Gultij and in projects like Debian and LibreOffice), decided to donate the official domain planetalibre.org.

Tip

This generous donation not only gave us a more stable home; it validated all the effort. Thank you, Octavio!

Today, Planeta Libre is built with modern technology (Crystal and Kemal), aiming to be fast, secure, and, above all, a faithful mirror of what our community creates day by day. Not to mention that it leaves everyone in the dust with its processing speed, which is under 100 µs. Micro... as in millionths of a second.

Acknowledgements

This project exists thanks to some badass people:

The original Planeta Linux México community:
For setting the example and providing the initial space we missed so much.
The OpenNIC folks:
Without those folks, honestly, none of this would come together. It's great that they exist and lend us a hand.
Octavio Álvarez (alvarezp):
For his generosity in donating the domain and his lasting technical commitment to Mexico. May it be many, many... MANY more years. ;D
The Fedora Community:
For being the engine of my own technical and community growth. Fedora rules!

Let's keep writing this history together, because it's just getting good again!

Matrix server maintenance

Posted by Fedora Infrastructure Status on 2026-04-06 11:15:00 UTC

Element Matrix Services is performing scheduled maintenance on our matrix server (fedora.im).

Affected Services:

  • chat.fedoraproject.org
  • fedora.im
  • matrix services

Migrating your DNS to Cloudflare: lessons learned and pitfalls to avoid

Posted by Guillaume Kulakowski on 2026-04-06 08:00:14 UTC
In this article, I detail the migration of my DNS to Cloudflare: configuration, CDN setup, certificate management with Traefik, and a look back at the problems encountered (ACME, SSH, mTLS). A hands-on experience report with the pitfalls to avoid.

kurbu5: MIT Kerberos plugins in Rust

Posted by Alexander Bokovoy on 2026-04-04 19:10:00 UTC

For a couple of years, Andreas Schneider and I have been working on a project we call the ‘local authentication hub’: an effort to use the Kerberos protocol to track authentication and authorization context for applications, regardless of whether the system they run on is enrolled into a larger organizational domain or is standalone. We aim to reuse the code and experience we got while developing Samba and FreeIPA over the past twenty years.

Local authentication hub

The local authentication hub relies on a Kerberos KDC available on demand on each system. We achieved this by allowing MIT Kerberos to communicate over UNIX domain sockets. On Linux systems, systemd allows processes to be started on demand when someone connects to a UNIX domain socket, and MIT Kerberos 1.22 has support for this mode.

A KDC accessible over a UNIX domain socket is not very useful in itself: it is only available within the context of a single machine (or a single container, or pod, if UNIX domain sockets are shared across multiple containers). Otherwise, it is a fully featured KDC with its own quirks. And we can start looking at what could be improved based on the enhanced context locality we have achieved. For example, a KDB driver can see host-specific network interfaces and thus be able to react to requests such as host/<ip.ad.dr.ess>@LOCALKDC-REALM dynamically—something that a centrally-managed KDC would only do through statically registered service principal names (SPNs), which are a pain to update as machines move across networks.

Adding support for dynamic features means new code needs to be written. MIT Kerberos is written in C, so our choices are either to continue writing in C or to integrate with whatever new language we choose. Initially, we kept the local KDC database driver written in C and decided to build the infrastructure we need in Rust. The end goal is to have most bits written in Rust.

The local KDC database isn’t supposed to handle millions of principal entries, but even for millions of them, MIT Kerberos has a pretty good default database driver built on LMDB: klmdb. We wanted to get out of the data store business and instead focus on higher-level logic. Thus, we made the same change I made in Samba around 2003 for virtual file system modules: we introduced support for stackable KDB drivers. This is also a part of the MIT Kerberos 1.22 release: a KDB driver implementation can ask the KDC to load a different KDB driver and choose to delegate some requests to it. The local KDC driver is using klmdb for that purpose.

With the database handled for us by klmdb, we focused on the local KDC-specific logic. We wanted to dynamically discover user principals from the operating system so that administrators do not need to maintain separate databases for them. systemd provides a userdb API to query such information over a varlink interface (also available over a UNIX domain socket) in a structured way, using JSON format. Thus, the Kirmes project was born. Kirmes is a Rust data library backed by the userdb API. It handles varlink communication through the wonderful Zlink library and exposes both asynchronous and synchronous access to user and group information.
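To make the shape of that userdb exchange concrete, here is a minimal hand-rolled sketch of a varlink call in Python (stdlib only). The socket path and the io.systemd.UserDatabase method name follow systemd's documented userdb interface; Kirmes itself speaks this protocol through the Zlink library in Rust, not through anything like this client:

```python
import json
import socket

def build_varlink_call(method: str, parameters: dict) -> bytes:
    """Encode a varlink method call: one JSON object terminated by a NUL byte."""
    return json.dumps({"method": method, "parameters": parameters}).encode() + b"\0"

def query_userdb(username: str,
                 path: str = "/run/systemd/userdb/io.systemd.Multiplexer") -> dict:
    """Ask systemd-userdbd for a user record over its varlink UNIX socket.

    Requires a running systemd-userdbd; the multiplexer socket path above is
    the one systemd documents for aggregated userdb queries.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(path)
        sock.sendall(build_varlink_call(
            "io.systemd.UserDatabase.GetUserRecord",
            {"userName": username, "service": "io.systemd.Multiplexer"},
        ))
        # Read until the NUL byte that terminates the JSON reply.
        reply = b""
        while not reply.endswith(b"\0"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
    return json.loads(reply.rstrip(b"\0").decode())
```

The structured JSON user record that comes back is exactly what the KDC-side driver needs: it can be queried on demand, per machine, with no separate principal database to maintain.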

The local KDC database driver prototype used the Kirmes C API. We demonstrated it at FOSDEM 2025: a user lookup is done over varlink, and if a user is present on the system, their Kerberos key is then looked up in klmdb using a specially-formatted userdb:<username> principal. You still need to handle those keys somehow, but there is a way to avoid that: use RADIUS.

Pre-authentication

A bit of historical reference. In 2012, Red Hat collaborated with MIT to introduce a KDC-side implementation of RFC 6560 (the OTP pre-authentication mechanism; at that point implemented in a proprietary solution by the RSA corporation). This mechanism allowed the KDC to get a hint out of a KDB driver and ask a RADIUS server to authenticate the credentials provided by the Kerberos client. Unlike traditional Kerberos symmetric keys, in this case, the client is sending a plain-text credential over the Kerberos protocol, and this credential can be forwarded to the RADIUS server. The plain-text nature of the RADIUS credential requires the use of a secure communication channel, and a good part of RFC 6560 relies on Flexible Authentication Secure Tunneling (FAST, RFC6113), where a pre-existing Kerberos ticket is used to encrypt the content of that tunnel.

Since ~2013, FreeIPA has used this mechanism to provide multi-factor authentication mechanisms: HOTP/TOTP tokens, RADIUS proxying to remote servers, the OAuth2 device authorization grant flow, and FIDO2 tokens. The list of mechanisms can be extended, as long as the model fits into the somewhat constrained Kerberos exchange flow. FreeIPA handles all communication from the KDC side via a local UNIX domain socket-activated daemon, ipa-otpd, which performs a user principal lookup and then decides on the details of how that user will be authenticated.

For the local KDC case, we used a similar approach but wrote a simplified version, localkdc-pam-auth, which uses PAM to authenticate user credentials. It works well and allows for a drop-in replacement: once the local KDC is set up, users defined on the system will automatically be able to receive Kerberos tickets, with no need to change any passwords or migrate their credentials into the Kerberos KDC. All we need now is the business logic to guide the KDC to use the OTP pre-authentication mechanism so that our RADIUS ‘proxy’ (localkdc-pam-auth) gets activated. This logic is implemented and will be available in the first localkdc release soon.

API bindings

But back to the KDC side. As mentioned above, our goal was to write the local KDC database driver in a modern, safe language. Interfacing Rust with the MIT Kerberos KDC means building an interface that allows aligning code on both sides. This is what this blog is actually about (sorry for the long prelude…): how to make an MIT Kerberos KDB driver in Rust.

Today I published Kurbu5, a project that aims to provide these API bindings to Rust. The name is a transliteration of “krb5” into Mesopotamian cuneiform phonology: Kurbu-ḫamšat-qaqqadī—”The Blessed Five-Headed One”.

Creating API bindings is tedious work: there are many interfaces, each representing multiple functions and structures. MIT Kerberos has 12 interfaces which altogether expose roughly 117 methods that plugin authors implement, backed by around 70 supporting types (data structures passed into and out of those methods). It all sounds like a Tolkien tale: nine interfaces for core Kerberos functionality (checking password quality, mapping hostnames to Kerberos realms, mapping Kerberos principals to local accounts, selecting which credential cache to use, handling pre-authentication on both the client and server side, enforcing KDC policy, authorizing PKINIT certificates, and auditing events on the KDC side), the database backend interface, and two administrative interfaces. This is something that could be automated with agentic workflows—which I did to allow a parallel porting effort. The resulting agent instructions are useful artifacts in themselves: they show how to work when porting MIT Kerberos C code to Rust.

The result is split over several Rust crates to allow targeted reuse. The bulk of the code lives in three crates. The core Kerberos plugin crate (kurbu5-rs) is the largest at around 12,600 lines. The database backend crate (kurbu5-kdb-rs) follows at 5,600 lines, and the administration crate (kurbu5-kadm5-rs) at 3,100 lines. The remaining crates—the proc-macro derives and the raw FFI sys crates—are much smaller, with the sys crates being almost trivially thin (the KDB and kadm5 ones are under 40 lines each, since they mostly just re-export bindings from the main sys crate).

All crates are available on crates.io and share the same MIT license as the original MIT Kerberos.

  • kurbu5-sys — Raw FFI bindings to the MIT Kerberos libkrb5 and KDB plugin API
  • kurbu5-derive — Proc-macro derives for kurbu5-rs non-KDB plugin interfaces
  • kurbu5-rs — Safe, idiomatic Rust API for writing MIT Kerberos non-KDB plugin modules
  • kurbu5-kdb-sys — KDB plugin API re-export — thin wrapper over kurbu5-sys adding libkdb5 linkage
  • kurbu5-kdb-derive — Proc-macro derive for kurbu5-kdb-rs KDB driver plugins
  • kurbu5-kdb-rs — Safe, idiomatic Rust API for writing MIT Kerberos KDB driver plugins
  • kurbu5-kadm5-sys — KADM5 plugin API bindings — links libkadm5srv_mit and re-exports kurbu5-sys types
  • kurbu5-kadm5-derive — Proc-macro derives for kurbu5-kadm5-rs KADM5_AUTH and KADM5_HOOK plugin interfaces
  • kurbu5-kadm5-rs — Safe, idiomatic Rust API for writing MIT Kerberos KADM5_AUTH and KADM5_HOOK plugin modules

In the localkdc project, we use kurbu5 to build a KDB driver and provide our audit plugin. We also have an experimental re-implementation of the OTP pre-authentication mechanism, both client and KDC sides, that was used to test interoperability with MIT Kerberos versions. The core of the KDB driver is ~520 lines of heavily documented Rust code, mostly handling business logic.

misc fedora bits first week of april 2026

Posted by Kevin Fenzi on 2026-04-04 18:18:28 UTC
Scrye into the crystal ball

A somewhat quiet week in fedora land this time, which is nice, as it allows for catching up on planned work. Of course there was the usual flow of day to day items too.

DeploymentConfig to Deployment

Long ago OpenShift used a custom object called 'DeploymentConfig' to define how to deploy applications. After a while it was deprecated in favor of the normal k8s 'Deployment' object. We have a bunch of apps using the old DeploymentConfig and we wanted to migrate them to the new Deployment.

To be clear, this is just a deprecation right now; it's not been removed from OpenShift yet, but we wanted to get things moved sooner rather than later.

So, Pedro did all the heavy lifting here and created pull requests for all our apps to move them.

I spent some time this last week merging those and then doing the dance to change the existing app over, which roughly was:

  • merge pull request

  • delete DeploymentConfig

  • run ansible to deploy the Deployment

  • check that everything was redeployed and working correctly.
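The dance above can be sketched as a small wrapper script. The app, namespace, and playbook names below are placeholders, and the exact oc/ansible invocations used in Fedora's infrastructure will differ:

```python
import subprocess

def migrate_app(app: str, namespace: str, playbook: str,
                dry_run: bool = False) -> list[list[str]]:
    """Replay the DeploymentConfig -> Deployment dance for one app.

    Assumes the pull request converting the manifests is already merged
    (step one). All names here are hypothetical placeholders.
    """
    commands = [
        # Delete the old DeploymentConfig so its controllers go away.
        ["oc", "-n", namespace, "delete", "deploymentconfig", app],
        # Re-run ansible so it deploys the new Deployment object.
        ["ansible-playbook", playbook, "--tags", app],
        # Check that the new Deployment rolled out and is healthy.
        ["oc", "-n", namespace, "rollout", "status", f"deployment/{app}"],
    ]
    for cmd in commands:
        print("+", " ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)
    return commands
```

A `rollout status` check at the end is one way to catch the kind of selector and route/service breakage mentioned below before moving on to the next app.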

I managed to find a few apps in staging that were not working or deployed correctly and had to fix those up along the way. We also hit some issues with selectors not getting updated, so applications didn't have correct routes/services.

There are a few more of these to do, but I will probably wait until after the freeze is over, as they could be disruptive.

Fedora 44 Final freeze

Speaking of freeze, we started the Fedora 44 Final infrastructure freeze. So far things are looking smooth for composes and such.

There are a few blockers currently, but hopefully we can get them sorted out and get a good release soon.

koji packaging

koji 1.36.0 came out last week, and I spent a bit of time this week looking at modernizing the Fedora spec file to better match the Python packaging guidelines and also to enable tests.

My somewhat hacky pr is at https://src.fedoraproject.org/rpms/koji/pull-request/29

It's nice to run the tests and have things not throwing deprecation warnings.

Upcoming blogs and vacation

I have some posts planned which I need to actually write up sometime. One on my solar system, which is mostly going great, and another fun one on open source monitoring of blood glucose levels. Perhaps this weekend.

I'm going to be largely away from the internet the week of April 20th. I'm going on a family vacation to Hawaii. :) I have never been there, so it should be pretty fun. I'll probably check emails from time to time, but I will definitely not be around day to day on matrix/slack/irc/whatever.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116347877029785741

Community Update – Week 14 2026

Posted by Fedora Community Blog on 2026-04-03 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora project.

Week: 31 Mar – 03 Apr 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • [Ansible] Perform mapping for Fedora Science teams and groups [Commit]
  • [Badges] Review: Refactor badge builder to WTForms and Jinja YAML output [Rejected]
  • [Badges] Fix add_tag view to not accept request as parameter [Suggested]
  • [Badges] Replace history_limit with page-based pagination on user profile [Suggestion A] [Suggestion B]
  • [Badges] Return hashed email instead of raw email in JSON responses [Suggestion A] [Suggestion B]
  • Communishift namespace request for draft-share
  • Fix ipsilon links still pointing to pagure.io instead of forge.fedoraproject.org
  • [Monitoring]
    •  200 more items deleted from Nagios (400 to go, 1800 originally)
    • PostgreSQL checking is in place and seems to be solid
    • New monitoring for 5xx errors in HAProxy is finding some interesting things 
    • Detection of crashlooping OCP pods is in the works

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

  • Documentation updates, regular operations tickets, business as usual.
  • Final freeze is in effect now, which means Release Candidate compose requests will start coming shortly.
  • Fedora 44 final release is currently scheduled for Tue 2026-04-14.

AI

This is the summary of the work done regarding AI in Fedora.

QE

This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

  • General release validation and bug reporting work throughout the whole week, in preparation for F44 Final.

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

EPEL

This team is working on keeping EPEL running and helping package things.

  • ongoing planning with openjdk maintainer regarding koji targets and tags
  • routine package maintenance (python-pydbus, python-edgegrid, llvm19, llvm20)
  • documentation refinement (EPEL minor EOL SOP)

List of new releases of apps maintained by I&R Team

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update – Week 14 2026 appeared first on Fedora Community Blog.

Building and pushing container images on Codeberg CI

Posted by Christof Damian on 2026-04-02 22:00:00 UTC
Codeberg logo
Logo: Codeberg e.V.CC BY-SA 4.0

As part of moving my services to EU-based infrastructure, I’ve been migrating away from GitHub to Codeberg, a non-profit code hosting platform based in Germany. One of the things I needed was a CI pipeline to build a container image and push it to the Codeberg container registry for my wildfires project (which also posts to @catfires@rls.social). It took a few tries to get right, so here’s what works for me.