
Fedora People

GDB source-tracking breakpoints

Posted by Fedora Magazine on 2026-04-15 16:33:47 UTC

One of the main abilities of a debugger is setting breakpoints.
GDB, the GNU Project debugger, now introduces an experimental feature
called source-tracking breakpoints, which tracks the source line a
breakpoint was set on.

Introduction

Imagine you are debugging: you set breakpoints on a bunch of
source lines, inspect some values, and get ideas about how to change your
code. You edit the source and recompile, but keep your GDB session running
and type run to reload the newly compiled executable. Because you changed
the source, the breakpoint line numbers shifted. Right now, you have to
disable the existing breakpoints and set new ones.

GDB source-tracking breakpoints change this situation. When the feature is
enabled and you set a breakpoint using file:line notation, GDB captures a
small window of the surrounding source code. When you recompile
and reload the executable, GDB adjusts any breakpoints whose lines shifted
due to source changes. This is especially helpful in ad-hoc debug sessions
where you want to keep debugging without manually resetting breakpoints
after each edit-compile cycle.

Setting a source-tracking breakpoint

To enable the source-tracking feature, run:

(gdb) set breakpoint source-tracking enabled on

Set a breakpoint using file:line notation:

(gdb) break myfile.c:42
Breakpoint 1 at 0x401234: file myfile.c, line 42.

GDB now tracks the source around this line. The info breakpoints command
shows whether a breakpoint is tracked:

(gdb) info breakpoints
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0x0000000000401234 in calculate at myfile.c:42
        source-tracking enabled (tracking 3 lines around line 42)

Now edit the source — say a few lines are added above the breakpoint,
shifting it from line 42 to line 45. After recompiling and reloading the
executable with run, GDB resets the breakpoint to the new line and displays:

Breakpoint 1 adjusted from line 42 to line 45.

Run info breakpoints again to confirm the new location:

(gdb) info breakpoints
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0x0000000000401256 in calculate at myfile.c:45
        source-tracking enabled (tracking 3 lines around line 45)

As you can see, GDB updated the breakpoint line to match the new location.

Limitations

The matching algorithm requires an exact string match of the captured source
lines. Whitespace-only changes or trivial reformatting of the tracked lines
will confuse the matcher and may cause the breakpoint not to be found.

GDB only searches within a 12-line window around the original location. If
the code shifted by more than that — for example, because a large block was
inserted above — the breakpoint will not be found. GDB will keep the
original location and print a warning:

warning: Breakpoint 1 source code not found after reload, keeping original
location.

Source context cannot be captured when a breakpoint is created pending
(e.g., with set breakpoint pending on), because no symbol table is available
yet. When the breakpoint later resolves to a location, it will not be
source-tracked.

Source tracking is not supported for ranged breakpoints (set with
break-range).

Breakpoints on inline functions that expand to multiple locations are not
source-tracked, as each location may have moved differently.

How to try this experimental feature

This feature is not yet available in a stable GDB release. There are two
ways to try it.

Install from COPR (for Fedora users)

A pre-built package is available through a COPR repository. Enable it and
install:

sudo dnf copr enable ahajkova/GDB-source-tracking-breakpoints
sudo dnf upgrade gdb

To disable the repository again after testing:

sudo dnf copr disable ahajkova/GDB-source-tracking-breakpoints

The COPR project page is at:
https://copr.fedorainfracloud.org/coprs/ahajkova/GDB-source-tracking-breakpoints/

Build from source

  1. Clone the GDB repository:
    git clone git://sourceware.org/git/binutils-gdb.git
    cd binutils-gdb
  2. Download and apply the patch from the upstream mailing list:
    https://sourceware.org/pipermail/gdb-patches/2026-April/226349.html
  3. Build GDB:
    mkdir build && cd build
    ../configure --prefix=/usr/local
    make -j$(nproc) all-gdb
  4. Run the newly built GDB:
    ./gdb/gdb

Conclusion

GDB source-tracking breakpoints are an experimental feature currently under
upstream review and not yet available in a stable GDB release. The GDB
manual's section on setting breakpoints, at
https://sourceware.org/gdb/current/onlinedocs/gdb.html/Set-Breaks.html,
covers all available breakpoint commands. If you try this feature and hit
any unexpected behavior, feedback is very welcome: you can follow and
respond to the upstream patch discussion on the GDB mailing list at
https://sourceware.org/pipermail/gdb-patches/2026-April/226349.html

Streaming syslog-ng data to your lakehouse using OpenTelemetry

Posted by Peter Czanik on 2026-04-15 12:12:22 UTC

Version 4.11.0 of syslog-ng contains contributions from Databricks related to OAuth2 authentication. Recently, they published a blog about how this enables their customers to send logs to their data lake using syslog-ng and the OpenTelemetry protocol.

The syslog-ng project received two contributions from Databricks in the last weeks of 2025. The first made the existing OAuth2 support generic and extensible, so it can be used anywhere, not just with Microsoft Azure (Azure compatibility was, of course, preserved). The second pull request built on the first and enabled OAuth2 support for gRPC-based destinations, such as OpenTelemetry, Loki, BigQuery, PubSub, and ClickHouse. These changes were released as part of syslog-ng 4.11.0. You can read more about them in the release notes at https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.11.0
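For readers who want a feel for what such a setup looks like, here is a rough sketch of a syslog-ng configuration sending logs over OpenTelemetry. The opentelemetry() destination exists in syslog-ng 4.x, but the OAuth2 option names below are illustrative assumptions, not verified 4.11 syntax; consult the release notes linked above and the syslog-ng documentation for the actual options.

```
# Sketch only: the endpoint and the OAuth2 option names are
# illustrative assumptions; check the syslog-ng 4.11 docs for
# the real syntax.
source s_local {
  system();
  internal();
};

destination d_otel {
  opentelemetry(
    url("ingest.example.com:4317")
    # Assumed shape of the new generic OAuth2 support:
    # auth(oauth2(token-url("https://login.example.com/oauth2/token")
    #             client-id("my-client") client-secret("...")))
  );
};

log {
  source(s_local);
  destination(d_otel);
};
```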

Besides an excellent overview about syslog-ng, the related Databricks blog also provides step-by-step instructions on how to use syslog-ng with their product. You can read it at: https://community.databricks.com/t5/technical-blog/streaming-syslog-ng-data-to-your-lakehouse-powered-by-zerobus/ba-p/153979

syslog-ng logo

Originally published at https://www.syslog-ng.com/community/b/blog/posts/streaming-syslog-ng-data-to-your-lakehouse-using-opentelemetry

The best way to setup a kanban board

Posted by Ben Cotton on 2026-04-15 12:00:00 UTC

The best way to set up a kanban board is…whatever way works best for you. I have at least two distinct styles of board setup across three tools (yes, I have a problem) because that’s what works best for me. How you set up your boards is a matter of style. The most important thing is to set them up in such a way that you’ll actually use them — an unused board is full of lies and can cause confusion with your collaborators. Since I often get asked for help with this, it seems worth writing down some considerations.

Direction of flow

Most tools I’ve used assume a left-to-right flow. I recently switched to using a right-to-left flow after reading Philippe Bourgau’s post on the topic. Using right-to-left means you’re starting with the stuff that’s in progress instead of the backlog. You “pull” cards instead of “pushing” them. I made the suggestion at work, and folks seem to generally like the change. The main problem is that a lot of tools I’ve used treat the leftmost column as the default starting point, so the card creation experience involves an extra click or two.

Number of columns

This is one that I’ve often seen people overthink. In the simplest configuration, you have “to do”, “doing”, and “done.” That’s often enough. Simplicity is good. On the other end, one of the boards I use at work has something like a dozen columns. Not every card flows through every column, so it’s not as wild as it sounds. Much of the column sprawl is because the tool we use doesn’t support swimlanes. I would normally not suggest double-digit columns, but it works well in this case.

A good rule for deciding whether a column is necessary: if it never has any cards, or cards only live in it briefly, it isn’t worthwhile.

Examples

Here are a few examples of how I have different boards set up.

The meal planning board my wife and I use to plan the week’s dinner has three columns: ideas, need ingredients, and planned. “Ideas” is the backlog, “need ingredients” means we intend to cook it but first we have to go grocery shopping, and “planned” is ready to cook.

The board I use to track laundry has five columns: dirty (the backlog), ready (sorted), washing (duh), drying (also duh), and folding (triple duh). The folding column can probably go away, but sometimes the basket sits for a few hours or until the next day.

My board for Duck Alignment Academy has five columns (plus one). “New” is for ideas that I haven’t really fleshed out yet. “Ready” is for posts that I could sit down and write. “In progress” is for posts that I am currently writing. “Scheduled” is for completed posts waiting to publish (I wish this had more cards in it). “Done” is for posts that have been published. There’s an extra “archived” column that I move cards to after I send that month’s Duck Alignment Academy newsletter. (I have a column because the tool doesn’t support archiving cards directly.)

Extra fields/metadata

Most tools let you apply labels, add due dates, create custom fields, and so on. I’ve certainly made use of that when setting up boards, but I find myself not really paying attention to it most of the time. There’s often an urge to design a system so that everything will be perfectly organized. But in the same way that a backup you never test restores from is not reliable, metadata that you only write is not useful. Metadata is easy to over-invest in, because making it useful requires getting everyone using the board to buy in and then also to have a reason to use it. I wrote a similar piece about issue labels earlier this year.

In my experience, it’s almost always better to ignore metadata when initially creating a board. Only when you can identify a concrete problem that you’re actually experiencing should you add metadata.

A great example of this is in my Duck Alignment Academy board. When I was first creating the board, I was also creating the website. I had cards for tasks like domain registration, hosting setup, page creation, and so on. In the four-plus years since I launched the site, the cards have almost exclusively been blog posts. The “blog post” label that I created doesn’t add a lot of value. What does add value are two custom fields I added: “URL” and “description.” I put the post’s URL and the excerpt (that gets used for social media preview and the like) into those two fields so that later on when I go to add them to the newsletter or share them elsewhere, they’re available and consistent.

Another use that is actually useful is on my meal planning board. I have labels for the primary protein in a meal, which helps me see at a glance if we’ve planned chicken five days in a row.

Work in progress limits

If your tool supports setting work in progress limits, I recommend that you do. If nothing else, it forces you to be honest about what you’re actually working on and what you’ve set aside for one reason or another. I’ve found that WIP limits seem to work better on single-user boards, but I haven’t used them much on collaborative boards.

Automate and integrate

The more your board does the boring work for you, the more likely you are to keep it up to date. If it supports auto-archiving cards when they’re complete, set that up. If you can tie it in to the tools you’re already using, do that. (The kanban board available in GitHub issues works pretty well in that regard.) Make the board a hub of your work and you’ll get use out of it. Make the board a chore that you have to go update and you won’t.

This post’s featured photo by airfocus on Unsplash.

The post The best way to setup a kanban board appeared first on Duck Alignment Academy.

Browser wars

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Browser wars


brown fox on snow field

Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.

I was invited to give a popular lecture at the University departments open day, which is part of the festival. This is the second time in a row that I have been invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology caused by the fall of the economy during the 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.

The follow-up

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

The follow-up


people watching concert

Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.

Open-source magic all around the world

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Open-source magic all around the world


woman blowing sprinkle in her hand

Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).

Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Joys and pains of interdisciplinary research


white and black coffee maker

Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012 University of Rijeka became NVIDIA GPU Education Center (back then it was called CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphical processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, finding paths in graphs, and other mathematical operations.

What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

What is the price of open-source fear, uncertainty, and doubt?


turned on red open LED signage

Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of the topic, but instead, present the author's view on the state-of-the-art of a particular field.

The first of the two articles advocates for open source and open data. It describes the Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s to exchange quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second article questions the open-source software development practice, advocating the usage and development of proprietary software instead. I will dissect and counter some of the key points from the second article below.

On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

On having leverage and using it for pushing open-source software adoption


Open 24 Hours neon signage

Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended the 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced an acronyms page, among other things. During a coffee break, there was a discussion of the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop which somehow remained untold, and I wanted to tell it at some point. One of the attendants, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, the reported bugs get fixed quicker, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.

AMD and the open-source community are writing history

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

AMD and the open-source community are writing history


a close up of a cpu chip on top of a motherboard

Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards fully open-source drivers on Linux. AMD did not walk alone; they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told on the Freenode channel #radeon that this is not the case, and I found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement is hardly surprising: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, a public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.

I am still not buying the new-open-source-friendly-Microsoft narrative

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

I am still not buying the new-open-source-friendly-Microsoft narrative


black framed window

Photo source: Patrick Bellot (@pbellot) | Unsplash


This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.

Even though the open sourcing of a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company who believe that free and open source is the way to go, but it still looks like a change just on the periphery.

None of the projects they have open-sourced so far are the core of their business. Their latest version of Windows is no friendlier to alternative operating systems than any version before it, and one could argue it is even less friendly due to more Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.

Free to know: Open access and open source

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Free to know: Open access and open source


yellow and black come in we're open sign

Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash


Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.

Q&A with Vedran Miletić

In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.

The movements that promote and cultivate this vibrancy are probably most frequently associated with terms "Open access" and "open source". In order to learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science -- researcher, developer and teacher, currently working in computational chemistry, and a free and open source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation and education.

The academic and the free software community ideals

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

The academic and the free software community ideals


book lot on black wooden shelf

Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash


Today I vaguely remembered an occasion in 2006 or 2007 when a guy from academia, doing something with Java and Unicode, posted to a mailing list related to free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, yet he had filed a patent on the algorithm.

Celebrating Graphics and Compute Freedom Day

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Celebrating Graphics and Compute Freedom Day


stack of white and brown ceramic plates

Photo source: Elena Mozhvilo (@miracleday) | Unsplash


Hobbyists, activists, geeks, designers, engineers, etc. have always tinkered with technologies for their own purposes (in early personal computing, for example). And social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies, and freely sharing the know-how through the Internet and, more recently, through social media. Open-source software, and more recently hardware, is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2


an old padlock on a wooden door

Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash


Inf2 is a web server at University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.

HTTPS had been enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt coming into public beta, we decided to join the movement to HTTPS.

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials


open book lot

Photo source: Patrick Tomasso (@impatrickt) | Unsplash


Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.

While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.

Fly away, little bird

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Fly away, little bird


macro-photography blue, brown, and white sparrow on branch

Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash


The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.

Mirroring free and open-source software matters

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Mirroring free and open-source software matters


gold and silver steel wall decor

Photo source: Tuva Mathilde Løland (@tuvaloland) | Unsplash


Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website, used to speed up access for users geographically close to it and to reduce the load on the original website. Content distribution networks (CDNs), a newer concept and perhaps more familiar to younger readers, serve the same purpose but do it transparently to the user. With a mirror, the user sees explicitly which mirror is being used, because its domain differs from the original website's; with a CDN, the domain stays the same and the DNS resolution (which is invisible to the user) selects a different server.

Free and open-source software was distributed via (FTP) mirrors, usually residing in the universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon after mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of the software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today.

Markdown vs reStructuredText for teaching materials

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Markdown vs reStructuredText for teaching materials


blue wooden door surrounded by book covered wall

Photo source: Eugenio Mazzone (@eugi1492) | Unsplash


Back in the summer of 2017, I wrote an article explaining why we used Sphinx and reStructuredText to produce teaching materials rather than a wiki. In addition to recommending Sphinx as the solution to use, it was general praise for generating static HTML files from Markdown or reStructuredText.

This summer I converted the teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted, so I ended up cooking up my own Python script that converted the specific dialect of reStructuredText used for the contents of the group website and fixed a myriad of inconsistencies in writing style that had accumulated over the years.

Don't use RAR

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Don't use RAR


a large white tank

Photo source: Tim Mossholder (@ctimmossholder) | Unsplash


I sometimes joke with my TA Milan Petrović that his usage of RAR does not imply that he will be driving a rari. After all, he is not Devito rapping^Wsinging Uh 😤. Jokes aside, if you search for "should I use RAR" or a similar phrase on your favorite search engine, you'll see articles like 2007 Don't Use ZIP, Use RAR and 2011 Why RAR Is Better Than ZIP & The Best RAR Software Available.

Should I do a Ph.D.?

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Should I do a Ph.D.?


a bike is parked in front of a building

Photo source: Santeri Liukkonen (@iamsanteri) | Unsplash


Tough question, and the one that has been asked and answered over and over. The simplest answer is, of course, it depends on many factors.

As I started blogging at the end of my journey as a doctoral student, the topic of how I selected the field and ultimately decided to enroll in the postgraduate studies never really came up. In the following paragraphs, I will give a personal perspective on my Ph.D. endeavor. Just like other perspectives from doctors of not that kind, it is specific to the person in the situation, but parts of it might apply more broadly.

Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years


a fountain in the middle of a town square

Photo source: Jahanzeb Ahsan (@jahan_photobox) | Unsplash


This month we had Alumni Meeting 2023 at the Heidelberg Institute for Theoretical Studies, or HITS for short. I was very glad to attend this whole-day event and reconnect with my former colleagues as well as researchers currently working in the area of computational biochemistry at HITS. After all, this is the place and the institution where I worked for more than half of my time as a postdoc, where I started regularly contributing code to GROMACS molecular dynamics simulator, and published some of my best papers.

My perspective after two years as a research and teaching assistant at FIDIT

Posted by Vedran Miletić on 2026-04-15 07:28:05 UTC

My perspective after two years as a research and teaching assistant at FIDIT


human statues near white building

Photo source: Darran Shen (@darranshen) | Unsplash


My employment as a research and teaching assistant at the Faculty of Informatics and Digital Technologies (FIDIT for short), University of Rijeka (UniRi), ended last month with the expiration of my time-limited contract. This moment marks almost two full years spent at this institution, and I think it is a good time to look back at everything that happened during that time. Inspired by recent posts by the PI of my group, I decided to write my perspective on a time that I hope is just the beginning of my academic career.

Matrix server maintenance

Posted by Fedora Infrastructure Status on 2026-04-14 11:15:00 UTC

Element Matrix Services is performing scheduled maintenance on our matrix server (fedora.im).

Affected Services:

  • chat.fedoraproject.org
  • fedora.im
  • matrix services

Things I Read: 1-13 April 2026

Posted by Brian (bex) Exelbierd on 2026-04-14 11:00:00 UTC

I’ve long been inspired by the recently read lists published by my friends Ben Cotton and Vadim Rutkovsky. In that spirit, here’s a list of things I found interesting, or that stuck with me for other reasons.

This period was marked by some trips down memory lane. The Supreme Court of the US is always in the news, these days usually for less than pleasant reasons, and this time it was joined by the fast-food burger chain Wendy’s.

Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.

Memory Lane

The Surprising Reason Neil Gorsuch Has Been So Good on Native Rights

I searched for this piece after a random Mastodon toot referred to Justice Gorsuch as being "ride or die" for Native Americans. I knew I wouldn't find anything that would change my opinion of his rulings, but what I did find was still unexpected. The original framers of the Constitution were very specific (weirdly so for the time period) about Native rights, and as an originalist Gorsuch supports this view. Even a broken clock is right twice a day ...

What the Hell Happened to Wendy’s?

It’s also increasingly difficult for any brand to keep anyone’s attention in a manic and incoherent media ecosystem where the aura that a CEO projects while biting into a burger on camera is seemingly more important than the quality of the burger itself.

Wendy's has always had amazing chili (and they provide hot sauce!) and a spicy chicken sandwich that couldn't be beaten. While I wouldn't miss their fries, I don't honestly care if the McDonald's CEO looks good eating a hamburger or not.

LLMs and AI

THE 2028 GLOBAL INTELLIGENCE CRISIS

What if our AI bullishness continues to be right...and what if that’s actually bearish?

This think piece surfaced some ideas about how success with LLMs, and to a degree AGI, could still turn into a downward cycle. I hadn't thought about the threat of disintermediation multiplied by the loss of per-seat revenue. It also makes recent comments by Microsoft about LLM agents needing seat licenses make more sense.

I used AI. It worked. I hated it.

the tool requires expertise to validate, but its use diminishes expertise and stunts its growth. How does one become an expert? There are no shortcuts; there is only continuous hard work and dedication.

There is a continuous drumbeat about the loss of the expertise required to operate LLMs effectively, paired with a simultaneous insistence that the tools are failures unless completely untrained people can use them without trouble. This isn't a new problem. Almost all technological innovation comes at the cost of higher abstraction levels that require a deeper understanding of the space while also eliminating some core knowledge of how things work. I am firmly of the opinion that using any tool effectively requires training, and that we have a long history of doing a bad job training people on computing tools. But I also don't believe upping the abstraction level automatically means we can't train a new generation to be effective. Airline pilots, accountants and machinists all did it. Why can't developers?

Other Technology

Self hosting as much of my online presence as practical

[M]y ISP doesn’t guarantee a static IPv4 [therefore I am] running a Wireguard link between a box that sits in a cupboard in my living room and the smallest OVH instance I can

My ISP literally routes me no inbound internet except in response to an outbound request. It is simultaneously refreshing and frustrating. I had long debated trying to pull off a link like this with Tailscale but hadn't bothered. I may now be inspired to try.

I tested three Windows laptops in the MacBook Neo’s price range — there’s no contest

Despite years of rumors, the MacBook Neo still seemed to take the Windows world by surprise.

Is literally anyone surprised by the lack of actual consumer focus in the low-end laptop market? The problem here has never been the "Windows Tax." It's the unwillingness to do the integration work required to make the hardware and the OS behave like a coherent product. OEMs and Microsoft could do this, but the OEMs won't force their suppliers to meet that bar.

The Nail Test: Why this $54 billion innovation is terrifying Western auto executives

If you can reproduce the failure one hundred times, identically, then and only then have you understood the mechanism.

It's interesting to see failure-based TDD in the industrial world. I knew China was big on EVs and batteries, but this specific engineering drive was one I hadn't seen written up so directly before. It is a Fast Company article so the quality may drift, but it's a solid read.

Fun

🎓 On Geldings and the 'Natural' Social Order of Horses

It also, if you think about it for more than a second, cannot possibly be how natural horse societies work.

Almost everything Eleanor writes is amazing, and this is a great piece on something I never thought I'd be interested in.

Even if horses aren't your thing, her comments on 'bachelor bands' and the place they hold in non-breeding male horse life are worth reading. The introduction of geldings by humans changed how horses interact and socialize. It also reminded me how badly we've warped the ways young men are taught to socialize in an age where garbage like the Manosphere is given a platform.

Test Your Body Awareness

If you're like me and a person approaching a certain age, well ... you need to know. One of the best personal changes I have made is going to the gym twice a week for almost the last year. I am privileged and able to have a personal trainer. It isn't just the accountability, it is the ability to have an expert in a domain where I am not an expert do the thinking. She directs, I lift.

Nominate Your Fedora Heroes: Mentor and Contributor Recognition 2026

Posted by Fedora Magazine on 2026-04-14 09:30:00 UTC
Mentor and Contributor Recognition 2026 - Generated using Gemini Nano Banana Pro by Akashdeep Dhar

It’s time to show our appreciation for the amazing contributors who help shape the Fedora community.

The Fedora Project thrives through the devotion, guidance, and tireless drive of the contributors who consistently show up. From developing test cases to onboarding contributors, from technical writing to coordinating events, these vital champions ensure that the community flourishes. In coordination with the Fedora Mentor Summit 2026, we will return to Flock To Fedora 2026 to announce the winners. The wiki page linked below reflects the deep gratitude and careful thought behind this community recognition program.

As we prepare to spotlight exceptional mentors and contributors across the Fedora Project, we invite you to help us appreciate the amazing contributors who help shape the community. Whether it is a veteran mentor who helped you begin your journey or a contributor whose efforts have truly reshaped the community’s landscape, now is the moment to celebrate them! Discover more about the nomination guidelines and submit your entry using the links provided below:


👉 Find more information here: https://fedoraproject.org/wiki/Contributor_Recognition_Program_2026
👉 Submit your nominations here: https://forms.gle/mBAVKw4qLu14R5YY7

🗓 Deadline: 15th May 2026

Let us appreciate the amazing contributors who help shape the community. Your nomination could be the recognition that enables them to do more – and a moment of achievement for the entire community.

Rustbucket

Posted by Tony Asleson on 2026-04-14 01:38:18 UTC

Sorting a terabyte of data in the late 1990s meant serious hardware, serious planning, and probably a serious budget approval process. Today you can do it on a workstation before lunch. I wanted to know how fast, so I wrote rustbucket to find out.

It’s a two-phase external sort implemented in Rust, built around io_uring, and named for reasons that should be obvious to anyone who has spent time with either Rust or storage systems.
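The two-phase shape (sort bounded runs, then stream-merge them) can be sketched with nothing but coreutils. This is a toy illustration of the algorithm only, not rustbucket's Rust/io_uring implementation:

```shell
# Two-phase external sort sketch using coreutils (illustrative only).
workdir=$(mktemp -d)
seq 100000 | shuf > "$workdir/input"            # some unsorted input data

# Phase 1: split the input into bounded runs and sort each run on its own
# (in a real external sort, each run is sized to fit in memory).
split -l 10000 "$workdir/input" "$workdir/run."
for run in "$workdir"/run.*; do
    sort -n "$run" -o "$run"
done

# Phase 2: a single streaming merge of all the sorted runs.
sort -n -m "$workdir"/run.* > "$workdir/output"

# Verify against a direct in-memory sort.
sort -n "$workdir/input" | cmp -s - "$workdir/output" && result=OK || result=FAIL
echo "$result"
rm -rf "$workdir"
```

The interesting engineering in a tool like rustbucket lives in how phase 1 overlaps I/O with sorting, which is where io_uring earns its keep.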

RHEL 10 (GNOME 47) Accessibility Conformance Report

Posted by Felipe Borges on 2026-04-13 10:24:49 UTC

Red Hat just published the Accessibility Conformance Report (ACR) for Red Hat Enterprise Linux 10.

Accessibility Conformance Reports document how our software measures up against accessibility standards like WCAG and Section 508. Since RHEL 10 is built on GNOME 47, this report is a good look at how our stack handles everything from screen readers to keyboard navigation.

Getting a desktop environment to meet these requirements is a huge task, and it’s only possible because of the work done by our community in projects like Orca, GTK, Libadwaita, Mutter, GNOME Shell, the core apps, and more.

Kudos to everyone in the GNOME project who cares about improving accessibility. We all know there’s a long way to go before desktop computing is fully accessible to everyone, but we are working on it.

If you’re curious about the state of accessibility in the 47 release or how these audits work, you can find the full PDF here.

misc fedora bits second week of april 2026

Posted by Kevin Fenzi on 2026-04-12 17:14:00 UTC
Scrye into the crystal ball

Another saturday and... oh wait, it's sunday! I was away almost all day yesterday (morning at https://beaverbarcamp.org/ and afternoon/evening visiting family), so this will be a day late. :)

This week we were still in Fedora 44 final freeze (we canceled the go/no-go on thursday because there were still unaddressed blockers), so there was a lot of catching up on old issues, processing docs and other pull requests, and the like.

There were a few things that stood out however:

Matrix bots learn new tricks

Diego wrote up a pull request to adjust our matrix bot to point to forge.fedoraproject.org for things that have moved there from pagure.io ( https://github.com/fedora-infra/maubot-fedora/pull/150 ), so I merged it, figured out how to cut a release there, and figured out how to deploy it first to staging and then to production.

So, now !epel, !ticket and !releng should all work for those trackers, and !forge org repo should work as a generic pointer to any forge project.

Hopefully this will make meetings and discussion on matrix nicer.

Fixed a websocket proxy issue with openqa

We had to move openqa behind anubis after the scrapers discovered it and made it unusable. Unfortunately, openqa has a mode for updating test screens that uses websockets, and those were not correctly passing through anubis, so that functionality was broken.

I was going to go look at the apache docs and see if I could track down what needed to be set, but decided to just ride the AI wave and ask an AI agent about it.

It snarfed in the config, thought about it for a bit, then spewed out a solution. The solution was largely for older apache versions (but I didn't tell it what apache version we were running), but at the end it correctly noted that on newer versions passing "upgrade=websocket" on the proxy command line would fix it.

It did. It definitely saved me time poking through the apache docs.
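For reference, on Apache httpd 2.4.47 and later that fix is a single parameter on the ProxyPass line. This is only a sketch: the backend host and path below are made up, not the actual Fedora openqa config.

```apache
# Apache >= 2.4.47: mod_proxy_http can upgrade matching requests to a
# websocket tunnel in place via the "upgrade" parameter, with no separate
# mod_proxy_wstunnel rewrite rules needed.
ProxyPass        "/liveviewhandler/" "http://openqa-backend.example.org/liveviewhandler/" upgrade=websocket
ProxyPassReverse "/liveviewhandler/" "http://openqa-backend.example.org/liveviewhandler/"
```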

a few new builders soon

We have had a number of machines we moved from our old datacenter that we wanted to repurpose as builders sitting around. It's not been very high priority to get them set up, but what better time than a freeze to get them online.

So, I got 3 of them ready, which involved updating a bunch of firmware on them, installing them, configuring networking, etc.

Will need a small freeze break to add them into ansible and finish them up, but then there should be 3 more buildhw-x86 builders.

Fedora 44 upgrades

Since I was catching up on things, I decided to go ahead and upgrade my main server and its vmhost to fedora 44 this morning.

Everything went super fast and painlessly, aside from one issue with matrix-synapse (the f43 package is newer than the f44 one, so it would not work with my config/database). For now I just "downgraded" (or is that "sidegraded"?) to the f43 one. It seems to have a pretty nasty tangle of rust crate version changes, so it might not be too easy to sort out quickly.

Family Vacation time

The week after next (the week of april 20th) I will be away all week. I might look in on matrix/email some, but don't count on it. I'll be on a family vacation in hawaii. Please file tickets and be kind to my co-workers, who are perfectly capable of handling anything in my absence. :)

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116392979078747195

HowTo: How to install and pin a specific version with Flatpak

Posted by Rénich Bon Ćirić on 2026-04-10 20:30:00 UTC

Has it ever happened to you that an update to your favorite software comes out and you just can't jump to it? Well, it turns out Bitwig Studio reached version 6 and, well, I was out of work and short on cash for the upgrade.

I use it via Flatpak because, honestly, I myself initiated, proposed, and donated the first implementation on Flathub. Anyway, I had to stay on version 5.x, and I had never needed to pin a version with Flatpak before.

I asked an LLM how to do it (why would I lie to you?) and it gave me the answer. It's this easy:

Step by step

Check the log:

First, see which versions are available in the Flathub history.

flatpak remote-info --log flathub com.bitwig.BitwigStudio
Install the desired version:

Copy the hash of the commit you are interested in and install it.

sudo flatpak update --commit=00535d779d2ebead55f129a406ed819064b7d3a28bd638aa25a0c8dda919197e com.bitwig.BitwigStudio
Pin it (mask):

This is the most important part, so it doesn't get updated by mistake the next time you run a general update.

flatpak mask com.bitwig.BitwigStudio

Nothing to it, right? With this you can rest easy knowing your workflow won't break because of an update you didn't ask for... or rather, one you can't afford. ;D

On proprietary software

Honestly, I'm not a big fan of this sort of thing. I always recommend staying up to date but, with proprietary software that you pay to use, I had no other choice. The Bitwig folks don't maintain older versions in any easy way; you have to keep up with them whether you like it or not.

Note

Many will consider using non-free software on Fedora a sham or a betrayal. In my case, I spent several years trying to record my songs with free software. I never managed to. Not for lack of software but, more than anything, for lack of infrastructure.

To record with pure open source like Ardour, which is a beauty, you need a pro studio: mics, a drum kit, a drummer, and everyone having time at once. Since I'm terrible at coordinating people, if I don't do things in the moment, they never get done. Bitwig Studio solves that for me because it already ships the whole arsenal: samples, instruments, and very high-quality FX.

At least I don't have to use Windows. That's much more than I had in the 2000s with good old Renoise or LMMS. Workflow matters a lot, and although GNU/Linux has incredible plugins (like the Calf ones in LV2), putting together a stable session was an impractical mess.

Conclusion

There you have it, folks. I hope this mini-howto saves someone from fighting with Flatpak versions. Keep making music!

Community Update – Week 15 2026

Posted by Fedora Community Blog on 2026-04-10 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora project.

Week: 06 – 10 Apr 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

QE

This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

  • General release validation and bug reporting work throughout the whole week, in preparation for F44 Final.
  • Automated a new openQA test for basic IPv4 and IPv6 connectivity (https://forge.fedoraproject.org/quality/os-autoinst-distri-fedora/pulls/504)
  • Blockerbugs migration to Forge almost ready for staging deployment
  • Ongoing collaboration with OSCI team to improve and rationalize generic test pipelines
  • Contributed Kiwi and Koji logging improvements to Pungi as part of compose-critical script work

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

UX

This team is working on improving user experience, providing artwork, usability,
and general design services to the Fedora project.

  • Madeline submitted a proposal for All Things Open

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update – Week 15 2026 appeared first on Fedora Community Blog.

⚙️ PHP version 8.4.20 and 8.5.5

Posted by Remi Collet on 2026-04-10 05:48:00 UTC

RPMs of PHP version 8.5.5 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.20 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.26 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

 

⚙️ PHP version 8.4.19 and 8.5.4

Posted by Remi Collet on 2026-03-13 05:32:00 UTC

RPMs of PHP version 8.5.4 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.19 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.26 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

 

Friday Links 26-12

Posted by Christof Damian on 2026-04-09 22:00:00 UTC
Open sandwich with vegetarian coronation chicken

Two weeks of links. A lot of AI-related articles again. I liked the paper about using chatbots in leadership, and the one about the YAML of the mind. If you want a podcast, check out the one about night trains.

Quote of the Week
The technologies we use to try to “get on top of everything” always fail us, in the end, because they increase the size of the “everything” of which we’re trying to get on top.
Four Thousand Weeks
Oliver Burkeman

Leadership

We-ness: The secret cause of Psychological Safety [Podcast] - very interesting deep dive into how feeling like a team can increase psychological safety.

Securing your servers behind Cloudflare: Authenticated Origin Pulls

Posted by Guillaume Kulakowski on 2026-04-09 18:24:00 UTC
Cloudflare protects your services effectively... unless your server remains directly reachable. In this article, we set up Authenticated Origin Pulls to guarantee that only requests coming from Cloudflare can reach your infrastructure, with two levels of security.

Handling a PR disaster for your project

Posted by Ben Cotton on 2026-04-08 12:00:00 UTC

I want to say up front that the point of this post is not to disparage Trivy or its maintainers. They’ve had a rough few weeks and I feel for them. I only discuss Trivy as a recent example of a bad day at the office.

The saying “there’s no such thing as bad publicity” always felt a little gross to me. There are a lot of reasons you might get noticed that are bad, and it should feel bad to do bad things. But maybe there’s something to it. If you handle a PR disaster well, you might come out ahead (although you’d still probably prefer to not deal with it in the first place).

Bad publicity is good?

As you may know, the Trivy project fell victim to an attack last month. The compromise affected not just Trivy, but its sponsoring company and many downstream projects and companies. It was — and continues to be — a big deal.

Out of curiosity, I decided to see whether people were switching from Trivy to other projects. I decided to use GitHub stars as a proxy for interest. Yes, GitHub stars are meaningless. But I figured a relative change might indicate interest in alternatives. Much to my surprise, Trivy’s star count increased pretty dramatically post-compromise. Syft and cdxgen, which seem to be the main alternatives for SBOM generation, saw no such bump. Of course, this doesn’t necessarily mean that people aren’t shifting away from Trivy. But I expected to see the opposite of what the star counts show.
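If you want to reproduce this kind of check, star counts are exposed as the `stargazers_count` field of the public GitHub REST API repository endpoint. Here is a small sketch; the helper name `stars` and the repo slug in the comment are my own illustration, not anything from the post:

```shell
# Pull the stargazers_count field out of a GitHub REST API repo payload.
stars() {
    grep -o '"stargazers_count": *[0-9]*' | grep -o '[0-9]*$'
}

# With network access you would pipe a live response through it, e.g.:
#   curl -s https://api.github.com/repos/aquasecurity/trivy | stars
# Offline demonstration with a canned payload:
echo '{"full_name":"example/repo","stargazers_count":1234}' | stars   # prints: 1234
```

Sampling this periodically and plotting the relative change is all the original comparison needed.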

Graph of GitHub stars over time for three projects. Trivy shows a steady increase with a sharp uptick in the last few months. Syft and cdxgen show slower-but-still-steady increases with no recent changes.

The past few weeks have been a PR nightmare for Trivy, and I’m sure it’s been entirely unpleasant for the maintainers. I don’t wish this on anyone, but if you find yourself in this kind of situation, take heart. There’s at least some indication that your project can survive this kind of catastrophic event.

It’s worth noting that it may just be too early to tell what the future holds for Trivy. While they’ve received a lot more stars and attention, there haven’t been any commits to main in three weeks. People are still opening issues and pull requests, so it may just be that the maintainers are still focused on cleanup. I hope that the project comes back stronger and more secure, but time will tell.

Disaster recovery

If your project has a PR nightmare, stay calm. If you have the resources to bring in a crisis PR expert, do that and ignore everything else I say after this. Most likely, though, you don’t have the resources to bring in a pro. So here’s my amateur advice:

  • Mitigate the damage. Take whatever technical steps are necessary to keep the problem from getting worse. This may include rotating credentials, disabling CI pipelines, locking issues, or turning off services. This step is the most important because you don’t want the problem getting worse while you’re handling communication.
  • Create an advisory if appropriate. Do this if there’s a vulnerability in your software that you’re addressing. You can skip this for other flavors of disaster. If you’re on GitHub, you can follow the GHSA creation process. Other hosting platforms may have their own system. As a last resort, you can request a CVE at https://cveform.mitre.org/.
  • Communicate honestly and factually. Tell people what happened and, if applicable, how to know if they’re affected and how to address it. Don’t try to hide uncomfortable facts. Be honest, but don’t speculate or guess. You can always update as new facts come to light, but you can’t un-say things.
  • Don’t feel obligated to correct everyone. As the story spreads, people will get things wrong. Perhaps aggressively so. You don’t need to put your energy into correcting every misstatement posted by some rando online. If news outlets or influential people misstate facts, you can give them the correct version. If they draw inferences that you disagree with, it’s probably best to let it go.

This post’s featured photo by Ante Hamersmit on Unsplash.

The post Handling a PR disaster for your project appeared first on Duck Alignment Academy.

🎲 PHP version 8.4.20RC1 and 8.5.5RC1

Posted by Remi Collet on 2026-03-27 06:38:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.5.5RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.4.20RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.5.4RC1 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.1 and EPEL-10.1
  • EL-9 packages are built using RHEL-9.7 and EPEL-9
  • EL-8 packages are built using RHEL-8.10 and EPEL-8
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC version is usually the same as the final version (no change accepted after RC, except for security fixes).
  • versions 8.4.19 and 8.5.4 are planned for March 12th, in 2 weeks.

Software Collections (php84, php85)

Base packages (php)

Howto: a very nice way of organizing your bash env variables and settings

Posted by Rénich Bon Ćirić on 2026-04-08 01:30:00 UTC

So, I know you have ~/.bashrc plus either ~/.bash_profile or ~/.profile in your installation. We all do. And many apps we use on a daily basis read them. Plus, you like your aliases, your own env variables and maybe even one or two bash functions you like to use.

That creates a problem: everything lives in a single file (or two), and you end up with a mess. It's hard to read, hard to organize, and a single mistake renders the file useless. Well, maybe not useless, but you get the idea. A 500-line config file is a bad idea, don't you think?

So, which solutions are there?

Easy, just npm install .... Yeah, right. Who wants more terrible TypeScript/EcmaScript code in their environment? I mean, really. And it comes in droves! Huge amounts of it everywhere! Come on, we can do better with plain old Bash.

The trick is actually much simpler. Here's how I do it:

Filename: ~/.bashrc

## Load any supplementary scripts from ~/.bashrc.d/
if [[ -d $HOME/.bashrc.d ]]; then
   for f in "$HOME"/.bashrc.d/*.bash; do
      [[ -f "$f" ]] && source "$f"
   done

   unset -v f
fi

Note

This snippet goes into your ~/.bashrc. It checks whether the directory ~/.bashrc.d exists and then loops through every file ending in .bash to "source" it, which evaluates those files in your current session.
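To convince yourself the loop behaves, you can exercise the same pattern against a throwaway directory. The file names and variables below are hypothetical, and `.` is the POSIX spelling of `source`:

```shell
# Demonstrate the drop-in directory pattern in a temporary directory.
demo=$(mktemp -d)
mkdir "$demo/bashrc.d"
printf 'GREETING=hola\n'  > "$demo/bashrc.d/10-greeting.bash"
printf 'FAREWELL=adios\n' > "$demo/bashrc.d/20-farewell.bash"

# Same loop as in ~/.bashrc, pointed at the demo directory.
# Files load in glob order, so numeric prefixes control ordering.
for f in "$demo"/bashrc.d/*.bash; do
    [ -f "$f" ] && . "$f"
done
unset -v f

echo "$GREETING $FAREWELL"   # prints: hola adios
rm -rf "$demo"
```

The numeric prefixes (10-, 20-) are optional but handy: globs expand in sorted order, so they give you deterministic load order for free.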

Now, you can do the same for your bash profile, which is the preferred place to put things like environment variables and such.

Filename: ~/.bash_profile

## Load any supplementary scripts from ~/.bash_profile.d/
if [[ -d ~/.bash_profile.d ]]; then
   for f in ~/.bash_profile.d/*.bash; do
      [[ -f "$f" ]] && source "$f"
   done

   unset -v f
fi

This is a neat trick, if I may say so. It enables me to create independent files for different things. For example, I like my $GOPATH env variable to point to ~/src/go. Also, I like to add Go's bin directory to my $PATH. Easy enough, right?

But, where do I put it?

Main Differences:
~/.bashrc:
Read every time you open a new interactive terminal. Perfect for aliases and prompts.
~/.bash_profile:
Read only once upon login. Best for environment variables that should be inherited by all child processes.

Tip

If you want to be able to overwrite your $PATH entries and expect them to persist between terminals without re-logging, putting the loading logic in ~/.bashrc is the way to go.

That said, I am putting my go.bash file in ~/.bashrc.d/go.bash:

# go settings
export GOPATH=$HOME/src/go
export PATH=$PATH:$GOPATH/bin

Now, it's as easy as opening a new terminal (I set up my terminal to use a login shell) or I can just source ~/.bash_profile. In Fedora, sourcing ~/.bash_profile will source ~/.bashrc if it exists anyway. ;D

One more customization I really like:

# ~/.bash_profile.d/ls.bash
alias ls='ls --color=auto --group-directories-first'

That one makes directories appear before files when using ls. The --color=auto flag just keeps the default colors.

Conclusion

Keep your environment clean, dude. Organizing your configs in .d directories makes it much easier to manage and debug. No more messy files!


Fedora Code of Conduct Report 2025

Posted by Fedora Community Blog on 2026-04-07 12:00:00 UTC

The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project’s social fabric.

This post covers the reports received during the 2025 calendar year. The purpose of publishing the annual Code of Conduct Report is to provide transparency, insight, and awareness into the health of the community.

How’d it go in 2025

In 2025, we had a slight uptick in engagement from 2024. 14 reports were opened in 2025, compared to 11 reports in 2024. While we saw some members step down this year, the Fedora Code of Conduct Committee (CoCC) also refreshed its membership with new voices. Jef Spaleta, Chris Idoko, and Ankur Sinha were nominated this year to maintain responsiveness and steer our community standards forward.

The majority of issues reported in 2025 were handled through “shoulder taps” and formal reach-outs, rather than disciplinary or emergency actions requiring bans or long-term suspensions. While reports did increase from 2024 to 2025, the difference is negligible. The Committee expects this number to fluctuate annually, as world events and international conflicts often impact the social dynamics of communities like ours.

You can see the full data from 2025 in the table below.

Community Health Assessment

After six years of reporting, looking back at our journey from the modernization of the Code of Conduct to where we stand today, it is encouraging to see how much we have grown together. Yearly reports indicate that while our community continues to have conflicts (as any healthy community ought to), incident severity has continued to decrease across reports spanning 2020 through 2025. We attribute this consistent reduction in “opened reports” and “CoC interventions” to the maturity of our self-moderation culture.

A significant part of this positive atmosphere is thanks to the refreshed CoC guidelines established by Marie Nordin in 2021, which successfully addressed the peak in incidents that occurred during the COVID-19 pandemic. These were roadmaps for how we want to treat each other. Seeing these guidelines in action in our reports shows that they are working as hoped. We feel the community is in a healthy place at this time, but a healthy committee is one that never stops listening. We would love to hear your thoughts, feedback, and suggestions on how we can continue to help our shared spaces feel safe, inclusive, and welcoming.

Year  Reports Opened  Reports Closed  Warnings Issued  Moderations Issued  Suspensions Issued  Bans Issued
2025        14              14               1                  2                   0              0
2024        11              11               1                  0                   1              0
2023        17              17               5                  3                   1              1
2022        21              24               6                  3                   0              0
2021        23              24               2                  1                   0              1
2020        20              16               8                  4                   2              0

Looking forward to 2026

If you witness or are part of a situation that violates Fedora’s Code of Conduct, please open a private report on the [Code of Conduct repo] or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.

Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn’t okay, and you don’t want to make a big deal… open that report anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.

Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day to day in our community. Keep it up, and keep being awesome Fedora, we <3 you!

About the Committee

The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Jef Spaleta; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community-nominated members. Jef Spaleta, Chris Onoja Idoko, and Ankur Sinha were nominated this year.

We’re incredibly grateful to Josh Berkus and Laura Santamaria for stepping up as term-limited members of the Fedora Code of Conduct Committee (CoCC). Their commitment ensured we had consistent coverage through September 30th, 2025, providing vital support until our newest nominees were fully onboarded and trained.

The post Fedora Code of Conduct Report 2025 appeared first on Fedora Community Blog.

One Day

Posted by Justin Wheeler on 2026-04-07 08:00:00 UTC

It has been a minute. If you look at my blog archives, my last post went up in 2024. Recently, I decided it was time for a massive digital renovation: I completely migrated this blog from WordPress to Hugo using my own theme.

Fortunately, I was able to meet my one key requirement. The migration was a complete one-to-one pairing from WordPress to Hugo. Every post I wrote between 2015 and 2024 made the jump intact. Even the images and URL schema! You can go back, browse the archives, and read a decade’s worth of my written word on my new site. Best of all, every old URL from my WordPress blog will seamlessly redirect to the new home here.

But I didn’t just move the content; I also built a custom Hugo theme from the ground up. (Because of course I did.) I began working on the theme over a year ago for my own site (this very one!). Originally I developed the code inside my own website, but I eventually moved the theme code into its own repository in June 2025. I then spent a lot of time in March working on my theme, giving it a solid structure for blogging and turning it into something highly functional. I confess that AI was used significantly in improving my Hugo theme. It was my first time ever using an AI agent to do something outside of a browser. For various reasons, I chose to work with Claude AI for this project, and it helped me accomplish clearly-defined milestones I had carried in my mind for a long time. I wanted to create a theme that was still useful for me, but had the broad appeal of any basic blogging tool or engine out there today. And I believe I achieved that through a combination of AI assistance, my pedantic review patterns, and an OCD-like obsession with my design vision. My hope is that eventually, more people than just me can benefit from it.

Of course, a beautifully optimized, custom-themed blog is still just an empty vessel if you don’t write. And to say a lot has happened in my life since 2024 would be an understatement. The last twenty-three months had much to teach me about holding profound grief and incredible joy at the same time.

The Hardest Goodbyes 🔗

The heaviest reality of this past year was a prolonged season of caregiving that culminated in back-to-back losses. Right before Christmas in December 2023, my mother was diagnosed with cholangiocarcinoma, better known as bile duct cancer. Throughout 2024, my sister and I walked alongside her through her cancer journey, doing everything we could to support her. Alongside this, my maternal grandmother’s health was steadily declining due to the onset of dementia.

The emotional and physical toll of managing both of their needs is why I spent so much time away from work throughout 2025, and why my availability became so unpredictable. Ultimately, we faced an unimaginable timeline: my mother passed away in September 2025, and then one month later, in October 2025, my grandmother also passed.

Toward the end of 2025, after they were both gone, I slowly but steadily began the process of climbing out and getting caught up on everything. Throughout all of this, my sister was my absolute rock. Even now, my sister and I are still dealing with the long-term ripple effects and the heavy administrative burden of navigating probate court and managing an estate. Walking through this long, heavy aftermath as partners with my sister means everything to me. I could not navigate this season of life without her.

Finding “Home” Across an Ocean 🔗

On the opposite end of the emotional spectrum, my life expanded in the best way possible: I married the love of my life and muse of my soul. In November 2025, my wife and I began the next chapter of life together. She is currently living and working in Germany. Most of the time since then has been spent navigating the unique complexities of our union. This includes what is usually a simple question for most married couples: where to live.

Because international immigration is a notoriously slow and complex machine, our life is currently a transatlantic hybrid. Right now, while I permanently reside in Georgia, USA, my time is shuffled between the USA, being with my wife in Germany, and traveling for work.

While we are managing the distance for now, our biggest ongoing project is my official relocation to Germany. The exact timeline is fluid, but our hope & prayer is to celebrate the winter holidays in Germany together this year as residents. I look forward to sharing more about this process as it unfolds. (Including any potential trauma of migrating from temperate, warm Georgia to somewhere much colder most of the year.)

The Weight of Context Switching 🔗

Between the flights, the time zones, and my day job at Red Hat supporting Fedora, my brain is regularly forced into a relentless state of context switching.

The “Execution Mode” I use to navigate probate court, resolve medical bills, and execute an estate actually uses the exact same back-office muscles I use to manage budgets and plan events for Fedora. The hardest part lately has not been a lack of passion, but the sheer volume of threads I am holding. I am constantly shifting gears between my work at Red Hat and Fedora, then to coordinating international immigration, and then to the immediate realities of life—like trying to figure out when a technician can fix the broken outdoor air-conditioning unit at my house in the middle of a workday.

If you have noticed me working odd, irregular, or even borderline unhealthy hours lately, that is why. Work is not necessarily an escape from the grief; it is one engine that keeps me moving. So, that is a part of my coping mechanism. But feeling spread this thin has also been a wake-up call that I need to delegate more, reduce the number of hats I am wearing, and focus on delivering deeper, higher-quality work on fewer things.

The Anchor and The Code 🔗

When I am dropping plates and feeling completely drained, someone might wonder why I keep showing up to work. For me, it was always about Fedora. I do not mean this as a humble brag, because I understand it is not this way for everyone. But for me, Fedora was always more than a paycheck; Fedora is the people and community bonds. Getting to build a free and Open Source operating system that aligns with my values, alongside a community I genuinely love, is what anchors me here.

That same drive to build and organize is why I took on this massive blog migration. I confess to some deep-seated OCD-like tendencies. Creating structure is another way I cope with a world that often feels entirely out of my control. During my mother’s and grandmother’s health declines, the volume of incoming paperwork was overwhelming. It was an endless stream of letters, bills, hospital discharge packets, and insurance statements.

To manage it, I accidentally built a massive, semantic digital library. I ended up purchasing one of the best Linux-compatible HP digital scanners on the market to handle the influx of paper. I became incredibly efficient at scanning stacks of paper, writing rules to sort and filter emails, sorting and categorizing PDFs, and developing strict file-naming patterns so everything was easily searchable. It sounds novel, but keeping the physical paper stacks from taking over my own space gave me a tangible sense of peace. So, organizing the things I can control gives me the confidence to leap in and handle the chaotic, uncontrollable moments when they arrive.

Plus, if I am being completely honest, I am exhausted by the WordPress ecosystem altogether. I had long looked forward to canceling my expensive WordPress hosting service and the various other subscriptions and fees tied to running WordPress. However, what I did not expect to find while working on this project was a spark of joy for creation that I had not felt in a long time. My childhood and adolescence were filled with a curious desire to make things that were helpful and useful. This is perhaps what nudged me in the direction of computer science and information technology, because these were domains I could understand. I confess feeling mixed emotions that this rediscovery of joy for creation came mixed with AI assistance. Yet at the same time, this is a project that had been on my list for several years, and it “pays off” a lot of technical debt. I look forward to maintaining and hosting my website here, and rediscovering my writing voice. (And I can use Vim to write blog posts now too, hooray!)

My creative engineering spark is still very much alive.

Taking it One Day at a Time 🔗

It has been twenty-three months of extreme migrations—digital, geographical, and emotional. The dust is not all settled yet, and I am still finding my steady footing. But now that my new blog engine is finally running, I am excited to share more of the journey, the code, and whatever else comes next.

(One more yak shaved.)

History of Planeta Libre

Posted by Rénich Bon Ćirić on 2026-04-07 06:00:00 UTC

Do you remember Planeta Linux México? Those were the days, my friend. Today I got nostalgic thinking about how that space brought together all of us who were into the free software scene more than two decades ago. Planeta Libre is not just a blog aggregator; honestly, it is the continuation of a community effort that refuses to die.

Origins and Nostalgia

The project was born out of that damn nostalgia for the golden days. During the early 2000s, that space was the essential meeting point. As a member of the Fedora community, I always valued the unique interaction that sprang up between developers, users, and enthusiasts. It was pure fire!

Note

It was on that original Planeta that I had the honor of meeting and spending time with great luminaries of the Mexican scene.

Figures like Gunnar Wolf, whose blog Nice Grey Life has been a constant reference, and many others who shaped what we now understand as the free software ecosystem in our country. There were several others. Honestly, we owe them a lot.

The Mission to Continue

As the years go by, platforms change and spaces sometimes get painfully lost. But the need for that community "pulse" is still there, very much alive. Planeta Libre emerged as a personal attempt to recover that spark, to keep alive the conversation we started so many years ago. Don't you think we need more of that these days?

Technical Evolution and Community Support

Initially, the project began its technical journey under the planeta.libre domain, using the OpenNIC network. It was a statement of principles: a free space on a free network. Pretty badass!

Shortly after, we got a great boost that was just crazy: Octavio Álvarez (alvarezp), an undisputed pillar of our community (known for his work in the Gultij and in projects like Debian and LibreOffice), decided to donate the official planetalibre.org domain.

Tip

This generous donation not only gave us a more stable home, it validated the whole effort. Thank you, Octavio!

Today, Planeta Libre is built with modern technology (Crystal and Kemal), aiming to be fast, secure, and above all a faithful mirror of what our community creates day by day. Not to mention that it leaves everyone in the dust with its processing speed, which is < 100 µs. Micro... as in millionths of a second.

Acknowledgments

This project exists thanks to some amazing people:

The original Planeta Linux México community:
For setting the example and giving us the original space we missed so much.
The OpenNIC folks:
Without those guys, honestly, none of this comes together. It is great that they exist and that they lend us a hand.
Octavio Álvarez (alvarezp):
For his generosity in donating the domain and his eternal technical commitment to Mexico. Here's to many, many... MANY more years. ;D
The Fedora Community:
For being the engine of my own technical and community growth. Fedora rules!

Let's keep writing this history together; things are just getting good again!

Matrix server maintenance

Posted by Fedora Infrastructure Status on 2026-04-06 11:15:00 UTC

Element Matrix Services is performing scheduled maintenance on our matrix server (fedora.im).

Affected Services:

  • chat.fedoraproject.org
  • fedora.im
  • matrix services

Migrating my DNS to Cloudflare: lessons learned and pitfalls to avoid

Posted by Guillaume Kulakowski on 2026-04-06 08:00:14 UTC
In this article, I walk through the migration of my DNS to Cloudflare: configuration, setting up the CDN, certificate management with Traefik, and a look back at the problems I ran into (ACME, SSH, mTLS). Concrete, hands-on feedback, with the pitfalls to avoid.

kurbu5: MIT Kerberos plugins in Rust

Posted by Alexander Bokovoy on 2026-04-04 19:10:00 UTC

For a couple of years, Andreas Schneider and I have been working on a project we call the ‘local authentication hub’: an effort to use the Kerberos protocol to track authentication and authorization context for applications, regardless of whether the system they run on is enrolled into a larger organizational domain or is standalone. We aim to reuse the code and experience we got while developing Samba and FreeIPA over the past twenty years.

Local authentication hub

The local authentication hub relies on a Kerberos KDC available on demand on each system. We achieved this by allowing MIT Kerberos to communicate over UNIX domain sockets. On Linux systems, systemd allows processes to be started on demand when someone connects to a UNIX domain socket, and MIT Kerberos 1.22 has support for this mode.
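As a rough illustration of the mechanism, systemd socket activation pairs a .socket unit with a service that inherits the already-bound listener. The unit names, paths, and flags below are invented for this sketch and are not the actual localkdc packaging:

```ini
# localkdc.socket -- systemd owns the UNIX domain socket and starts
# the service only when a client connects (hypothetical names/paths).
[Socket]
ListenStream=/run/localkdc/kdc.socket

[Install]
WantedBy=sockets.target

# localkdc.service -- the KDC receives the bound socket from systemd.
[Service]
ExecStart=/usr/sbin/krb5kdc -n
```

The -n flag keeps krb5kdc in the foreground, which is what a socket-activated, systemd-supervised service wants.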

A KDC accessible over a UNIX domain socket is not very useful in itself: it is only available within the context of a single machine (or a single container, or pod, if UNIX domain sockets are shared across multiple containers). Otherwise, it is a fully featured KDC with its own quirks. And we can start looking at what could be improved based on the enhanced context locality we have achieved. For example, a KDB driver can see host-specific network interfaces and thus be able to react to requests such as host/<ip.ad.dr.ess>@LOCALKDC-REALM dynamically—something that a centrally-managed KDC would only do through statically registered service principal names (SPNs), which are a pain to update as machines move across networks.

Adding support for dynamic features means new code needs to be written. MIT Kerberos is written in C, so our choices are either to continue writing in C or to integrate with whatever new language we choose. Initially, we kept the local KDC database driver written in C and decided to build the infrastructure we need in Rust. The end goal is to have most bits written in Rust.

The local KDC database isn’t supposed to handle millions of principal entries, but even for millions of them, MIT Kerberos has a pretty good default database driver built on LMDB: klmdb. We wanted to get out of the data store business and instead focus on higher-level logic. Thus, we made the same change I made in Samba around 2003 for virtual file system modules: we introduced support for stackable KDB drivers. This is also a part of the MIT Kerberos 1.22 release: a KDB driver implementation can ask the KDC to load a different KDB driver and choose to delegate some requests to it. The local KDC driver is using klmdb for that purpose.
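The stacking idea can be sketched in plain Rust. This is a conceptual illustration only: the trait and type names below are invented and are not the kurbu5 API.

```rust
// Conceptual sketch of a stackable KDB driver: the outer driver
// answers dynamic lookups itself and delegates everything else to
// an inner backend such as klmdb. (Names are hypothetical.)
trait KdbBackend {
    fn lookup(&self, principal: &str) -> Option<String>;
}

// Stand-in for the LMDB-backed default driver.
struct Klmdb;

impl KdbBackend for Klmdb {
    fn lookup(&self, principal: &str) -> Option<String> {
        // A real driver would query LMDB here.
        if principal == "alice@LOCALKDC-REALM" {
            Some("stored entry for alice".to_string())
        } else {
            None
        }
    }
}

// The local KDC driver stacked on top of another backend.
struct LocalKdc<B: KdbBackend> {
    inner: B,
}

impl<B: KdbBackend> KdbBackend for LocalKdc<B> {
    fn lookup(&self, principal: &str) -> Option<String> {
        // Answer host/<address> principals dynamically...
        if principal.starts_with("host/") {
            return Some(format!("dynamic entry for {principal}"));
        }
        // ...and delegate everything else to the stacked driver.
        self.inner.lookup(principal)
    }
}
```

The point of the pattern is exactly this split: dynamic, host-local logic up top, plain storage below.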

With the database handled for us by klmdb, we focused on the local KDC-specific logic. We wanted to dynamically discover user principals from the operating system so that administrators do not need to maintain separate databases for them. systemd provides a userdb API to query such information over a varlink interface (also available over a UNIX domain socket) in a structured way, using JSON format. Thus, the Kirmes project was born. Kirmes is a Rust data library backed by the userdb API. It handles varlink communication through the wonderful Zlink library and exposes both asynchronous and synchronous access to user and group information.
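For a sense of what Kirmes consumes, a user record returned over the userdb varlink interface is a JSON object roughly like this (the values are invented, and the real record format has many more optional fields):

```json
{
  "record": {
    "userName": "alice",
    "uid": 1000,
    "gid": 1000,
    "realName": "Alice Example",
    "homeDirectory": "/home/alice",
    "shell": "/usr/bin/bash"
  }
}
```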

The local KDC database driver prototype used the Kirmes C API. We demonstrated it at FOSDEM 2025: a user lookup is done over varlink, and if a user is present on the system, their Kerberos key is then looked up in klmdb using a specially-formatted userdb:<username> principal. You still need to handle those keys somehow, but there is a way to avoid that: use RADIUS.

Pre-authentication

A bit of historical reference. In 2012, Red Hat collaborated with MIT to introduce a KDC-side implementation of RFC 6560 (the OTP pre-authentication mechanism; at that point implemented in a proprietary solution by the RSA corporation). This mechanism allowed the KDC to get a hint out of a KDB driver and ask a RADIUS server to authenticate the credentials provided by the Kerberos client. Unlike traditional Kerberos symmetric keys, in this case, the client is sending a plain-text credential over the Kerberos protocol, and this credential can be forwarded to the RADIUS server. The plain-text nature of the RADIUS credential requires the use of a secure communication channel, and a good part of RFC 6560 relies on Flexible Authentication Secure Tunneling (FAST, RFC6113), where a pre-existing Kerberos ticket is used to encrypt the content of that tunnel.

Since ~2013, FreeIPA has used this mechanism to provide multi-factor authentication mechanisms: HOTP/TOTP tokens, RADIUS proxying to remote servers, the OAuth2 device authorization grant flow, and FIDO2 tokens. The list of mechanisms can be extended, as long as the model fits into the somewhat constrained Kerberos exchange flow. FreeIPA handles all communication from the KDC side via a local UNIX domain socket-activated daemon, ipa-otpd, which performs a user principal lookup and then decides on the details of how that user will be authenticated.

For the local KDC case, we used a similar approach but wrote a simplified version, localkdc-pam-auth, which uses PAM to authenticate user credentials. It works well and allows for a drop-in replacement: once the local KDC is set up, users defined on the system will automatically be able to receive Kerberos tickets, with no need to change any passwords or migrate their credentials into the Kerberos KDC. All we need now is the business logic to guide the KDC to use the OTP pre-authentication mechanism so that our RADIUS ‘proxy’ (localkdc-pam-auth) gets activated. This logic is implemented and will be available in the first localkdc release soon.

API bindings

But back to the KDC side. As mentioned above, our goal was to write the local KDC database driver in a modern, safe language. Interfacing Rust with the MIT Kerberos KDC means building an interface that allows aligning code on both sides. This is what this blog is actually about (sorry for the long prelude…): how to make an MIT Kerberos KDB driver in Rust.

Today I published Kurbu5, a project that aims to provide these API bindings to Rust. The name is a transliteration of “krb5” into Mesopotamian cuneiform phonology: Kurbu-ḫamšat-qaqqadī—”The Blessed Five-Headed One”.

Creating API bindings is tedious work: there are many interfaces, each representing multiple functions and structures. MIT Kerberos has 12 interfaces which altogether expose roughly 117 methods that plugin authors implement, backed by around 70 supporting types (data structures passed into and out of those methods). It all sounds like a Tolkien tale: nine interfaces for core Kerberos functionality (checking password quality, mapping hostnames to Kerberos realms, mapping Kerberos principals to local accounts, selecting which credential cache to use, handling pre-authentication on both the client and server side, enforcing KDC policy, authorizing PKINIT certificates, and auditing events on the KDC side), the database backend interface, and two administrative interfaces. This is something that could be automated with agentic workflows—which I did to allow a parallel porting effort. The resulting agent instructions are useful artifacts in themselves: they show how to work when porting MIT Kerberos C code to Rust.

The result is split over several Rust crates to allow targeted reuse. The bulk of the code lives in three crates. The core Kerberos plugin crate (kurbu5-rs) is the largest at around 12,600 lines. The database backend crate (kurbu5-kdb-rs) follows at 5,600 lines, and the administration crate (kurbu5-kadm5-rs) at 3,100 lines. The remaining crates—the proc-macro derives and the raw FFI sys crates—are much smaller, with the sys crates being almost trivially thin (the KDB and kadm5 ones are under 40 lines each, since they mostly just re-export bindings from the main sys crate).

All crates are available on crates.io and share the same MIT license as the original MIT Kerberos.

  • kurbu5-sys — Raw FFI bindings to the MIT Kerberos libkrb5 and KDB plugin API
  • kurbu5-derive — Proc-macro derives for kurbu5-rs non-KDB plugin interfaces
  • kurbu5-rs — Safe, idiomatic Rust API for writing MIT Kerberos non-KDB plugin modules
  • kurbu5-kdb-sys — KDB plugin API re-export — thin wrapper over kurbu5-sys adding libkdb5 linkage
  • kurbu5-kdb-derive — Proc-macro derive for kurbu5-kdb-rs KDB driver plugins
  • kurbu5-kdb-rs — Safe, idiomatic Rust API for writing MIT Kerberos KDB driver plugins
  • kurbu5-kadm5-sys — KADM5 plugin API bindings — links libkadm5srv_mit and re-exports kurbu5-sys types
  • kurbu5-kadm5-derive — Proc-macro derives for kurbu5-kadm5-rs KADM5_AUTH and KADM5_HOOK plugin interfaces
  • kurbu5-kadm5-rs — Safe, idiomatic Rust API for writing MIT Kerberos KADM5_AUTH and KADM5_HOOK plugin modules

In the localkdc project, we use kurbu5 to build a KDB driver and provide our audit plugin. We also have an experimental re-implementation of the OTP pre-authentication mechanism, both client and KDC sides, that was used to test interoperability with MIT Kerberos versions. The core of the KDB driver is ~520 lines of heavily documented Rust code, mostly handling business logic.

misc fedora bits first week of april 2026

Posted by Kevin Fenzi on 2026-04-04 18:18:28 UTC
Scrye into the crystal ball

A somewhat quiet week in fedora land this time, which is nice, as it allows for catching up on planned work. Of course there was the usual flow of day to day items too.

DeploymentConfig to Deployment

Long ago OpenShift used a custom object called 'DeploymentConfig' to define how to deploy applications. After a while it was deprecated in favor of the normal k8s 'Deployment' object. We have a bunch of apps using the old DeploymentConfig and we wanted to migrate them to the new Deployment.

To be clear, this is just a deprecation right now; DeploymentConfig has not been removed from OpenShift yet, but we wanted to get things moved sooner rather than later.

So, Pedro did all the heavy lifting here and created pull requests for all our apps to move them.

I spent some time this last week merging those and then doing the dance to change the existing app over, which roughly was:

  • merge pull request

  • delete DeploymentConfig

  • run ansible to deploy the Deployment

  • check that everything was redeployed and working correctly.

I managed to find a few apps in staging that were not working or deployed correctly and had to fix those up along the way. We also hit some issues with selectors not getting updated, so applications didn't have correct routes/services.
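The selector issue is easy to hit because a Deployment's spec.selector is immutable and must match the pod template labels, or Services keep selecting pods that no longer exist. A minimal sketch (the app name and labels are hypothetical):

```yaml
# Hypothetical Deployment fragment: spec.selector must agree with
# the pod template labels, and it cannot be changed after creation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp   # a Service selecting app=myapp finds these pods
    spec:
      containers:
        - name: myapp
          image: example.org/myapp:latest
```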

There are a few more of these to do, but they will probably wait until after the freeze is over, as they could be disruptive.

Fedora 44 Final freeze

Speaking of freeze, we started the Fedora 44 Final infrastructure freeze. So far things are looking smooth for composes and such.

There are a few blockers currently, but hopefully we can get them sorted out and get a good release soon.

koji packaging

koji 1.36.0 came out last week and I spent a bit of time this week looking at modernizing the Fedora spec to better match the Python packaging guidelines and also to enable tests.

My somewhat hacky pr is at https://src.fedoraproject.org/rpms/koji/pull-request/29

It's nice to run the tests and have nothing throwing deprecation warnings.

Upcoming blogs and vacation

I have some posts planned which I need to actually write up sometime. One on my solar system, which is mostly going great, and another fun one on open source monitoring of blood glucose levels. Perhaps this weekend.

I'm going to be largely away from the internet the week of April 20th. I'm going on a family vacation to Hawaii. :) I have never been there, so it should be pretty fun. I'll probably check email from time to time, but I will definitely not be around day to day on matrix/slack/irc/whatever.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116347877029785741

Community Update – Week 14 2026

Posted by Fedora Community Blog on 2026-04-03 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora Project.

Week: 31 Mar – 03 Apr 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • [Ansible] Perform mapping for Fedora Science teams and groups [Commit]
  • [Badges] Review: Refactor badge builder to WTForms and Jinja YAML output [Rejected]
  • [Badges] Fix add_tag view to not accept request as parameter [Suggested]
  • [Badges] Replace history_limit with page-based pagination on user profile [Suggestion A] [Suggestion B]
  • [Badges] Return hashed email instead of raw email in JSON responses [Suggestion A] [Suggestion B]
  • Communishift namespace request for draft-share
  • Fix ipsilon links still pointing to pagure.io instead of forge.fedoraproject.org
  • [Monitoring]
    • 200 more items deleted from Nagios (400 to go, 1800 originally)
    • PostgreSQL checking is in place and seems to be solid
    • New monitoring for 5xx errors in HAProxy is finding some interesting things
    • Detection of crashlooping OCP pods is in the works

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

  • Documentation updates, regular operations tickets, business as usual.
  • Final freeze is in effect now, which means Release Candidate compose requests will start coming shortly.
  • Fedora 44 final release is currently scheduled for Tue 2026-04-14.

AI

This is the summary of the work done regarding AI in Fedora.

QE

This team is taking care of quality of Fedora. Maintaining CI, organizing test days
and keeping an eye on overall quality of Fedora releases.

  • General release validation and bug reporting work throughout the whole week, in preparation for F44 Final.

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

EPEL

This team is working on keeping EPEL running and helping package things.

  • ongoing planning with openjdk maintainer regarding koji targets and tags
  • routine package maintenance (python-pydbus, python-edgegrid, llvm19, llvm20)
  • documentation refinement (EPEL minor EOL SOP)

List of new releases of apps maintained by I&R Team

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update – Week 14 2026 appeared first on Fedora Community Blog.

Building and pushing container images on Codeberg CI

Posted by Christof Damian on 2026-04-02 22:00:00 UTC
Codeberg logo
Logo: Codeberg e.V.CC BY-SA 4.0

As part of moving my services to EU-based infrastructure, I’ve been migrating away from GitHub to Codeberg, a non-profit code hosting platform based in Germany. One of the things I needed was a CI pipeline to build a container image and push it to the Codeberg container registry for my wildfires project (which also posts to @catfires@rls.social). It took a few tries to get right, so here’s what works for me.

Why, in a Spring project, I often prefer the native Kafka Java client over Spring Kafka

Posted by Guillaume Kulakowski on 2026-04-02 18:37:36 UTC
Spring, why not. Spring Kafka, much less automatically. When it comes to delivery guarantees, commit strategies, or diagnostics in production, the native Kafka Java client often remains more readable, more honest, and safer.
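The point about commit strategies can be made concrete. With the native client, the delivery guarantee is wired up explicitly in your own code: you disable auto-commit and commit offsets only after processing, so at-least-once semantics are visible rather than hidden behind container defaults. A minimal sketch, assuming the `kafka-clients` library; the broker address, group id, and topic name are made up for illustration:

```java
import java.util.Properties;

public class NativeConsumerConfig {
    // Build the consumer configuration explicitly, so the delivery
    // guarantee is visible in code: auto-commit is off, and offsets
    // are only committed after records are fully processed.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "example-group");           // made-up group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // The key choice: no auto-commit. With the native client this is
        // one line, and the commit happens exactly where you decide.
        props.put("enable.auto.commit", "false");
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProps();
        System.out.println(props.getProperty("enable.auto.commit"));
        // With kafka-clients on the classpath, the processing loop would be:
        //   try (KafkaConsumer<String, String> c = new KafkaConsumer<>(props)) {
        //       c.subscribe(List.of("example-topic"));
        //       while (running) {
        //           var records = c.poll(Duration.ofMillis(500));
        //           process(records);   // handle records first...
        //           c.commitSync();     // ...then commit: at-least-once
        //       }
        //   }
    }
}
```

Nothing here is impossible with Spring Kafka, but with the native client the commit point is a plain method call in your own loop, which is the readability and diagnosability argument the post is making.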

Fedora Code of Conduct Report 2024

Posted by Fedora Community Blog on 2026-04-02 12:00:00 UTC

The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project’s social fabric.

This post covers the reports received in the 2024 calendar year. The 2023 and 2024 annual report posts were published with delays due to changes in membership of the Code of Conduct Committee and rebalancing of existing work. The purpose of publishing the reports now is to provide transparency, insight, and awareness into the health of the community.

How’d it go in 2024

The Fedora community continues to see a mix of hurdles in collaborations within the community, off-platform brand management, and a significant focus on moderator accountability.

2024 included reports about external social media posts made outside of our core community spaces. The Fedora Code of Conduct Committee (CoCC) was no longer just “putting out fires” around individual incidents; we actively set expectations for how contributors represent Fedora on the web and in its communities. To support this mission and bring fresh perspectives to our work, we expanded the committee by welcoming three new members: Jona Azizaj, David Cantrell, and Dorka Volavkova.

Overall, the 2024 data shows a significant decrease in new reports opened compared to previous years. Fewer warnings and moderations were issued than in previous years as well. The data matches the experience of the Code of Conduct Committee: the case load from new reports was finally beginning to decrease in volume. The incidents we received in 2024 were typically less intense and time-consuming than in prior years. This supports the Committee's hypothesis that reports would decrease as time passed since the global pandemic. The 2021 initiative to modernize the Fedora Code of Conduct for sustainability was a successful effort.

Year | Reports Opened | Reports Closed | Warnings Issued | Moderations Issued | Suspensions Issued | Bans Issued
2024 | 11 | 11 | 1 | 0 | 1 | 0
2023 | 17 | 17 | 5 | 3 | 1 | 1
2022 | 21 | 24 | 6 | 3 | 0 | 0
2021 | 23 | 24 | 2 | 1 | 0 | 1
2020 | 20 | 16 | 8 | 4 | 2 | 0

Looking forward to 2025

If you witness or are part of a situation that violates Fedora’s Code of Conduct, please open a private report on the Code of Conduct repo or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.

Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn’t okay, and you don’t want to make a big deal… open that report anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.

Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day-to-day in our community. Keep it up, and keep being awesome Fedora, we <3 you!

About the Committee

The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Matthew Miller; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community-nominated members. In 2024, the committee expanded its membership with three new members: Jona Azizaj, David Cantrell, and Dorka Volavkova.

The post Fedora Code of Conduct Report 2024 appeared first on Fedora Community Blog.

A Few More Thoughts on Sashiko and the Kernel

Posted by Brian (bex) Exelbierd on 2026-04-02 11:50:00 UTC

Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.

I kept thinking about the LWN article and the basic analysis I did yesterday. I kept coming back to one of the central themes of the mailing list conversation: false positives. Sashiko’s false positive rate is debated but, from what I gather, is pretty good by LLM standards. Still, there was a complaint about the number of false positives, focused on the burden that false positives put on contributors and maintainers.

I wanted to understand if the false positive rate, and by extension the burden, was higher from an LLM than from human reviewers. To run that experiment, I needed to define what a false positive actually is. That turns out to be the interesting part.

The Definition Problem

My initial naïve definition of a false positive was any substantial comment that doesn’t yield a code change. If you said something and the code wasn’t changed, then even if it generated future work, it wasn’t applicable to this change now. The obvious hole is a comment that raises a future code change coming in a different patch set. But it felt like this number could be directionally accurate for understanding if we get more false positives or not.

The deeper problem is that “comment that doesn’t change code” isn’t really what false positive means in review. The act of questioning code can lead to greater confidence in the patch being proposed. It can reveal unrelated changes that are required or surface features that should also be considered. Not a negative outcome, but potentially not relevant to the actual patch set under discussion. So I tried reframing from false positives to burden: any comment that doesn’t result in a code change and was actually read by the contributor or maintainer is burdensome. It doesn’t matter whether a human or LLM reviewer raised the comment. If it didn’t result in a change, it was work or thought they didn’t need to do. For example, a back-and-forth conversation to prove the correctness of something that was already correct.

But that definition fails too, and the reason it fails is the real insight.

If two humans are engaged in a review process and there’s a back-and-forth conversation that does not result in a code change, most likely neither human would describe this as unnecessary burden. They would probably describe it as work they had to do or effort they expended, but both humans have likely come out of that conversation changed. Greater understanding of different parts of the system. Better ability to express oneself so the questions aren’t raised next time. Increased confidence in the correctness of a solution. There is a change assumed to have happened to one or both of the people.

A review conversation that doesn’t change code but changes the people having it isn’t a false positive. It only looks like one when the reviewer is a machine that won’t be changed.

For what it’s worth, I did look at existing studies of human review false positive rates. In my brief and non-exhaustive look, I’ve come to believe they aren’t useful here, not only because the question is moot when both parties come out changed, but because many are flawed or non-comparable. Some are in domains where reviewers are generalists talking to a specialist, unlikely in the kernel. Others misclassify trivial exchanges like “LGTM” or “thanks” as false positives. And none have been conducted over the kernel.

When the Reviewer Is a Machine

When a finding or probing question is raised by an LLM agent, the assumption that both parties come out changed breaks down.

Probing questions may not even be welcome from an LLM agent. One could never really be sure whether this was a “humans normally say this kind of thing in this context” situation versus an “I see something that maybe is wrong” situation.

But the more important part is this: if a human has to read a false positive, they have to put in their side of the work to validate, verify, explore, or test the question, and ultimately determine that it’s not an issue. They are unlikely to be changed in the absence of an exchange. And we know for a fact that the machine is not going to be changed.

In theory, we could wire up a training loop for Sashiko to take these back-and-forth exchanges and learn from them to reduce the incidence of false positives. I suspect it would have very little impact overall. First, the analysis showed that there’s almost no situation where the same bug is being surfaced over and over again. The machine is unlikely to run into the same finding and then have learned that finding isn’t valid. Second, the machine is not arguing from a position of true reasoning, therefore it is never clear if it backed down because it decided to be an agreeable sycophant or because the additional commentary made the correctness argument airtight.

The Social Problem

At its true core, I think the conversation around false positives, based on what I read in the article, is likely a social problem, like most truly intractable problems in computer science.

If an LLM agent reviews my contribution and the maintainer insists that I address the review, I am not only forced to do what turns out, in the case of a false positive, to be unnecessary work, but also forced to performatively defend myself against a machine. Or worse, to argue with the machine performatively. The combination of doing unnecessary work that generates no value, and then having to do still more work to performatively demonstrate that the work was indeed valueless, is a line too far for most of our psyches.

A Possible Path

Setting aside the separate question of whether LLM ability will continue improving and therefore the number of false positives will go down, the core question of how to deal with false positives needs to be addressed at a social level.

In a space like the kernel, I would argue it may be appropriate to allow those whose code has been reviewed to react to LLM-generated findings with something along the lines of “smells like bullshit” and not have to go through the performative exercise of proving it’s bullshit, because we trust their instinct.

That said, it is probably worth creating some kind of long-term profile or scoreboard, both of those being the wrong words, for a contributor, so that they can over time understand if their intuition has blind spots. If an LLM is consistently raising a certain kind of feedback that they are dismissing, but we later discover a bug and have to fix it, or if human reviewers come back and their synthesis of their own experience plus what the LLM provided leads them to believe there’s a real, demonstrable problem, that’s a learning opportunity for the contributor.

The challenge is that there are no systems I’m aware of in modern use where these kinds of profiles are ever not used abusively against those profiled. Which is yet another social problem.