
Fedora People

Matrix server maintenance

Posted by Fedora Infrastructure Status on 2026-04-06 11:15:00 UTC

Element Matrix Services is performing scheduled maintenance on our matrix server (fedora.im).

Affected Services:

  • chat.fedoraproject.org
  • fedora.im
  • matrix services

kurbu5: MIT Kerberos plugins in Rust

Posted by Alexander Bokovoy on 2026-04-04 19:10:00 UTC

For a couple of years, Andreas Schneider and I have been working on a project we call the ‘local authentication hub’: an effort to use the Kerberos protocol to track authentication and authorization context for applications, regardless of whether the system they run on is enrolled into a larger organizational domain or is standalone. We aim to reuse the code and the experience we have gained while developing Samba and FreeIPA over the past twenty years.

Local authentication hub

The local authentication hub relies on a Kerberos KDC available on demand on each system. We achieved this by allowing MIT Kerberos to communicate over UNIX domain sockets. On Linux systems, systemd allows processes to be started on demand when someone connects to a UNIX domain socket, and MIT Kerberos 1.22 has support for this mode.
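To make the on-demand part concrete, here is a minimal sketch of a systemd socket unit for such a KDC. The unit name and socket path are made up for illustration; they are not what MIT Kerberos 1.22 or any distribution actually ships:

```ini
# localkdc.socket — hypothetical unit name and socket path
[Unit]
Description=Local KDC listening on a UNIX domain socket

[Socket]
# systemd starts the matching service the first time a client connects
ListenStream=/run/krb5kdc/kdc.sock

[Install]
WantedBy=sockets.target
```

The matching service unit would then run the KDC itself, which can exit when idle and be restarted transparently on the next connection.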

A KDC accessible over a UNIX domain socket is not very useful in itself: it is only available within the context of a single machine (or a single container, or pod, if UNIX domain sockets are shared across multiple containers). Otherwise, it is a fully featured KDC with its own quirks. And we can start looking at what could be improved based on the enhanced context locality we have achieved. For example, a KDB driver can see host-specific network interfaces and thus be able to react to requests such as host/<ip.ad.dr.ess>@LOCALKDC-REALM dynamically—something that a centrally-managed KDC would only do through statically registered service principal names (SPNs), which are a pain to update as machines move across networks.

Adding support for dynamic features means new code needs to be written. MIT Kerberos is written in C, so our choices are either to continue writing in C or to integrate with whatever new language we choose. Initially, we kept the local KDC database driver written in C and decided to build the infrastructure we need in Rust. The end goal is to have most bits written in Rust.

The local KDC database isn’t supposed to handle millions of principal entries, but even for millions of them, MIT Kerberos has a pretty good default database driver built on LMDB: klmdb. We wanted to get out of the data store business and instead focus on higher-level logic. Thus, we made the same change I made in Samba around 2003 for virtual file system modules: we introduced support for stackable KDB drivers. This is also a part of the MIT Kerberos 1.22 release: a KDB driver implementation can ask the KDC to load a different KDB driver and choose to delegate some requests to it. The local KDC driver is using klmdb for that purpose.
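The stacking idea can be illustrated with a small, purely conceptual Rust sketch. The trait and type names below are invented for illustration and are not the actual MIT Kerberos KDB or kurbu5 API: a stacked driver handles the lookups it cares about and delegates everything else to the inner driver it loaded.

```rust
// Conceptual sketch of KDB driver stacking; NOT the real KDB/kurbu5 API.
trait KdbDriver {
    fn get_principal(&self, name: &str) -> Option<String>;
}

/// A stand-in for the underlying klmdb-backed driver.
struct LmdbDriver;

impl KdbDriver for LmdbDriver {
    fn get_principal(&self, name: &str) -> Option<String> {
        // Pretend this is a database lookup of a stored key.
        if name.starts_with("userdb:") {
            Some(format!("key-material-for-{name}"))
        } else {
            None
        }
    }
}

/// The "local KDC" driver stacked on top: it answers dynamic
/// host/<address> principals itself and delegates the rest.
struct LocalKdcDriver<D: KdbDriver> {
    inner: D,
}

impl<D: KdbDriver> KdbDriver for LocalKdcDriver<D> {
    fn get_principal(&self, name: &str) -> Option<String> {
        if name.starts_with("host/") {
            // Synthesize an entry from local context (e.g. interfaces).
            Some(format!("dynamic-entry-for-{name}"))
        } else {
            self.inner.get_principal(name)
        }
    }
}

fn main() {
    let driver = LocalKdcDriver { inner: LmdbDriver };
    assert!(driver.get_principal("host/192.0.2.1").is_some());
    assert!(driver.get_principal("userdb:alice").is_some());
    assert!(driver.get_principal("unknown").is_none());
    println!("stacked driver ok");
}
```

The real mechanism differs in the details (the KDB driver asks the KDC to load another driver for it), but the delegation shape is the same.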

With the database handled for us by klmdb, we focused on the local KDC-specific logic. We wanted to dynamically discover user principals from the operating system so that administrators do not need to maintain separate databases for them. systemd provides a userdb API to query such information over a varlink interface (also available over a UNIX domain socket) in a structured way, using JSON format. Thus, the Kirmes project was born. Kirmes is a Rust data library backed by the userdb API. It handles varlink communication through the wonderful Zlink library and exposes both asynchronous and synchronous access to user and group information.
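To give a feel for the wire format (a sketch, not Kirmes or Zlink code): a varlink call is a JSON object followed by a NUL byte, sent over the UNIX domain socket, so a userdb lookup request can be built as plainly as this. Real code would of course use a varlink library rather than formatting JSON by hand:

```rust
// Sketch of a varlink userdb request; Kirmes uses the Zlink library
// instead of hand-formatting JSON like this.
fn userdb_request(user: &str, service: &str) -> Vec<u8> {
    // io.systemd.UserDatabase.GetUserRecord is the userdb lookup method.
    let json = format!(
        "{{\"method\":\"io.systemd.UserDatabase.GetUserRecord\",\
         \"parameters\":{{\"userName\":\"{user}\",\"service\":\"{service}\"}}}}"
    );
    let mut msg = json.into_bytes();
    msg.push(0); // varlink messages are NUL-terminated
    msg
}

fn main() {
    let msg = userdb_request("alice", "io.systemd.Multiplexer");
    assert_eq!(msg.last(), Some(&0u8));
    assert!(std::str::from_utf8(&msg[..msg.len() - 1])
        .unwrap()
        .contains("GetUserRecord"));
    println!("request: {} bytes", msg.len());
}
```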

The local KDC database driver prototype used the Kirmes C API. We demonstrated it at FOSDEM 2025: a user lookup is done over varlink, and if a user is present on the system, their Kerberos key is then looked up in klmdb using a specially-formatted userdb:<username> principal. You still need to handle those keys somehow, but there is a way to avoid that: use RADIUS.

Pre-authentication

A bit of historical background: in 2012, Red Hat collaborated with MIT to introduce a KDC-side implementation of RFC 6560 (the OTP pre-authentication mechanism, at that point implemented in a proprietary solution by the RSA corporation). This mechanism allowed the KDC to get a hint out of a KDB driver and ask a RADIUS server to authenticate the credentials provided by the Kerberos client. Unlike traditional Kerberos symmetric keys, in this case the client sends a plain-text credential over the Kerberos protocol, and this credential can be forwarded to the RADIUS server. The plain-text nature of the RADIUS credential requires the use of a secure communication channel, and a good part of RFC 6560 relies on Flexible Authentication Secure Tunneling (FAST, RFC 6113), where a pre-existing Kerberos ticket is used to encrypt the content of that tunnel.

Since ~2013, FreeIPA has used this mechanism to provide multi-factor authentication mechanisms: HOTP/TOTP tokens, RADIUS proxying to remote servers, the OAuth2 device authorization grant flow, and FIDO2 tokens. The list of mechanisms can be extended, as long as the model fits into the somewhat constrained Kerberos exchange flow. FreeIPA handles all communication from the KDC side via a local UNIX domain socket-activated daemon, ipa-otpd, which performs a user principal lookup and then decides on the details of how that user will be authenticated.

For the local KDC case, we used a similar approach but wrote a simplified version, localkdc-pam-auth, which uses PAM to authenticate user credentials. It works well and allows for a drop-in replacement: once the local KDC is set up, users defined on the system will automatically be able to receive Kerberos tickets, with no need to change any passwords or migrate their credentials into the Kerberos KDC. All we need now is the business logic to guide the KDC to use the OTP pre-authentication mechanism so that our RADIUS ‘proxy’ (localkdc-pam-auth) gets activated. This logic is implemented and will be available in the first localkdc release soon.

API bindings

But back to the KDC side. As mentioned above, our goal was to write the local KDC database driver in a modern, safe language. Interfacing Rust with the MIT Kerberos KDC means building an interface that allows aligning code on both sides. This is what this blog is actually about (sorry for the long prelude…): how to make an MIT Kerberos KDB driver in Rust.

Today I published Kurbu5, a project that aims to provide these API bindings to Rust. The name is a transliteration of “krb5” into Mesopotamian cuneiform phonology: Kurbu-ḫamšat-qaqqadī—”The Blessed Five-Headed One”.

Creating API bindings is tedious work: there are many interfaces, each representing multiple functions and structures. MIT Kerberos has 12 interfaces which altogether expose roughly 117 methods that plugin authors implement, backed by around 70 supporting types (data structures passed into and out of those methods). It all sounds like a Tolkien tale: nine interfaces for core Kerberos functionality (checking password quality, mapping hostnames to Kerberos realms, mapping Kerberos principals to local accounts, selecting which credential cache to use, handling pre-authentication on both the client and server side, enforcing KDC policy, authorizing PKINIT certificates, and auditing events on the KDC side), the database backend interface, and two administrative interfaces. This is something that could be automated with agentic workflows—which I did to allow a parallel porting effort. The resulting agent instructions are useful artifacts in themselves: they show how to work when porting MIT Kerberos C code to Rust.

The result is split over several Rust crates to allow targeted reuse. The bulk of the code lives in three crates. The core Kerberos plugin crate (kurbu5-rs) is the largest at around 12,600 lines. The database backend crate (kurbu5-kdb-rs) follows at 5,600 lines, and the administration crate (kurbu5-kadm5-rs) at 3,100 lines. The remaining crates—the proc-macro derives and the raw FFI sys crates—are much smaller, with the sys crates being almost trivially thin (the KDB and kadm5 ones are under 40 lines each, since they mostly just re-export bindings from the main sys crate).

All crates are available on crates.io and share the same MIT license as the original MIT Kerberos.

  • kurbu5-sys — Raw FFI bindings to the MIT Kerberos libkrb5 and KDB plugin API
  • kurbu5-derive — Proc-macro derives for kurbu5-rs non-KDB plugin interfaces
  • kurbu5-rs — Safe, idiomatic Rust API for writing MIT Kerberos non-KDB plugin modules
  • kurbu5-kdb-sys — KDB plugin API re-export — thin wrapper over kurbu5-sys adding libkdb5 linkage
  • kurbu5-kdb-derive — Proc-macro derive for kurbu5-kdb-rs KDB driver plugins
  • kurbu5-kdb-rs — Safe, idiomatic Rust API for writing MIT Kerberos KDB driver plugins
  • kurbu5-kadm5-sys — KADM5 plugin API bindings — links libkadm5srv_mit and re-exports kurbu5-sys types
  • kurbu5-kadm5-derive — Proc-macro derives for kurbu5-kadm5-rs KADM5_AUTH and KADM5_HOOK plugin interfaces
  • kurbu5-kadm5-rs — Safe, idiomatic Rust API for writing MIT Kerberos KADM5_AUTH and KADM5_HOOK plugin modules
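As a sketch of how a driver project might pull these in (the version numbers below are illustrative placeholders, not necessarily what is published on crates.io):

```toml
# Cargo.toml fragment — versions are illustrative placeholders
[dependencies]
kurbu5-kdb-rs = "0.1"
kurbu5-kdb-derive = "0.1"

[lib]
# A KDB driver is loaded by the KDC as a shared object
crate-type = ["cdylib"]
```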

In the localkdc project, we use kurbu5 to build a KDB driver and provide our audit plugin. We also have an experimental re-implementation of the OTP pre-authentication mechanism, both client and KDC sides, that was used to test interoperability with MIT Kerberos versions. The core of the KDB driver is ~520 lines of heavily documented Rust code, mostly handling business logic.

misc fedora bits first week of april 2026

Posted by Kevin Fenzi on 2026-04-04 18:18:28 UTC
Scrye into the crystal ball

A somewhat quiet week in fedora land this time, which is nice, as it allows for catching up on planned work. Of course there was the usual flow of day to day items too.

DeploymentConfig to Deployment

Long ago OpenShift used a custom object called 'DeploymentConfig' to define how to deploy applications. After a while it was deprecated in favor of the normal k8s 'Deployment' object. We have a bunch of apps using the old DeploymentConfig and we wanted to migrate them to the new Deployment.

To be clear, this is just a deprecation right now; DeploymentConfig hasn't been removed from OpenShift yet, but we wanted to get things moved over sooner rather than later.

So, Pedro did all the heavy lifting here and created pull requests for all our apps to move them.

I spent some time this last week merging those and then doing the dance to change the existing app over, which roughly was:

  • merge pull request

  • delete DeploymentConfig

  • run ansible to deploy the Deployment

  • check that everything was redeployed and working correctly.

I managed to find a few apps in staging that were not working or deployed correctly and had to fix those up along the way. We also hit some issues with selectors not getting updated, so applications didn't have correct routes/services.
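The selector pitfall looks roughly like this (all names below are made up for illustration): a Deployment's spec.selector must match the pod template's labels, and the Service's selector must match them too, or the Service ends up with no endpoints and routes break.

```yaml
# Hypothetical example: labels and selectors must all agree, or the
# Service selects no pods after the DeploymentConfig -> Deployment move.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp   # must agree with matchLabels above
    spec:
      containers:
        - name: myapp
          image: quay.io/example/myapp:latest
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp       # a stale selector here breaks routing after migration
  ports:
    - port: 8080
```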

There are a few more of these to do, but they will probably wait until after the freeze is over, as they could be disruptive.

Fedora 44 Final freeze

Speaking of freeze, we started the Fedora 44 Final infrastructure freeze. So far things are looking smooth for composes and such.

There are a few blockers currently, but hopefully we can get them sorted out and get a good release soon.

koji packaging

koji 1.36.0 came out last week and I spent a bit of time this week looking at modernizing the fedora spec to better match the python packaging guidelines and also to enable tests.

My somewhat hacky pr is at https://src.fedoraproject.org/rpms/koji/pull-request/29

It's nice to run the tests and have things not throw deprecation warnings.

Upcoming blogs and vacation

I have some posts planned which I need to actually write up sometime. One on my solar system, which is mostly going great, and another fun one on open source monitoring of blood glucose levels. Perhaps this weekend.

I'm going to be largely away from the internet the week of April 20th. I'm going on a family vacation to Hawaii. :) I have never been there, so it should be pretty fun. I'll probably check emails from time to time, but I will definitely not be around day to day on matrix/slack/irc/whatever.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116347877029785741

Browser wars

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.

I was invited to give a popular lecture at the University departments open day, which is a part of the festival. This is the second time in a row that I have been invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology, caused by the fall of the economy during the 2008–2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.

The follow-up

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.

Open-source magic all around the world

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).

Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012, the University of Rijeka became an NVIDIA GPU Education Center (back then it was called a CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphics processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers have allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, find paths in graphs, and perform other mathematical operations.

What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by the American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of the topic, but instead present the author's view on the state of the art of a particular field.

The first of the two articles advocates for open source and open data. It describes the Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second of the two articles questions the open-source software development practice, advocating the usage and development of proprietary software. I will dissect and counter some of the key points from the second article below.

On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended the 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced an acronyms page, among other things. During a coffee break, there was a discussion on the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop that somehow remained untold, and I wanted to tell it at some point. One of the attendees, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, the reported bugs get fixed quicker, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.

AMD and the open-source community are writing history

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards having fully open-source drivers on Linux. AMD did not walk alone; they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told in the #radeon channel on Freenode that this is not the case, and I found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement hardly comes as a surprise: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, a public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.

I am still not buying the new-open-source-friendly-Microsoft narrative

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Patrick Bellot (@pbellot) | Unsplash


This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.

Even though open sourcing a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are people in parts of the company who believe that free and open source is the way to go, but it still looks like a change just on the periphery.

All the projects they have open-sourced so far are not the core of their business. Their latest version of Windows is no more friendly to alternative operating systems than any version of Windows before it, and one could argue it is even less friendly due to more Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.

Free to know: Open access and open source

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash


!!! info Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.

Q&A with Vedran Miletić

In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.

The movements that promote and cultivate this vibrancy are probably most frequently associated with terms "Open access" and "open source". In order to learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science -- researcher, developer and teacher, currently working in computational chemistry, and a free and open source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation and education.

The academic and the free software community ideals

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash


Today I vaguely remembered there was one occasion in 2006 or 2007 when some guy from academia, doing something with Java and Unicode, posted on some mailing list related to free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, and he had filed a patent on the algorithm.

Celebrating Graphics and Compute Freedom Day

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Elena Mozhvilo (@miracleday) | Unsplash


Hobbyists, activists, geeks, designers, engineers, and others have always tinkered with technologies for their own purposes (in early personal computing, for example). And social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies, and freely sharing the know-how through the Internet and, more recently, through social media. Open-source software, and more recently hardware, is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash


Inf2 is a web server at the University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.

HTTPS had been enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt coming into public beta, we decided to join the movement to HTTPS.

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Patrick Tomasso (@impatrickt) | Unsplash


Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.

While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.

Fly away, little bird

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash


The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.

Mirroring free and open-source software matters

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Tuva Mathilde Løland (@tuvaloland) | Unsplash


Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website that's used to speed up access for the users residing in the area geographically close to it and reduce the load on the original website. Content distribution networks (CDNs), which are a newer concept and perhaps more familiar to younger readers, serve the same purpose, but do it in a way that's transparent to the user; when using a mirror, the user will see explicitly which mirror is being used because the domain will be different from the original website, while, in case of CDNs, the domain will remain the same, and the DNS resolution (which is invisible to the user) will select a different server.

Free and open-source software was distributed via (FTP) mirrors, usually residing in the universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon after mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of the software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today.

Markdown vs reStructuredText for teaching materials

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Eugenio Mazzone (@eugi1492) | Unsplash


Back in summer 2017, I wrote an article explaining why we used Sphinx and reStructuredText to produce teaching materials and not a wiki. In addition to recommending Sphinx as the solution to use, it was general praise for generating static HTML files from Markdown or reStructuredText.

This summer I converted the teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted, so I ended up cooking my own Python script that converted the specific dialect of reStructuredText used for writing the contents of the group website and fixed a myriad of inconsistencies in the writing style that had accumulated over the years.

Don't use RAR

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Tim Mossholder (@ctimmossholder) | Unsplash


I sometimes joke with my TA Milan Petrović that his usage of RAR does not imply that he will be driving a rari. After all, he is not Devito rapping^Wsinging Uh 😤. Jokes aside, if you search for "should I use RAR" or a similar phrase on your favorite search engine, you'll see articles like 2007 Don't Use ZIP, Use RAR and 2011 Why RAR Is Better Than ZIP & The Best RAR Software Available.

Should I do a Ph.D.?

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Santeri Liukkonen (@iamsanteri) | Unsplash


Tough question, and the one that has been asked and answered over and over. The simplest answer is, of course, it depends on many factors.

As I started blogging at the end of my journey as a doctoral student, the topic of how I selected the field and ultimately decided to enroll in the postgraduate studies never really came up. In the following paragraphs, I will give a personal perspective on my Ph.D. endeavor. Just like other perspectives from doctors of not that kind, it is specific to the person in the situation, but parts of it might apply more broadly.

Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Jahanzeb Ahsan (@jahan_photobox) | Unsplash


This month we had Alumni Meeting 2023 at the Heidelberg Institute for Theoretical Studies, or HITS for short. I was very glad to attend this whole-day event and reconnect with my former colleagues as well as researchers currently working in the area of computational biochemistry at HITS. After all, this is the place and the institution where I worked for more than half of my time as a postdoc, where I started regularly contributing code to GROMACS molecular dynamics simulator, and published some of my best papers.

My perspective after two years as a research and teaching assistant at FIDIT

Posted by Vedran Miletić on 2026-04-04 14:43:39 UTC

Photo source: Darran Shen (@darranshen) | Unsplash


My employment as a research and teaching assistant at the Faculty of Informatics and Digital Technologies (FIDIT for short), University of Rijeka (UniRi), ended last month with the expiration of my time-limited contract. This moment marked almost two full years spent at this institution, and I think this is a good time to take a look back at everything that happened during that time. Inspired by the recent posts by the PI of my group, I decided to write my perspective on the time that I hope is just the beginning of my academic career.

Community Update – Week 14 2026

Posted by Fedora Community Blog on 2026-04-03 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora Project.

Week: 31 Mar – 03 Apr 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • [Ansible] Perform mapping for Fedora Science teams and groups [Commit]
  • [Badges] Review: Refactor badge builder to WTForms and Jinja YAML output [Rejected]
  • [Badges] Fix add_tag view to not accept request as parameter [Suggested]
  • [Badges] Replace history_limit with page-based pagination on user profile [Suggestion A] [Suggestion B]
  • [Badges] Return hashed email instead of raw email in JSON responses [Suggestion A] [Suggestion B]
  • Communishift namespace request for draft-share
  • Fix ipsilon links still pointing to pagure.io instead of forge.fedoraproject.org
  • [Monitoring]
    • 200 more items deleted from Nagios (400 to go, 1800 originally)
    • PostgreSQL checks are in place and seem to be solid
    • New monitoring for 5xx errors in HAProxy is finding some interesting things
    • Detection of crashlooping OCP pods is in the works

CentOS Infra including CentOS CI

This team is taking care of the day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of the day-to-day business regarding Fedora releases.
It's responsible for releases, the retirement process of packages, and package builds.
Ticket tracker

  • Documentation updates, regular operations tickets, business as usual.
  • Final freeze is in effect now, which means Release Candidate compose requests will start coming shortly.
  • Fedora 44 final release is currently scheduled for Tue 2026-04-14.

AI

This is the summary of the work done regarding AI in Fedora.

QE

This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

  • General release validation and bug reporting work throughout the whole week, in preparation for F44 Final.

Forgejo

This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.

EPEL

This team is working on keeping EPEL running and helping package things.

  • ongoing planning with openjdk maintainer regarding koji targets and tags
  • routine package maintenance (python-pydbus, python-edgegrid, llvm19, llvm20)
  • documentation refinement (EPEL minor EOL SOP)

List of new releases of apps maintained by I&R Team

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update – Week 14 2026 appeared first on Fedora Community Blog.

Why, in a Spring project, do I often prefer the native Java Kafka client over Spring Kafka?

Posted by Guillaume Kulakowski on 2026-04-02 18:37:36 UTC
Spring, why not. Spring Kafka, much less automatically. When it comes to delivery guarantees, commit strategies, or diagnostics in production, the native Java Kafka client often remains more readable, more honest, and safer.

Fedora Code of Conduct Report 2024

Posted by Fedora Community Blog on 2026-04-02 12:00:00 UTC

The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project’s social fabric.

This post covers reports received during the 2024 calendar year. The 2023 and 2024 annual report posts are published with delays due to changes in membership of the Code of Conduct Committee and rebalancing of existing work. The purpose of publishing the reports now is to provide transparency, insight, and awareness into the health of the community.

How’d it go in 2024

The Fedora community continues to see a mix of hurdles in collaborations within the community, off-platform brand management, and a significant focus on moderator accountability.

2024 included reports about external social media posts made outside of our core community spaces. The Fedora Code of Conduct Committee (CoCC) was no longer just “putting out fires” over individual incidents; we actively set expectations for how contributors represent Fedora on the web and in its communities. To support this mission and bring fresh perspectives to our work, we expanded our committee by welcoming three new members: Jona Azizaj, David Cantrell, and Dorka Volavkova.

Overall, the 2024 data shows a significant decrease in new reports opened compared to previous years. Additionally, fewer warnings and moderations were issued than in previous years. The data matches the experience of the Code of Conduct Committee, in that the case load from new reports was finally beginning to decrease in volume. The incidents we received in 2024 were typically less intense and time-consuming than in prior years. This supports a hypothesis made by the Committee that reports would decrease as time passed from the global pandemic. The 2021 initiative of modernizing the Fedora Code of Conduct for sustainability was a successful effort.

Year | Reports Opened | Reports Closed | Warnings Issued | Moderations Issued | Suspensions Issued | Bans Issued
2024 | 11 | 11 | 1 | 0 | 1 | 0
2023 | 17 | 17 | 5 | 3 | 1 | 1
2022 | 21 | 24 | 6 | 3 | 0 | 0
2021 | 23 | 24 | 2 | 1 | 0 | 1
2020 | 20 | 16 | 8 | 4 | 2 | 0

Looking forward to 2025

If you witness or are part of a situation that violates Fedora’s Code of Conduct, please open a private report on the Code of Conduct repo or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.

Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn’t okay, and you don’t want to make a big deal… open that report anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.

Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day-to-day in our community. Keep it up, and keep being awesome Fedora, we <3 you!

About the Committee

The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Matthew Miller; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community-nominated members. In 2024, the committee expanded its membership by adding three new members: Jona Azizaj, David Cantrell, and Dorka Volavkova.

The post Fedora Code of Conduct Report 2024 appeared first on Fedora Community Blog.

A Few More Thoughts on Sashiko and the Kernel

Posted by Brian (bex) Exelbierd on 2026-04-02 11:50:00 UTC

Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.

I kept thinking about the LWN article $ and the basic analysis I did yesterday. I kept coming back to one of the central themes of the mailing-list conversation: false positives. Sashiko’s false positive rate is debated but, I’m gathering, is pretty good by LLM standards. Still, there were complaints about the number of false positives, focused on the burden they put on contributors and maintainers.

I wanted to understand if the false positive rate, and by extension the burden, was higher from an LLM than from human reviewers. To run that experiment, I needed to define what a false positive actually is. That turns out to be the interesting part.

The Definition Problem

My initial naïve definition of a false positive was any substantial comment that doesn’t yield a code change. If you said something and the code wasn’t changed, then even if it generated future work, it wasn’t applicable to this change now. The obvious hole is a comment that raises a future code change coming in a different patch set. But it felt like this number could be directionally accurate for understanding if we get more false positives or not.

The deeper problem is that “comment that doesn’t change code” isn’t really what false positive means in review. The act of questioning code can lead to greater confidence in the patch being proposed. It can reveal unrelated changes that are required or surface features that should also be considered. Not a negative outcome, but potentially not relevant to the actual patch set under discussion. So I tried reframing from false positives to burden: any comment that doesn’t result in a code change and was actually read by the contributor or maintainer is burdensome. It doesn’t matter whether a human or LLM reviewer raised the comment. If it didn’t result in a change, it was work or thought they didn’t need to do. For example, a back-and-forth conversation to prove the correctness of something that was already correct.

But that definition fails too, and the reason it fails is the real insight.

If two humans are engaged in a review process and there’s a back-and-forth conversation that does not result in a code change, most likely neither human would describe this as unnecessary burden. They would probably describe it as work they had to do or effort they expended, but both humans have likely come out of that conversation changed. Greater understanding of different parts of the system. Better ability to express oneself so the questions aren’t raised next time. Increased confidence in the correctness of a solution. There is a change assumed to have happened to one or both of the people.

A review conversation that doesn’t change code but changes the people having it isn’t a false positive. It only looks like one when the reviewer is a machine that won’t be changed.

For what it’s worth, I did look at existing studies of human review false positive rates. In my brief and non-exhaustive look, I’ve come to believe they aren’t useful here, not only because the question is moot when both parties come out changed, but because many are flawed or non-comparable. Some are in domains where reviewers are generalists talking to a specialist, unlikely in the kernel. Others misclassify trivial exchanges like “LGTM” or “thanks” as false positives. And none have been conducted over the kernel.

When the Reviewer Is a Machine

When a finding or probing question is raised by an LLM agent, the assumption that both parties come out changed breaks down.

Probing questions may not even be welcome from an LLM agent. One could never really be sure whether this was a “humans normally say this kind of thing in this context” situation versus an “I see something that maybe is wrong” situation.

But the more important part is this: if a human has to read a false positive, they have to put in their side of the work to validate, verify, explore, or test the question, and ultimately determine that it’s not an issue. They are unlikely to be changed in the absence of an exchange. And we know for a fact that the machine is not going to be changed.

In theory, we could wire up a training loop for Sashiko to take these back-and-forth exchanges and learn from them to reduce the incidence of false positives. I suspect it would have very little impact overall. First, the analysis showed that there’s almost no situation where the same bug is being surfaced over and over again. The machine is unlikely to run into the same finding and then have learned that finding isn’t valid. Second, the machine is not arguing from a position of true reasoning, therefore it is never clear if it backed down because it decided to be an agreeable sycophant or because the additional commentary made the correctness argument airtight.

The Social Problem

At its true core, I think the conversation around false positives, based on what I read in the article, is likely a social problem, like most truly intractable problems in computer science.

If an LLM agent reviews my contribution and the maintainer insists that I address the review, I am not only forced to do what turns out, in the case of a false positive, to be unnecessary work, but forced to performatively defend myself against a machine. Or worse, argue with the machine performatively. Unnecessary work that generates no value, combined with having to perform that work publicly while knowing it generates no value, and then doing still more work to demonstrate that the work generated no value, is a line too far for most of our psyches.

A Possible Path

Setting aside the separate question of whether LLM ability will continue improving and therefore the number of false positives will go down, the core question of how to deal with false positives needs to be addressed at a social level.

In a space like the kernel, I would argue it may be appropriate to allow those whose code has been reviewed to react to LLM-generated findings with something along the lines of “smells like bullshit” and not have to go through the performative exercise of proving it’s bullshit, because we trust their instinct.

That said, it is probably worth creating some kind of long-term profile or scoreboard, both of those being the wrong words, for a contributor, so that they can over time understand if their intuition has blind spots. If an LLM is consistently raising a certain kind of feedback that they are dismissing, but we later discover a bug and have to fix it, or if human reviewers come back and their synthesis of their own experience plus what the LLM provided leads them to believe there’s a real, demonstrable problem, that’s a learning opportunity for the contributor.

The challenge is that there are no systems I’m aware of in modern use where these kinds of profiles are ever not used abusively against those profiled. Which is yet another social problem.

What’s Actually in a Sashiko Review?

Posted by Brian (bex) Exelbierd on 2026-04-01 21:20:00 UTC

Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions. And, yes, I’m aware of the date. The data is real - and in 40 minutes it won’t be April 1 anymore, at least where I live.

Daroc Alden’s LWN article on Sashiko $ captures a real tension in the Linux kernel community. Andrew Morton wants to make Sashiko - an LLM-based patch reviewer - a mandatory part of the memory management workflow. Lorenzo Stoakes and others say it’s too noisy and adds burden to already-overworked maintainers. Morton points to a ~60% hit rate on actual bugs. Stoakes points out that’s per-review, not per-comment, so the individual false positive rate is worse.

Reading the thread, I kept wondering about two specific mechanisms that could be driving maintainer frustration beyond the false positive question.

Two Hypotheses

Hypothesis 1: Reviewers are getting told about bugs they didn’t create. Sashiko’s review protocol explicitly instructs the LLM to read surrounding code, not just the diff. That’s good review practice - but it means the tool might flag pre-existing bugs in code the patch author merely touched, putting those problems in their inbox.

Hypothesis 2: The same pre-existing bugs surface repeatedly. If a known issue in a subsystem doesn’t get fixed between review runs, every patch touching nearby code could trigger the same finding. That would create a steady drip of duplicate noise across the mailing list.

I pulled data from Sashiko’s public API and tested both.

Method

I fetched all 406 patchsets from the linux-mm mailing list and a 500-patchset sample from LKML as of April 1, 2026. Of the 252 linux-mm reviews with findings, 204 had full review text available for analysis.

I had an LLM write Python scripts to classify the 466 extracted findings into three categories using deterministic regex pattern matching - roughly 50 weighted patterns that look for specific language in the review text. The classification code runs the same way every time on the same input. An LLM wrote it, but the scanning itself involves no inferencing.

The three categories:

  1. Patch-specific - about the actual changed lines. Patterns match phrases like “this patch adds,” “the new code,” “missing check.”
  2. Interaction - about how new code interacts with existing code. Patterns match references to callers, callees, lock state, concurrent access.
  3. Pre-existing - about bugs in surrounding code not introduced by the patch. Patterns match “not introduced by this patch,” “pre-existing,” “noticed while reviewing.”

When a finding matched multiple categories, the most specific won: pre-existing > interaction > patch-specific. About 7% of findings didn’t match any pattern and were excluded from further analysis.
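The priority-based matching described above can be sketched roughly as follows. This is an illustrative toy, not the analysis repository's actual code: the patterns shown are the handful of example phrases quoted in the category list, standing in for the roughly 50 weighted patterns the real scripts use.

```python
# Rough sketch of specificity-ordered regex classification. The rule lists
# below are illustrative examples only, not the real ~50 weighted patterns.
import re

# Most specific category wins: pre-existing > interaction > patch-specific.
RULES = [
    ("pre-existing", [r"not introduced by this patch", r"pre-existing",
                      r"noticed while reviewing"]),
    ("interaction", [r"\bcaller", r"\bcallee", r"lock state",
                     r"concurrent access"]),
    ("patch-specific", [r"this patch adds", r"the new code",
                        r"missing check"]),
]

def classify(finding: str) -> str:
    """Return the most specific category whose patterns match the finding."""
    text = finding.lower()
    for category, patterns in RULES:
        if any(re.search(p, text) for p in patterns):
            return category
    return "unclassified"
```

Because the rules are checked from most to least specific, a finding that mentions both a caller and a pre-existing issue lands in the pre-existing bucket, matching the tie-breaking rule in the text.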

For duplication, the scripts computed pairwise text similarity across reviews within the same subsystem. Again - deterministic comparison, LLM-authored code.
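A minimal pairwise-similarity pass of the kind described above can be written with the standard library alone. This is a hedged sketch: the actual scripts in the repository may tokenize differently, and the 0.8 threshold here is an assumed illustrative cutoff, not the one used in the analysis.

```python
# Illustrative pairwise text-similarity scan over reviews in one subsystem.
# Threshold and similarity metric are assumptions for this sketch.
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(reviews: list[str], threshold: float = 0.8) -> list[tuple[int, int]]:
    """Return index pairs of reviews whose text similarity meets the threshold."""
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(reviews), 2):
        if SequenceMatcher(None, a, b).ratio() >= threshold:
            pairs.append((i, j))
    return pairs
```

Note this comparison is quadratic in the number of reviews per subsystem, which is fine at the scale described (16 subsystems with a handful of reviewed patches each).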

The full methodology, including the code used, a cached copy of the reviews, and the classification patterns and caveats, is in the analysis document in github.com/bexelbie/sashiko-analysis.

What the Data Shows

Hypothesis 2 is dead. Cross-review duplication was essentially zero. Across 16 LKML subsystems with 5+ reviewed patches each, only one pair of findings exceeded the similarity threshold - and that was the same author submitting similar patches, not the same bug recurring. Whatever is driving maintainer frustration, it’s not the same findings appearing over and over. While it is possible this would surface in a larger sample size, I personally find it unlikely.

Hypothesis 1 is partially supported, but the story is in the distribution. About 9% of findings explicitly discuss pre-existing issues. Averaged across all reviews, that’s roughly 12 words per review - barely noticeable.

But the average is misleading. The distribution is bimodal: 81% of reviews contain zero pre-existing findings. The other 19% contain pre-existing findings that constitute 28% of the review on average, adding roughly 19 lines to what the patch author reads. A few reviews are 75-82% pre-existing content.

Here’s the breakdown of what an average review with findings contains:

Category | % of findings | Avg words
About the submitted patch | 72% | 74
Patch × existing code interactions | 12% | 103
Pre-existing issues | 9% | 62
Unclassified | 8% | 47

The interaction findings (category 2) are worth calling out. They’re the longest - 103 words on average, 39% more than patch-specific findings - because explaining how new code breaks against existing behavior requires describing that behavior. These are arguably the hardest findings for a human reviewer to produce and exactly where a tool with codebase-wide context adds value.

Who Owns This Bug Now?

The sharpest question the data raises isn’t statistical. It’s social.

When you submit a patch to linux-mm and get a Sashiko review, there’s roughly a 1-in-5 chance that a meaningful chunk of that review describes a bug you didn’t write - a race, a leak, a use-after-free in the code you’re modifying. Some of these are trivial (typos in nearby comments). Some are substantive.

Either way, the review has put it in your inbox. You are now the person who has been told about it.

Morton’s position - “don’t add bugs” as Rule #1 - makes sense if the tool’s output is mostly about your patch. And it is: ~85% of findings concern either the submitted change or its direct interactions with existing code. But 1 in 5 reviewees is also getting handed someone else’s problem, with an implicit expectation to respond.

Stoakes’s concern about maintainer burden lands differently when you see the bimodal distribution. The average review is manageable. The tail is not.

What This Doesn’t Answer

This analysis classifies scope - whether a finding is about the submitted patch, its interactions, or pre-existing code. It does not measure correctness. The core Morton/Stoakes disagreement is about false positive rates within on-topic findings - how often Sashiko flags something in your patch that turns out to be wrong. That question requires domain expertise to evaluate each finding individually, and this data doesn’t go there.

The classification also has limits. The regex patterns achieve ~93% coverage but aren’t semantic - borderline cases between categories get decided by pattern specificity, not understanding. The proportions are directionally sound but not precise.

The full data, methodology, and API references are in the repository, github.com/bexelbie/sashiko-analysis if anyone wants to reproduce or extend this.

My new toy: April 1 syslog-ng performance tests

Posted by Peter Czanik on 2026-04-01 10:35:28 UTC

Almost 15 years ago, Balabit ran a campaign stating that syslog-ng could process 650k messages a second. Now I am happy to present 7 million EPS (events per second). Timing the announcement for April 1 is not a coincidence :-)

While the 650k EPS measurement was true, it was misleading. This value was measured right after syslog-ng 3.2 introduced multi-threading, in a lab environment, under optimal circumstances, using synthetic log messages. However, there was no fine print explaining this, just the statement that syslog-ng could process 650k EPS. It was fixed after a while, but it took years to recover from the effects of this marketing campaign, and engineers ten years later still had a nervous breakdown when someone mentioned “650k”. Why? Because from that moment, everyone expected syslog-ng to collect logs at that rate in a production environment with complex configurations. Which was, of course, not the case.

Fast-forward to today, I’m happy to share that:

syslog-ng can collect logs at 7 million EPS

  • Is this measurement value valid? Yes.

  • Does it apply to real world? No.

  • Does it sound good? Definitely :-)

My latest syslog-ng benchmark results

The tool: sngbench

I love playing with various non-x86 systems. I have various ARM, POWER, and MIPS systems at home, and sometimes I access other architectures, like RISC-V, remotely. And, of course, not just different architectures, but different operating systems: various Linux distributions, macOS, FreeBSD, and sometimes other BSD variants. I’m a server guy, and for the past 15+ years, a syslog-ng guy. Sometimes I had access to an exotic system on the other side of the world for less than an hour, but I almost always tested syslog-ng.

For many years I had a bunch of shell scripts and configs to benchmark syslog-ng performance. Not for real world production loads, but rather for comparing architectures and operating systems. I needed a script which could do measurements with minimal dependencies and do it quickly, in one go. This is how sngbench was born, based on my previous ugly scripts. It has quite a few advantages and shortcomings:

  • Minimal dependencies: bash and syslog-ng

  • No complex setup: everything runs on the same host

  • Network bandwidth is not a limiting factor

  • loggen and syslog-ng processes are competing for resources

  • Two bundled configurations: a performance tuned and the default syslog-ng.conf from openSUSE with minimal modifications to add a TCP source

  • By default, very short (20 seconds) measurements, so disk I/O is not a limiting factor

  • Many different test scenarios: from a single TCP connection to 4 * 128

Of course this describes just the “factory defaults”. You can easily change the test scenarios and configurations too.

How I reached 7 million EPS, and why it is not relevant

I was testing syslog-ng code which was not yet even merged to the development branch. First, I tested these patches with various settings. Along the way I remembered that Splunk guidelines mention so-rcvbuf tuning also for TCP connections. Previously I only used that for optimizing UDP performance. Now I have done it for TCP. Wonders happened :-)
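For reference, the knob in question sits on the source definition. The snippet below is a hedged sketch with made-up values, not the configuration used for these benchmarks; note also that the kernel caps this buffer at net.core.rmem_max, which must be raised for a large value to take effect.

```conf
# Illustrative only: a TCP source with an enlarged kernel receive buffer.
source s_tcp {
  network(
    transport("tcp")
    port(514)
    so-rcvbuf(67108864)  # bytes; capped by net.core.rmem_max unless raised
  );
};
```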

But, of course, the main question is: can you achieve this performance in production? TL;DR: No.

My tests are run from localhost. Network bandwidth is not an issue. Tests are run in short bursts. This is peak performance; when it comes to writing logs to files or forwarding to a cluster of Splunk or Elasticsearch endpoints around the clock, that would be slower. Also, in my fastest test case, logs came from four different loggen instances, over 32 TCP connections each, at a constant rate. In the real world, logs come in bursts and connections are opened and closed regularly.

Test environment and tests

I used my AI mini workstation with Fedora Linux 44 Beta. First, I took a baseline with the stock syslog-ng 4.11.0 included in the distribution. Then I used my syslog-ng git snapshot packages for Fedora from https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng-githead/. Initially it also had jemalloc support compiled in. Later I disabled it and focused purely on the yet-to-be-merged parallelize() optimizations from GitHub. I experimented with enabling and disabling parallelize(), adding various batch_size() values, and finally also so-rcvbuf().

AI in a miniature box :-)

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

Fedora's aarch64 images support Secure Boot

Posted by Jeremy Cline on 2026-04-01 08:51:00 UTC

About seven years ago, a ticket was filed noting aarch64 systems were shipping with Secure Boot enabled, and that Fedora should start signing its boot path to support these devices out of the box.

I’m pleased to say that today’s Fedora Rawhide images - what will be Fedora 45 - finally do this, thanks to the work of a whole bunch of people.

This means you can grab the latest Rawhide images and boot them on your favorite aarch64 laptop without turning off Secure Boot, or launch VMs in any of the major clouds with Secure Boot on. For example, I’m able to start a VM in Azure with the TrustedLaunch security type:

❯ az group create --name "jcline-aarch64-secureboot" --location "eastus2"
❯ az vm create --location eastus2 --name fedora \
  --resource-group jcline-aarch64-secureboot \
  --image /CommunityGalleries/Fedora-5e266ba4-2250-406d-adad-5d73860d958f/Images/Fedora-Cloud-Rawhide-Arm64/Versions/latest \
  --security-type TrustedLaunch \
  --size Standard_D2plds_v6 \
  --accept-term \
  --ssh-key-values @/home/jcline/.ssh/id_ed25519.pub
❯ ssh jcline@20.12.69.183
[jcline@fedora ~]$ mokutil --sb-state
SecureBoot enabled

Why Now?

The way Fedora used to sign UEFI applications for Secure Boot was delightfully simple (for some value of simple). The keys were in a smart card, plugged into a special build host, and anything that needed a signature was routed to be built on that host. pesign, one of the common utilities to sign PE applications, has a mode where it can run as a daemon and sign anything provided to it over a Unix socket. That Unix socket is threaded into the build environment, where builds can access it to sign PE applications with pesign-client.

Unfortunately, that host was x86_64 so when aarch64 started shipping with Secure Boot enabled, an alternative approach was needed.

Ultimately we moved the smart card to the signing server we use for RPMs and other things. The tricky bit about the whole process is that Fedora signs each bit of the boot chain during the build. Each time any of the UEFI applications in the boot chain is built, it needs to be signed. One way to do this is to build the application in Fedora’s infrastructure, and then have a second build which uses the output of the first build along with a signature as input to construct a signed final version. However, this means you’ve got two specfiles which you have to keep in sync, and there are probably other painful aspects I’ve not considered. In any case, that’s not what Fedora does.

Instead, Fedora signs the UEFI applications during the build. Since we want the signing key to be stored in a remote server, this implies some sort of networking, but builds aren’t permitted network access. Nor can the build environment provide the necessary secrets to authenticate with the signing service. In order to handle this, I wrote a small service that pretends to be the pesign Unix socket, and that can be exposed to the build environment in the same way. However, it just shovels anything it gets to the signing server and returns whatever the signing server does.
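The shape of that shim service can be sketched roughly like this. This is a hypothetical illustration only, not Fedora's actual code: the socket path, upstream addressing, and signing-server protocol are all stand-ins, and the real service also has to handle authentication with the signing server.

```python
# Hypothetical sketch: a service listening on a local Unix socket (standing
# in for the pesign socket) that shovels whatever it receives to a remote
# signing server and relays the reply back unchanged.
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src reaches EOF, then signal EOF."""
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        dst.sendall(chunk)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate EOF to the other side
    except OSError:
        pass

def relay(client: socket.socket, upstream: socket.socket) -> None:
    """Shovel bytes in both directions between client and signing server."""
    back = threading.Thread(target=pump, args=(upstream, client), daemon=True)
    back.start()
    pump(client, upstream)
    back.join(timeout=5)

def serve(sock_path: str, upstream_addr: tuple[str, int]) -> None:
    """Accept connections on the fake pesign socket and relay each one."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as listener:
        listener.bind(sock_path)
        listener.listen()
        while True:
            client, _ = listener.accept()
            with client, socket.create_connection(upstream_addr) as upstream:
                relay(client, upstream)
```

The point of the design is that the build environment only ever sees the familiar Unix socket; the networking and credentials live entirely on the other side of it.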

That service got deployed last week, and after a little bit of debugging it even worked. In fact, everything was signed for aarch64 last week, except for the fallback UEFI application that adds a boot entry for Fedora if it’s not there, which happens on first boot. Without that, booting new images would fail unless you explicitly added the correct Fedora boot entry manually. Yesterday, shim got rebuilt and everything works.

Stable When?

It’s possible this will eventually work in Fedora 44 Cloud images. Shim in Fedora 44 hasn’t (yet) been rebuilt and we’re in the final freeze for Fedora 44, so unfortunately we just missed it, but if it does get rebuilt later, Cloud images will be updated and will start working.

For Fedora 43 and older, the version of shim shipped doesn’t include the version signed for aarch64. I’m not sure it’s worth the risk to update it, as much as I’d like it to work there, as well.

Anyway, Fedora 45 will be upon us before you know it, and after seven years, six more months isn’t so bad, right?

Comments and Feedback

Thoughts, comments, or feedback greatly welcomed on Mastodon

Make a private CA with step-ca

Posted by Fedora Magazine on 2026-04-01 08:00:46 UTC

In this article you will learn how TLS (Transport Layer Security) and SSH (Secure SHell) use public/private key pairs to authenticate the web servers you visit and the Linux machines you log in to. You will also learn how the TLS framework installed by default in mainstream web browsers fails to prevent MITM (Man In The Middle) attacks in critical ways. Then we will walk through setting up a private .FEDORA TLD (Top Level Domain), setting up your own private CA with the smallstep package, and using the acme-tiny package to issue certificates for a website under that private TLD.

I will not cover setting up a simple “Hello World” website using your favorite web server packaged with Fedora. This needs to be up and running on HTTP to follow along. For this article, the website will be named hello.fedora.

Sadly, we will also explain how this does not completely solve the MITM problem – but this is already a big article. Click here to skip the background and motivation and go directly to the HowTo.

How Public Keys Prevent Man-In-The-Middle Attacks

While NSA director Admiral Bobby Inman revealed that intel agencies were aware of two-key, or public-key, cryptography since the 1960s, the first unclassified paper was published by Whitfield Diffie and Martin E. Hellman in 1976. In college, I remember playing with cryptosystems based on the knapsack problem. These had various vulnerabilities. What revolutionized the field was the publication of the RSA algorithm in 1977. I vividly remember where I sat in the college library when I read the paper. There was some controversy over “you can’t patent algorithms”. However, RSA patented their implementation (which is already protected by copyright – but that is another discussion). Yes, you can whip up a one-line Perl implementation in a few minutes (we all did) – but a secure implementation that does not leak the private key through various side channels is NOT trivial.

The original concept of public keys was to look up a recipient’s pubkey in a directory, and use it to encrypt a message that only the possessor of the corresponding private key can decrypt. This can also be used to authenticate a correspondent via a protocol that proves they hold the corresponding private key. The basic idea is to encrypt a random token with a pubkey; the recipient decrypts the token and sends it back encrypted with your pubkey. The details are not trivial. The primary concern is MITM attacks. SSH and TLS support several widely accepted algorithms for authentication and key exchange.

The Directory of Pubkeys is Critical

If you think about it, that “directory” is all-important. Suppose you have a “secure” phone app (without naming names) that uses a public directory to map telephone numbers to pubkeys. Whoever runs that directory can return their own pubkey (likely a different one for each telephone number), decrypt the data, and send it on, re-encrypted to the real pubkey of the intended recipient (and the same for the other direction) – the classic MITM attack. This is why such secure applications usually provide a way to verify you have the real pubkey via an in-person meeting or an alternate medium.

So how do you know the real pubkey for a secure (https) website? Websites provide a “certificate” saying “this pubkey is for these domain names” (plus other information we are not concerned with here). Well, anyone can create such a certificate – in fact we will do so in this article – so how do you know it is truthful? The certificate is “signed” by a Certificate Authority (CA). Pubkeys can be used to sign data. For RSA, the basic concept is to compute a secure “hash” (e.g. SHA256) of the certificate data and “decrypt” it using the private key of the CA. The signature can be verified by using the pubkey of the CA to “encrypt” the result, which should match the hash of the signed data. RSA is nice in that decryption and encryption are symmetrical: verifying a signature is the same operation as encrypting the signature to the owner of the privkey. So now, instead of every web user maintaining a private database of pubkeys for domain names, the browser has a list of trusted CAs which sign website certificates after verifying them in some way. In case a private key is compromised, CAs publish a Revocation List (which regular people rarely use), and TLS certificates always have an expiration date.
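
The sign-then-verify flow described above can be tried directly with openssl (a sketch with hypothetical file names; real certificate signing wraps this in X.509 structures):

```shell
# CA key pair (the pubkey is what would ship in the browser trust store)
openssl genrsa -out /tmp/ca.key 2048 2>/dev/null
openssl rsa -in /tmp/ca.key -pubout -out /tmp/ca.pub 2>/dev/null
# hash the "certificate data" and sign the hash with the CA's private key
echo "this pubkey is for hello.fedora" > /tmp/data.txt
openssl dgst -sha256 -sign /tmp/ca.key -out /tmp/data.sig /tmp/data.txt
# anyone holding only the pubkey can check the signature matches the data
openssl dgst -sha256 -verify /tmp/ca.pub -signature /tmp/data.sig /tmp/data.txt
```

The last command prints "Verified OK" on success; change one byte of /tmp/data.txt and it fails, which is exactly the property a forged certificate runs into.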

Note that CAs can certify data other than domain names, like the name of a company or individual. Commercial CAs generally charge a premium for this, but there are also non-profit CAs like cacert.org that certify personal details via in-person meetings.

How Mainstream Browsers Know Which CAs to Trust

Regular Joes (“normies”) do not keep track of all this, so where does that “list of trusted CAs” come from? There is a CA/Browser Forum with representatives from popular browser makers and commercial CAs. They maintain a list of trusted CAs, and changes are voted on in public meetings with minutes published on their web page. Fedora installs this list in /usr/share/pki. Browsers may have their own copy. Users may add additional trusted CAs to /usr/share/pki or /etc/pki/ca-trust, and browsers may have their own way of adding additional trusted CAs.

This all sounds well and good, BUT. The critical flaw could be called serial reliability: the trusted CAs are trusted for any domain, so any trusted CA (including any you add) can forge a certificate for any website. (DNS vulnerabilities such as cache poisoning are beyond the scope of this article.) We will set up a private CA which you could use to forge any website cert and fool anyone you convince to trust your CA (provided you can also hack their DNS and/or IP routing). The cabforum is very careful about their list. As part of recent hostilities, forum CAs stopped certifying .RU domains (the ISO country-code TLD for Russia). Russia promptly put up their own national CAs, which anyone can add to their browser trust store. Normies were warned NOT to do this, as the Russian CAs could then forge certs for any domain. But a moment’s thought reveals that ANY cabforum CA could go “rogue” and do the same thing. It only takes one.

There are solutions to this blanket trust problem, but that will require another article.

Create a private TLD with bind

For illustration, we will create the .FEDORA TLD. Everyone following along will create a different instance of that TLD, and hostnames under .FEDORA will resolve to different IPs (or NXDOMAIN) depending on whose DNS server you point that TLD at. This was the motivation for creating ICANN – a worldwide centralized DNS root (the list of official TLDs). This provides a consistent namespace at the expense of absolute power (to cancel domains and TLDs) vested in ICANN. Before ICANN, admins all maintained their own DNS root and periodically updated (manually or automatically) the nameservers for well-known TLDs like .COM. ISO defined an official list of TLDs, including country-code TLDs (like .US), and that worked well. The problem came with more obscure TLDs like .FREE. Companies trying to be “cool” were upset that not all customers got the same IPs for .FREE hostnames. Admins also liked having “someone else” maintain the DNS root. Hence, ICANN. There is also OpenNIC, which likewise has “someone else” (volunteers) maintain a root zone, with fallback to ICANN, and has its own “forum” (existing TLDs vote) to approve new TLDs.

Here is a bind zonefile for .FEDORA:

$TTL 2H
; hello.fedora
@ IN SOA ns1 hostadmin.hello.fedora. (
2025122600 ; serial
1H ; refresh
15M ; retry
14D ; expire
6H ; default_ttl
)
@ IN NS ns1.fedora.
@ IN TXT "v=spf1 -all"
hello IN A 192.168.100.31
ns1 IN A 192.168.100.31
ca IN A 192.168.100.31

But that was a bait and switch: setting up DNS for a private TLD is its own article. If you know how to add such a zone to your self-hosted DNS, then do so. For the rest of us, we’ll use an even older hostname/IP map that predates DNS. As root, edit the file /etc/hosts on the system you will run step-ca on and append these lines:

# smallstep article
192.168.100.31 hello.fedora
192.168.100.31 ca.fedora

Replace 192.168.100.31 with the IP of the system you are trying all this out on. Step-ca must be able to look up the hello.fedora hostname it is certifying in order to do the ACME protocol. We will use the /.well-known/acme-challenge method, which does not require real DNS. The system you run acme-tiny on also needs to look up ca.fedora.

Run a private CA with step-ca

If the smallstep package is still under review when you read this, you’ll need to enable the copr repo (otherwise skip this step):

sudo dnf copr enable @fedora-review/fedora-review-2418762-smallstep

Create root CA

First, we need to create our root CA. In production, this should be on a separate offline machine. For small operations, the secondary CAs can be automated, and you sign the certificates for these secondaries manually with the root CA. I would keep the root CA password on paper – can’t be hacked (but watch out for cameras). Do NOT skip the password for the root CA. Some number of systems will trust that CA for any domain. If the private key leaks, you end up with a situation like Dell faced in 2015.

Let’s put the manual root CA in /etc/pki/CA and generate the root cert. Openssl will ask you for a key password and for what x509 calls “subject identifiers”. I left the state and email blank, and set city to Fedora City, organization to Fedora Project, organizational unit to ca, and common name to ca.example.org. The “-days 3652” sets the expiration to 10 years from now. The second command shows the “Issuer” information end users will see when they ask for the issuer in an app like Firefox. The common name should normally be the hostname of the root CA, but it doesn’t really matter when the root CA is offline – and example.org is coincidentally offline by convention. 🙂

$ sudo mkdir /etc/pki/CA
$ cd /etc/pki/CA
$ sudo install --mode=644 /dev/stdin root_ca.fedora.ext <<EOF
subjectAltName=DNS:ca.example.org
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:1
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
EOF
$ sudo mkdir -m 0700 private
$ sudo openssl req -new -keyout private/root_ca.key -out root_ca.csr
...
$ sudo openssl x509 -req -in root_ca.csr -key private/root_ca.key -out root_ca.crt -days 3652 -sha256 -extfile root_ca.fedora.ext
Enter pass phrase for private/root_ca.key:
Certificate request self-signature ok
subject=C=US, L=Fedora City, O=Fedora Project, OU=ca, CN=ca.example.org

Create intermediate certificate and install smallstep

Then install the smallstep package with step-ca binary and supporting files:

$ sudo dnf install smallstep

The package installs a skeleton config for a step-ca service in /var/lib/step-ca. Let’s flesh out the config as step-ca and generate an intermediate cert request (“csr”).

$ cd /var/lib/step-ca
$ sudo -u step-ca bash -l
$ ls
certs config db secrets templates
$ cp /etc/pki/CA/root_ca.crt certs
$ openssl req -new -keyout secrets/intermediate_ca.key -out intermediate_ca.csr
...
$ nano config/ca.json
$ exit

Again, openssl will ask for subject identifiers. I used the same as for the root CA, but with the common name ca.fedora. Use your favorite text editor; “nano” is beginner-friendly. Change MYCABAL to FEDORA and ca.mycabal.org to ca.fedora. If you provided a password for intermediate_ca.key, put it in the “password” field of ca.json. Do not set the password in ca.json to the empty string: that will make step-ca try to prompt for it at startup, which is not allowed under systemd and fails with an error opening /dev/tty. For the intermediate cert, the common name is important. Smallstep will auto-generate a host cert for “ca.fedora” (it is, after all, a certificate authority), and it must match the hostname ACME clients use to sign certs. Now we need to sign the intermediate cert with the root CA. 1825 days is 5 years; intermediate certs should be shorter-lived than the root CA, but not too short if you are manually re-signing them.
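
For orientation, the ca.json edits described above end up in a file shaped roughly like this (an abridged, illustrative sketch – field names follow step-ca conventions, but check the file the package installed and the step-ca docs rather than copying this verbatim):

{
  "address": ":9000",
  "dnsNames": ["ca.fedora"],
  "root": "/var/lib/step-ca/certs/root_ca.crt",
  "crt": "/var/lib/step-ca/certs/intermediate_ca.crt",
  "key": "/var/lib/step-ca/secrets/intermediate_ca.key",
  "password": "intermediate-key-password-if-any",
  "authority": {
    "provisioners": [
      { "type": "ACME", "name": "FEDORA" }
    ]
  }
}

The provisioner name ("FEDORA") is what appears in the ACME directory URL used later, https://ca.fedora:9000/acme/FEDORA.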

$ cd /etc/pki/CA
$ sudo install --mode=644 /dev/stdin ca.fedora.ext << EOF
subjectAltName=DNS:ca.fedora
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
EOF
$ sudo openssl x509 -req -in /var/lib/step-ca/intermediate_ca.csr -CA root_ca.crt -CAkey private/root_ca.key -CAcreateserial -out intermediate_ca.crt -days 1825 -sha256 -extfile ca.fedora.ext
$ sudo -u step-ca cp intermediate_ca.crt /var/lib/step-ca/certs
$ sudo systemctl start step-ca
$ sudo systemctl status step-ca
...
Mar 31 15:18:56 test.gathman.org step-ca[2814912]: 2026/03/31 15:18:56 Serving HTTPS on :9000 ...

Use httpd to serve hello.fedora web page

Running a web server was a prerequisite. I’ll use Apache httpd as an example, and hopefully users of nginx and others can translate. First, /etc/httpd/conf.d/hello.conf:

<VirtualHost *:80>
ServerName hello.fedora
DocumentRoot "/var/www/html/hello"
#RedirectMatch ^((?!\/\.well-known\/).*)$ https://hello.fedora$1
<Location "/.well-known/acme-challenge/">
Options -Indexes
Require all granted
</Location>
<Location "/">
Options FollowSymLinks Indexes
Require all granted
</Location>
</VirtualHost>

The redirect is commented out until we have a signed cert. Assuming httpd is already running, use sudo apachectl graceful to load the changes. Then create a simple document in /var/www/html/hello/index.html:

<html>
<head>
<title> Hello Fedora </title>
</head>
<body>
<h1> Hello Fedora! </h1>
</body>
</html>

Use acme-tiny to sign a TLS cert with step-ca

Add private root CA

Acme-tiny needs to trust the root CA to use the ACME service. The step-ca service provides a handy API to fetch the root ca:

$ cd /etc/pki/ca-trust/source/anchors
$ sudo curl https://ca.fedora:9000/roots.pem -o fedora_ca.crt
curl: (60) SSL certificate problem: unable to get local issuer certificate

Oops! Catch-22: you need the root CA to use the handy API that gets the root CA. So we’ll have to tell curl to accept the strange root cert. (Or use rsync, cp on the same machine, copy/paste between terminal windows, or another more secure method.)

$ sudo curl -k https://ca.fedora:9000/roots.pem -o fedora_ca.crt
$ sudo update-ca-trust extract
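
If you want more assurance than a blind -k, compare the certificate fingerprint out-of-band before trusting it: run the fingerprint command on the downloaded file and compare the output with the same command run against the cert on the CA host itself. The sketch below generates a throwaway self-signed cert purely to demonstrate the command (file names are hypothetical):

```shell
# throwaway self-signed cert, just so the fingerprint command has input
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=ca.fedora" \
    -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null
# print the SHA-256 fingerprint; compare this string over a trusted channel
openssl x509 -in /tmp/demo.crt -noout -fingerprint -sha256
```

Two certs with the same SHA-256 fingerprint are the same cert, so a match over the phone or a separate SSH session rules out a MITM on the curl download.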

Now, we are ready to run acme-tiny. Once again, openssl req will prompt for subject identifiers. The only one browsers care about is Common Name, which should be “hello.fedora”. However, users may care about the other fields when they use browser features to inspect certs.

$ sudo dnf install acme-tiny
$ sudo apachectl graceful
$ cd /var/lib/acme
$ sudo -u acme bash -l
$ ls
certs csr private
$ /usr/libexec/acme-tiny/sign # NOTE: generates account.key if needed
$ ls private
account.key
$ openssl req -new -passout pass:'' -keyout private/hello.key -out csr/hello.csr
$ /usr/sbin/acme_tiny --account-key private/account.key --csr csr/hello.csr --acme-dir /var/www/challenges/ --ca https://ca.fedora:9000/acme/FEDORA >certs/hello.crt
$ exit
$ sudo nano /etc/httpd/conf.d/hello.conf

Now uncomment the RedirectMatch and append the SSL virtual host definition below to hello.conf. Use apachectl graceful to load the changes.

<VirtualHost *:443>
ServerName hello.fedora:443
SSLEngine on
SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite HIGH:3DES:!aNULL:!MD5:!SEED:!IDEA
DocumentRoot "/var/www/html/hello"
SSLCertificateFile /var/lib/acme/certs/hello.crt
SSLCACertificateFile /var/lib/acme/certs/hello.crt
SSLCertificateKeyFile /var/lib/acme/private/hello.key
CustomLog logs/ssl_request_log \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
<Location "/">
Options FollowSymLinks Indexes
</Location>
</VirtualHost>

The current acme-tiny package auto-renews certs only for the letsencrypt.org CA. That should be extended soon. Meanwhile, feel free to add something hacky. (I’ll try to have it look up TLDs in /etc/sysconfig or something to get a custom CA URL.)
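
One hacky stand-in could be a small monthly cron script that re-runs the signing command from above and reloads Apache. This is entirely illustrative (the path /etc/cron.monthly/renew-hello is hypothetical, and the paths match the earlier commands); it is written out to a file here rather than executed, since it needs the live step-ca and httpd services:

```shell
# sketch of a hypothetical renewal cron script; written to /tmp for inspection
cat > /tmp/renew-hello <<'EOF'
#!/bin/sh
/usr/sbin/acme_tiny --account-key /var/lib/acme/private/account.key \
    --csr /var/lib/acme/csr/hello.csr --acme-dir /var/www/challenges/ \
    --ca https://ca.fedora:9000/acme/FEDORA > /var/lib/acme/certs/hello.crt.new \
  && mv /var/lib/acme/certs/hello.crt.new /var/lib/acme/certs/hello.crt \
  && apachectl graceful
EOF
sh -n /tmp/renew-hello   # syntax check only; install as a root cron job yourself
```

Writing to hello.crt.new first means a failed renewal leaves the old (still valid) cert in place.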

Use a browser to display the web page

On the machine with your web browser, you need two things: the new root CA, and some way to look up names in the .FEDORA TLD – either by pointing DNS at the server you set up with the private zone, or by appending the lines above to /etc/hosts for ca.fedora and hello.fedora.

Now curl should work without -k, and your browser should display https://hello.fedora, although you might have to restart it. If it doesn’t read the Fedora ca-trust store on startup, you might need to find an option in the browser menu to import the CA.

$ curl https://hello.fedora
<html>
<head>
<title> Hello Fedora </title>
</head>
<body>
<h1> Hello Fedora! </h1>
</body>
</html>

Now that your root CA is up and running, don’t lose sight of what can be done by having it go rogue. Get lots of people to install it so they can access your cool new TLDs. Then start forging certs for arbitrary web sites, and conquer the world!! Bwa! ha! ha! (A future article can address PKCS#11 and restricting how you trust CAs in browsers and other software.)

Self hosting as much of my online presence as practical

Posted by Matthew Garrett on 2026-04-01 02:35:43 UTC

Because I am bad at giving up on things, I’ve been running my own email server for over 20 years. Some of that time it’s been a PC at the end of a DSL line, some of that time it’s been a Mac Mini in a data centre, and some of that time it’s been a hosted VM. Last year I decided to bring it in house, and since then I’ve been gradually consolidating as much of the rest of my online presence as possible on it. I mentioned this on Mastodon and a couple of people asked for more details, so here we are.

First: my ISP doesn’t guarantee a static IPv4 unless I’m on a business plan and that seems like it’d cost a bunch more, so I’m doing what I described here: running a Wireguard link between a box that sits in a cupboard in my living room and the smallest OVH instance I can, with an additional IP address allocated to the VM and NATted over the VPN link. The practical outcome of this is that my home IP address is irrelevant and can change as much as it wants - my DNS points at the OVH IP, and traffic to that all ends up hitting my server.

The server itself is pretty uninteresting. It’s a refurbished HP EliteDesk which idles at 10W or so, along with 2TB of NVMe and 32GB of RAM that I found under a pile of laptops in my office. We’re not talking rackmount Xeon levels of performance, but it’s entirely adequate for everything I’m doing here.

So. Let’s talk about the services I’m hosting.

Web

This one’s trivial. I’m not really hosting much of a website right now, but what there is is served via Apache with a Let’s Encrypt certificate. Nothing interesting at all here, other than the proxying that’s going to be relevant later.

Email

Inbound email is easy enough. I’m running Postfix with a pretty stock configuration, and my MX records point at me. The same Let’s Encrypt certificate is there for TLS delivery. I’m using Dovecot as an IMAP server (again with the same cert). You can find plenty of guides on setting this up.

Outbound email? That’s harder. I’m on a residential IP address, so if I send email directly nobody’s going to deliver it. Going via my OVH address isn’t going to be a lot better. I have a Google Workspace, so in the end I just made use of Google’s SMTP relay service. There are various commercial alternatives available; I just chose this one because it didn’t cost me anything more than I’m already paying.

Blog

My blog is largely static content generated by Hugo. Comments are Remark42 running in a Docker container. If you don’t want to handle even that level of dynamic content you can use a third party comment provider like Disqus.

Mastodon

I’m deploying Mastodon pretty much along the lines of the upstream compose file. Apache is proxying /api/v1/streaming to the websocket provided by the streaming container and / to the actual Mastodon service. The only thing I tripped over for a while was the need to set the “X-Forwarded-Proto” header since otherwise you get stuck in a redirect loop of Mastodon receiving a request over http (because TLS termination is being done by the Apache proxy) and redirecting to https, except that’s where we just came from.
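
For reference, the fix amounts to a single header directive in the TLS-terminating vhost. The fragment below is a sketch, not my exact config – the hostname is a placeholder, the backend ports (3000 for web, 4000 for streaming) are the upstream compose-file defaults, and it assumes mod_proxy, mod_proxy_http, mod_proxy_wstunnel, and mod_headers are loaded:

<VirtualHost *:443>
    ServerName mastodon.example.com
    SSLEngine on
    # ... certificate directives ...
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPass /api/v1/streaming ws://127.0.0.1:4000/api/v1/streaming
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>

With X-Forwarded-Proto set, Mastodon sees the original request as https and stops issuing the redirect that caused the loop.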

Mastodon is easily the heaviest part of all of this, using around 5GB of RAM and 60GB of disk for an instance with 3 users. This is more a point of principle than an especially good idea.

Bluesky

I’m arguably cheating here. Bluesky’s federation model is quite different to Mastodon - while running a Mastodon service implies running the webview and other infrastructure associated with it, Bluesky has split that into multiple parts. User data is stored on Personal Data Servers, then aggregated from those by Relays, and then displayed on Appviews. Third parties can run any of these, but a user’s actual posts are stored on a PDS. There are various reasons to run the others, for instance to implement alternative moderation policies, but if all you want is to ensure that you have control over your data, running a PDS is sufficient. I followed these instructions, other than using Apache as the frontend proxy rather than nginx, and it’s all been working fine since then. In terms of ensuring that my data remains under my control, it’s sufficient.

Backups

I’m using borgmatic, backing up to a local Synology NAS and also to my parents’ home (where I have another HP EliteDesk set up with an equivalent OVH IPv4 fronting setup). At some point I’ll check that I’m actually able to restore them.

Conclusion

Most of what I post is now stored on a system that’s happily living under a TV, but is available to the rest of the world just as visibly as if I used a hosted provider. Is this necessary? No. Does it improve my life? In no practical way. Does it generate additional complexity? Absolutely. Should you do it? Oh good heavens no. But you can, and once it’s working it largely just keeps working, and there’s a certain sense of comfort in knowing that my online presence is carefully contained in a small box making a gentle whirring noise.

Fedora Code of Conduct Report 2023

Posted by Fedora Community Blog on 2026-03-31 12:04:28 UTC

The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project’s social fabric.

This post covers reports received during the 2023 calendar year. The 2023 and 2024 annual reports are being published late due to changes in membership of the Code of Conduct Committee and rebalancing of existing work. The purpose of publishing them now is to provide transparency, insight, and awareness into the health signs of the community.

How’d it go in 2023

Reflecting on the 17 reports opened in 2023, the Fedora community saw a shift in the incident landscape compared to 2022. While the total number of reports decreased by approximately 19% (17 in 2023 vs. 21 in 2022), the severity of actions taken suggests a year focused on addressing persistent friction and high-impact behavioral issues.

The most notable trend in 2023 was the departure from the “zero-ban” status of 2022. The Committee moved toward more decisive actions, including a permanent account closure for a slur and a suspension for aggressive ban evasion, indicating a lower tolerance for behavior that directly threatens the safety and inclusivity of the community.

Year | Reports Opened | Reports Closed | Warnings Issued | Moderations Issued | Suspensions Issued | Bans Issued
2023 | 17 | 17 | 5 | 3 | 1 | 1
2022 | 21 | 24 | 6 | 3 | 0 | 0
2021 | 23 | 24 | 2 | 1 | 0 | 1
2020 | 20 | 16 | 8 | 4 | 2 | 0

While the volume of reports from 2020 to 2022 stayed relatively stable at around 20 a year, there is a wide range in the level of severity across the cases investigated by the Code of Conduct Committee. Some persistent challenges continue to underline the importance of soft skills like communication and collaboration. World affairs, politics, and international conflicts also often correlate with conflicts inside the community, and these cases often require more care and consideration than other reports.

Overall, the report shows a community safe enough for people to report incidents, including those involving high-profile members. The Code of Conduct Committee aims to humbly protect an environment and community culture where anyone can feel a part of the Friends Foundation of the Four Foundations, and feel safe to be their authentic and genuine self in the community. This is also the first time in several years that the number of reports dropped below 20. That might be a sign of stabilization, as years of backlog and process debt were fixed in 2021, and the intense online pressure-cooker period of the global pandemic finally relented.

Looking forward to 2024

If you witness or are part of a situation that violates Fedora’s Code of Conduct, please open a private report on the Code of Conduct repo or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.

Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn’t okay, and you don’t want to make a big deal… open that report anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.

Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day-to-day in our community. Keep it up, and keep being awesome Fedora, we <3 you!

About the Committee

Fedora Project’s Code of Conduct and reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Matthew Miller; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community nominated members.

The post Fedora Code of Conduct Report 2023 appeared first on Fedora Community Blog.

My new toy: Back to high-end audio

Posted by Peter Czanik on 2026-03-31 08:12:38 UTC

My AI mini workstation from HP has seen some non-AI workloads this weekend. I installed Capture One for photo editing and a couple of software synthesizers. And realized along the way that while built-in speakers are nice, high-end audio is a lot better! :-)

For months, I have been listening to music on devices that are designed for speech: a pair of Jabra headphones and the speakers of my various laptops. There were many reasons for this, including peer pressure and some hearing loss at a way-too-loud concert. I was also too lazy to use my high-end devices and tried to persuade myself that audio equipment designed for meetings is good enough for music too. Well…

This weekend, I installed various software synthesizers on my new computer. Not that I ever studied music or can play any instrument, but I still enjoy experimenting with music (well, with noise, actually :-) ). As I connected the machine to the big screen in the living room, I also connected it to my HiFi system. Suddenly, I realized how much better it sounds than my laptop or anything I’ve listened to in the past few months.

While making noise with a couple of software synths and listening to music from my TIDAL subscription, I also recharged my Focal headphones. My Focal Bathys is not as good as my HiFi, but it still sounds wonderful regardless.

So I guess that after a few months long detour, I am back to using high-end audio gear whenever it is technically possible. I love the extra detail I can hear on my Heed Enigma speakers or on my Focal headphones. Of course, nothing can replace listening to live music at concerts, but high-end gear is much better at approximating the vibe of various live events than anything below it.

AI in a miniature box :-)

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

OSPO Notes: Open Source Governance — Who Decides, and How

Posted by Chris Short on 2026-03-31 04:00:00 UTC
Open source governance answers one question: who decides, and how? A plain-language guide to every major model — and why mature projects layer several at once.

On the value of an automation platform

Posted by Fabio Alessandro Locati on 2026-03-31 00:00:00 UTC

Over the last 20+ years in IT, I’ve seen automation evolve from a nice-to-have to a non-negotiable part of how organizations operate. Every company I’ve worked with has some form of automation. The problem is that, in many cases, what they have is not an automation strategy but rather a collection of individual scripts, cron jobs, and one-off solutions that were built to solve immediate problems. This is what I usually call ad-hoc automation or point automation, and while it works in the short term, it creates significant issues over time. This does not happen only in small or particularly tech-adverse companies; it is a very common situation across sectors and sizes.

Loadouts For Genshin Impact v0.1.15 Released

Posted by Akashdeep Dhar on 2026-03-29 18:30:27 UTC

Hello travelers!

Loadouts for Genshin Impact v0.1.15 is OUT NOW with the addition of support for recently released characters like Varka and for recently released weapons like Gest of the Mighty Wolf from Genshin Impact Luna V or v6.4 Phase 2. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.

Resources

Installation

Besides its availability as a repository package on PyPI and as an archived binary on PyInstaller, Loadouts for Genshin Impact is now available as an installable package on Fedora Linux. Travelers using Fedora Linux 42 and above can install the package on their operating system by executing the following command.

$ sudo dnf install gi-loadouts --assumeyes --setopt=install_weak_deps=False

Installation command for Fedora Linux

Changelog

  • Automated dependency updates for GI Loadouts by @renovate[bot] in #507
  • chore(deps): update actions/upload-artifact action to v7 by @renovate[bot] in #508
  • Add the recently added character Varka to the GI Loadouts roster by @sdglitched in #511
  • Add the recently added weapon Gest of the Mighty Wolf to the GI Loadouts roster by @sdglitched in #512
  • Stage the release v0.1.15 for Genshin Impact Luna V (v6.4 Phase 1) by @sdglitched in #513
  • Include gi_loadouts/pack directory in the package by @gridhead in #514
  • Update the latest test count in README.md by @gridhead in #515

Characters

One character has debuted in this version release.

Varka

Varka is a claymore-wielding Anemo character of five-star quality.

Weapons

One weapon has debuted in this version release.

Gest of the Mighty Wolf - Workspace

Appeal

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.

Disclaimer

With an extensive suite of over 1550 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.

The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.

All rights to Genshin Impact assets used in this project are reserved by MiHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.

Converting Dovecot password schemes on the fly without (too much) cursing

Posted by Evgeni Golov on 2026-03-28 22:11:57 UTC

I finally upgraded my mail server to Debian 13 and, as expected, the Dovecot part was quite a ride.

The configuration syntax changed between Dovecot 2.3 (Debian 12) and Dovecot 2.4 (Debian 13), so I started by diffing my configuration against a vanilla Debian 12 one (this setup is slightly old) and then applied the same (logical) changes to a vanilla Debian 13 one. This mostly went well. Mostly, because my user database is stored in SQL, and while the Dovecot Configuration Upgrader says it can convert old dovecot-auth-sql.conf.ext files to the new syntax, it only does so for the structure, not the SQL queries themselves. While I don't expect it to be able to parse the queries and adapt them correctly, at least a hint that the field names in userdb changed and might require adjustment would've been cool.

Once I got that all sorted, Dovecot would still refuse to let me in:

Error: sql: Invalid password in passdb: Weak password scheme 'MD5-CRYPT' used and refused

Yeah, right. Did I mention that this setup is old?

The quick cure for this is an auth_allow_weak_schemes = yes in /etc/dovecot/conf.d/10-auth.conf, but long term I really should upgrade the password hashes in the database to something more modern.

And this is what this post is about.

My database only contains hashed (and salted) passwords, so I can't just update them without changing the password. And while there are only 9 users in total, I wanted to play nice and professional. (LOL)

There is a Converting Password Schemes howto in the Dovecot documentation, but it uses a rather odd looking PHP script, wrapped in a shell script which leaks the plaintext password to the process list, and I really didn't want to remember how to write PHP to complete this task.

Luckily, I know Python.

The general idea is:

  • As we're using plaintext authentication (auth_mechanisms = plain login), the plaintext password is available during login.
  • After Dovecot's imap-login has verified the password against the old (insecure) hash in the database, we can execute a post-login script, which will connect to the database and update it with a new hash of the plaintext password.

To make the plaintext password available to the post-login script, we add '%{password}' as userdb_plain_pass to the SELECT statement of our passdb query. The original howto also says to add a prefetch userdb, which we do. The sql userdb remains, as otherwise Postfix can't use Dovecot to deliver mail.
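
Spelled out as a config sketch (a hypothetical 2.4-style passdb query; table and column names here match the script below, and the exact variable syntax should be double-checked against the Dovecot docs):

```
# hypothetical Dovecot 2.4-style SQL passdb query; '%{password}' makes the
# plaintext password reach the post-login script as PLAIN_PASS via userdb
query = SELECT username, password_enc AS password, \
        '%{password}' AS userdb_plain_pass \
        FROM mail_users WHERE username = '%{user}'
```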

Now comes the interesting part. We need to write a script that is executed by Dovecot's script-login and that will update the database for us. Thanks to Python's passlib and mysqlclient, the database and hashing parts are relatively straightforward:

#!/usr/bin/env python3

import os

import MySQLdb
import passlib.hash

DB_SETTINGS = {"host": "127.0.0.1", "user": "user", "password": "password", "database": "mail"}
SELECT_QUERY = "SELECT password_enc FROM mail_users WHERE username=%(username)s"
UPDATE_QUERY = "UPDATE mail_users SET password_enc=%(pwhash)s WHERE username=%(username)s"

SCHEME = "bcrypt"
EXPECTED_PREFIX = "$2b$"


def main():
    # https://doc.dovecot.org/2.4.3/core/config/post_login_scripting.html
    # https://doc.dovecot.org/2.4.3/howto/convert_password_schemes.html
    user = os.environ.get("USER")

    plain_pass = os.environ.get("PLAIN_PASS")
    if plain_pass is not None:
        db = MySQLdb.connect(**DB_SETTINGS)
        cursor = db.cursor()
        cursor.execute(SELECT_QUERY, {"username": user})
        result = cursor.fetchone()
        current_pwhash = result[0]

        if not current_pwhash.startswith(EXPECTED_PREFIX):
            hash_module = getattr(passlib.hash, SCHEME)
            pwhash = hash_module.hash(plain_pass)
            data = {"pwhash": pwhash, "username": user}
            cursor.execute(UPDATE_QUERY, data)
            db.commit()  # persist the UPDATE; MySQLdb does not autocommit by default
        cursor.close()
        db.close()


if __name__ == "__main__":
    main()

But if we add that as executable = script-login /etc/dovecot/dpsu.py to our imap-postlogin service, as the howto suggests, the users won't be able to log in anymore:

Error: Post-login script denied access to user

WAT?

Remember that shell script I wanted to avoid? It ends with exec "$@".

Turns out the script-login "API" is rather interesting. It's not "pass in a list of scripts to call and I'll call all of them". It's "pass a list of scripts, I'll execv the first item and pass the rest as args, and every item is expected to execv the next one again". 🤯

With that (cursed) knowledge, the script becomes:

#!/usr/bin/env python3

import os
import sys

import MySQLdb
import passlib.hash

DB_SETTINGS = {"host": "127.0.0.1", "user": "user", "password": "password", "database": "mail"}
SELECT_QUERY = "SELECT password_enc FROM mail_users WHERE username=%(username)s"
UPDATE_QUERY = "UPDATE mail_users SET password_enc=%(pwhash)s WHERE username=%(username)s"

SCHEME = "bcrypt"
EXPECTED_PREFIX = "$2b$"


def main():
    # https://doc.dovecot.org/2.4.3/core/config/post_login_scripting.html
    # https://doc.dovecot.org/2.4.3/howto/convert_password_schemes.html
    user = os.environ.get("USER")

    plain_pass = os.environ.get("PLAIN_PASS")
    if plain_pass is not None:
        db = MySQLdb.connect(**DB_SETTINGS)
        cursor = db.cursor()
        cursor.execute(SELECT_QUERY, {"username": user})
        result = cursor.fetchone()
        current_pwhash = result[0]

        if not current_pwhash.startswith(EXPECTED_PREFIX):
            hash_module = getattr(passlib.hash, SCHEME)
            pwhash = hash_module.hash(plain_pass)
            data = {"pwhash": pwhash, "username": user}
            cursor.execute(UPDATE_QUERY, data)
            db.commit()  # persist the UPDATE; MySQLdb does not autocommit by default
        cursor.close()
        db.close()

    os.execv(sys.argv[1], sys.argv[1:])


if __name__ == "__main__":
    main()

And the passwords are getting gradually updated as the users log in. Once all are updated, we can remove the post-login script and drop the auth_allow_weak_schemes = yes.
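
Once logins have trickled in, the same prefix test the script relies on can tell whether any rows still carry a legacy hash; a stdlib-only sketch with made-up hash values:

```python
# the script's upgrade condition in isolation: a stored hash needs
# converting when it lacks the target scheme's prefix ("$2b$" for bcrypt)
def needs_upgrade(pwhash: str, expected_prefix: str = "$2b$") -> bool:
    return not pwhash.startswith(expected_prefix)

print(needs_upgrade("$1$salt$legacymd5crypt"))  # MD5-CRYPT row → True
print(needs_upgrade("$2b$12$" + "a" * 53))      # converted row → False
```

Run against the password_enc column from a one-off SELECT, this gives a quick count of users who have not logged in since the conversion started.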

Switching email providers, again

Posted by Vít Smolík on 2026-03-28 20:00:13 UTC

Renewal time. That’s usually when I start questioning my life choices, at least the email-related ones.

I’ve been a Tuta user for almost a year now. However, there are some things that have been bugging me, and since I have to decide soon anyway, I think now is a good time to look back and reassess.

Background

Before we dive in, some context - I switched from Gmail to Tuta in April of 2025. At the time, I was choosing between Proton, Tuta, Fastmail, Mailfence and Zoho. Some (like Zoho and Fastmail) aren’t hosted in countries that respect my privacy, so I crossed them off. In the end, it came down to Tuta and Proton (and I ultimately chose Tuta).

misc fedora bits last week of march 2026

Posted by Kevin Fenzi on 2026-03-28 16:51:28 UTC
Scrye into the crystal ball

secure boot signing

Last week we finally got the new secure boot setup fully switched over. We are now signing aarch64 grub2/kernel/fwupd just as we do the x86_64 versions. The aarch64 signed artifacts are in rawhide now, but will move to stable releases as testing permits.

Sadly, my Lenovo slim7x doesn't boot correctly with the signed artifacts, I think due to needing a firmware update or manually enrolling the Microsoft certs. I'll try and test more with it when I can, but many other folks are seeing it work fine.

It's been a 7 year journey to get this done. Why so long? A few of the reasons in no particular order:

  • At first we were not even sure MS would sign others on aarch64

  • Our old x86_64 setup was smart cards in 2 builders, and we didn't have any easy way to install more in aarch64 builders.

  • They stopped making the smart cards we were using.

  • There were a number of things that made the fedora aarch64 kernel not work with secure boot. Many around the 'lockdown' patches.

  • Lack of time from everyone involved.

  • Need for someone to write a way to use our normal signing server to sign these things (so we wouldn't need cards in builders).

  • Lack of capacity in old smart cards to add new certs.

And probably many more things I have forgotten about.

Feels great to get us in a better place and have signed aarch64 builds!

mass update/reboots

We had a mass update/reboot cycle this last week. It went pretty smoothly this time as we were not applying firmware updates or doing any other work.

We should be all caught up for the freeze next week....

final freeze coming up

Next Tuesday starts the Fedora 44 Final freeze. These are the weeks running up to the Fedora Linux 44 final release. So, if you need to get anything in, do so before Tuesday.

solar fun

So, the reason I was offline Thursday was that I was getting solar panels, a battery, and an inverter installed here. It's already pretty awesome. Look for a long blog post on it next week or so.

whats next?

During this freeze I am hoping to get started on some projects I was meaning to do already, but got busy with the signing stuff: revamping our backups and moving more stuff to rhel10 (will do staging in freeze).

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116308267360944066

🎲 PHP version 8.4.20RC1 and 8.5.5RC1

Posted by Remi Collet on 2026-03-27 06:38:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.5.5RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.4.20RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.5.4RC1 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.1 and EPEL-10.1
  • EL-9 packages are built using RHEL-9.7 and EPEL-9
  • EL-8 packages are built using RHEL-8.10 and EPEL-8
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC version is usually the same as the final version (no change accepted after RC, exception for security fix).
  • versions 8.4.19 and 8.5.4 are planned for March 12th, in 2 weeks.

Software Collections (php84, php85)

Base packages (php)

TOML ↔ YAML Converter

Posted by Chris Short on 2026-03-27 04:00:00 UTC
Convert between TOML and YAML configuration formats directly in your browser. No data is sent to any server.

Friday Links 26-11

Posted by Christof Damian on 2026-03-26 23:00:00 UTC

Two great podcasts this week, with the creator of Claude Code, and another one with Werner Herzog.

And check out the links from other people, they are much better than mine.

Sorry about destroying your productivity with the 100 jumps game.

New addition to these posts:

Quote of the Week
A profusion of ideas spewed forth from his mind. There was no such thing as a bad idea, apparently. But, perhaps more important, there was no such thing as a good idea either, until it had been tried and coolly evaluated.
Reamde
Neal Stephenson

Version 7 of my journal: complete overhaul and a new technical foundation

Posted by Guillaume Kulakowski on 2026-03-26 18:54:00 UTC
You may have noticed that the blog has just reached its 7th iteration in 22 years of existence. I had grown a bit tired of my old widget-based theme, which I found difficult to evolve. Its structure rested on an aging and particularly heavy Timber / webpack base. And speaking of heaviness, the interface […]

My new toy: Open WebUI first steps

Posted by Peter Czanik on 2026-03-26 12:42:45 UTC

Once I got hardware-accelerated AI working under Linux on my AI mini workstation from HP, my next goal was to make it easier to use. From this blog, you can read about my initial experiments with Open WebUI on Fedora Linux.

Open WebUI talking about central log collection :-)

Everything in containers

As Open WebUI is not yet available as a package in Fedora, my initial approach was to use containers. I found a Docker compose setup which was tested on Fedora Linux 43 according to its documentation: https://github.com/jesuswasrasta/ollama-rocm-webui-docker. As I (also) use Fedora 43, it sounded like a good choice.

It worked; however, I quickly realized that hardware acceleration for AI was not in use. Instead, most CPU cores were running close to 100%. It was a good test for the cooling: I could hear the miniature box from the next room through closed doors :-)

ollama eating CPU :-)

As it turned out, the content of the HSA_OVERRIDE_GFX_VERSION environment variable was incorrect. When I set it according to the docs, hardware acceleration still did not work. After removing the environment variable, ollama found the hardware, but never answered a prompt again.
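
For reference, the override is just an environment variable for the ollama process; the value below is an assumption for a gfx1151 APU (following the usual gfx1151 → 11.5.1 mapping) and must match the ISA that rocminfo reports:

```shell
# hypothetical override for a gfx1151 (Radeon 8060S) APU; a wrong value
# is a common cause of ROCm falling back to CPU or hanging
export HSA_OVERRIDE_GFX_VERSION=11.5.1
ollama serve
```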

Ollama from the system

My next experiment: I kept using Open WebUI from the container, but installed ollama from the Fedora package repository directly on the system. The good news? Some smaller models ran really fast, using hardware acceleration. The bad news: most models failed to load with an error message saying the given model format is unknown.

Update to Fedora 44 beta

I guessed that ollama was too old in Fedora 43. Solution? Update the whole system to Fedora 44 beta. It seems to have helped. A lot more models work now, including the largest freely available Granite models from IBM.

Why Granite?

First of all: I’m an IBM Champion, so using IBM technologies is a given. But I also learned some background stories from a friend working at IBM on LSF, which makes it a personal choice as well.

What I’ve been showing here is AI inferencing on my HP AI system. But before the model can be used (for inferencing), it needs to be trained. These models are trained on large, GPU-rich compute clusters. To get an idea of the scale of such clusters, you can learn more in this research paper (https://arxiv.org/abs/2407.05467). It discusses the IBM Blue Vela system, which supports IBM’s GenAI mission. What’s interesting is that Blue Vela uses a more traditional HPC software stack, including IBM LSF for workload management and Storage Scale (GPFS) for rapid access to large data sets.

AI in a miniature box :-)

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

Matrix server maintenance

Posted by Fedora Infrastructure Status on 2026-03-26 11:50:00 UTC

Element Matrix Services is performing scheduled maintenance on our matrix server (fedora.im).

Affected Services:

  • chat.fedoraproject.org
  • fedora.im
  • matrix services

Setting up SSO on all of my self-hosted services

Posted by Vít Smolík on 2026-03-26 07:00:00 UTC

Recently, my mom came up to me; she had forgotten her Nextcloud password again. It’s not a problem: just jump into the admin user and reset the password. But that got me thinking: would it be possible to set up SSO, so everyone would have just one account and password for all of my self-hosted services?

Motivation

Apart from the already mentioned password problem, there are a few more things.

Updates and reboots on Fedora infrastructure

Posted by Fedora Infrastructure Status on 2026-03-25 22:00:00 UTC

Fedora Infrastructure team will be applying updates to servers and rebooting them.

Many services will be affected. Most should only be down for a short time as their particular resources are rebooted; HOWEVER, some may be down for a non-trivial amount of time due to RHEL-9 to RHEL-10 upgrades.

Compiling syslog-ng on an old Mac

Posted by Peter Czanik on 2026-03-25 14:43:49 UTC

I have an aging but fully functional MacBook. I bought it for syslog-ng testing, but I also use it for watching movies. Homebrew no longer fully supports old, Intel-based Macs. This blog helps you compile the latest syslog-ng release on these old but otherwise functional machines.

Read more at https://www.syslog-ng.com/community/b/blog/posts/compiling-syslog-ng-on-an-old-mac

syslog-ng logo

My new toy: first steps with AI on Linux

Posted by Peter Czanik on 2026-03-25 11:48:34 UTC

Ever since I bought my AI mini workstation from HP, my goal was to run hardware accelerated artificial intelligence workloads in a Linux environment. Read more to learn how things turned out on Ubuntu and Fedora!

I have been using various AI tools for a while now: generating pictures of impossible situations, like a dinosaur climbing the Hungarian parliament building, finding information where a simple web search is useless, or having syslog-ng code explained to me. All these are nice, sometimes even useful; however, I prefer to know what is behind the magic. Well, at least part of it :-) I want to get a bottom-up view of various components and processes, and to get my hands dirty. Hopefully this miniature but powerful box will help me get to know AI better.

AI in a miniature box :-)

Testing AI on Ubuntu

As mentioned in my installing Ubuntu blog, the 24.04 LTS installer did not work on this machine. I found a nice tutorial about AI on the Ryzen AI Max+ 395 which mentioned using 25.10, so I installed that version instead of the LTS. It installed without any troubles, 3D graphics worked out of the box.

However, AI is a different story. ROCm, hardware acceleration for AI workloads on AMD chips, is only packaged for Ubuntu LTS releases. The workaround described in the tutorial was to use distrobox. Unfortunately, the steps described in the tutorial did not work. Containerization brought in various problems with permissions, software availability, and so on. Most likely an experienced distrobox user could resolve these. In my case, after reading the distrobox documentation for hours, I just gave up.

Getting started with hardware accelerated AI on Fedora

Next, I turned to Fedora Linux 43. The wiki page of the Fedora Heterogeneous Computing Special Interest Group proved to be a good starting point. Fedora has ROCm packaged as part of the distro, and the wiki page gives clear instructions on how to get started.

Once I set up user rights and installed the necessary packages, I was able to get some info about my hardware. You can see the output of rocminfo and rocm-clinfo at the bottom of this blog. I did not want to shorten those, but given the many lines of output, I was not sure if anyone would read the rest of my blog :-)

Testing with llama

Of course, seeing info about the hardware is nice, but it’s even better to see it in action. The Ubuntu ROCm tutorial mentioned llama, so I started with that. Luckily, Fedora includes it as a ready-to-install package, so I did not have to compile it from source. I also installed huggingface-hub, also from a package:

dnf install python3-huggingface-hub llama-cpp

This allowed me to download the model mentioned in the tutorial and ask the downloaded LLM a few questions. For now I just used the sample command line, but based on the output, llama found the hardware and used it. Next up: learning more about the available models.

You can find the output of the following command at the end of this blog:

llama-cli   -m ~/models/llama-2-7b.Q4_K_M.gguf   --no-mmap   -ngl 99   -p "Explain quantum computing in simple terms:"   -n 256

Testing with pytorch

When I mentioned to a friend that hardware-accelerated AI seemed to work on my Linux box, he suggested trying it with PyTorch. Luckily, this was available as a ready-to-install package for Fedora as well:

dnf install python3-torch

I was quite a bit surprised, as the above command installed 8 GB worth of RPM packages (texlive accounting for a good part of it). I do not know much about PyTorch, but did a quick test anyway. Here is the really complex Python code I built based on the documentation:

import torch
x = torch.rand(5, 3)
print(x)
print('Is hw AI accel available')
print(torch.cuda.is_available())

And here is the output from the above code:

tensor([[0.1034, 0.0183, 0.1233],
        [0.1787, 0.0097, 0.8426],
        [0.2872, 0.6351, 0.8468],
        [0.8226, 0.2991, 0.8539],
        [0.2061, 0.6422, 0.8146]])
Is hw AI accel available
True

It’s simple, but looks promising :-)
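
A slightly stronger check than is_available() is to actually run an operation on the device; a small sketch, assuming the same ROCm-enabled python3-torch build (it falls back to CPU when no accelerator is found):

```python
import torch

# ROCm builds of PyTorch expose the AMD GPU through the "cuda" device
# name, so the usual CUDA-style device selection works unchanged
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.rand(5, 3, device=device)
y = x @ x.T  # small matmul executed on the chosen device
print(device, tuple(y.shape))  # shape is (5, 5) either way
```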

Outputs

Output of rocminfo and rocm-clinfo

czanik@fedora:~$ rocminfo 
ROCk module is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.1
Runtime Ext Version:     1.7
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
XNACK enabled:           NO
DMAbuf Support:          YES
VMM Support:             YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
  Cache Info:              
    L1:                      49152(0xc000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   5187                               
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            32                                 
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:1                                  
  Memory Properties:       
  Features:                None
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 4                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx1151                            
  Uuid:                    GPU-XX                             
  Marketing Name:          Radeon 8060S Graphics              
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      32(0x20) KB                        
    L2:                      2048(0x800) KB                     
    L3:                      32768(0x8000) KB                   
  Chip ID:                 5510(0x1586)                       
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          128(0x80)                          
  Max Clock Freq. (MHz):   2900                               
  BDFID:                   50432                              
  Internal Node ID:        1                                  
  Compute Unit:            40                                 
  SIMDs per CU:            2                                  
  Shader Engines:          2                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Memory Properties:       APU
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 34                                 
  SDMA engine uCode::      18                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65568416(0x3e87ea0) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    65568416(0x3e87ea0) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1151         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
    ISA 2                    
      Name:                    amdgcn-amd-amdhsa--gfx11-generic   
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*******                  
Agent 3                  
*******                  
  Name:                    aie2                               
  Uuid:                    AIE-XX                             
  Marketing Name:          AIE-ML                             
  Vendor Name:             AMD                                
  Feature:                 AGENT_DISPATCH                     
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        1(0x1)                             
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          64(0x40)                           
  Queue Type:              SINGLE                             
  Node:                    0                                  
  Device Type:             DSP                                
  Cache Info:              
    L2:                      2048(0x800) KB                     
    L3:                      32768(0x8000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          0(0x0)                             
  Max Clock Freq. (MHz):   0                                  
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            0                                  
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:0                                  
  Memory Properties:       
  Features:                AGENT_DISPATCH
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: KERNARG, COARSE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65536(0x10000) KB                  
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*** Done ***             

and

czanik@fedora:~$ rocm-clinfo 
Number of platforms:				 1
  Platform Profile:				 FULL_PROFILE
  Platform Version:				 OpenCL 2.1 AMD-APP (3649.0)
  Platform Name:				 AMD Accelerated Parallel Processing
  Platform Vendor:				 Advanced Micro Devices, Inc.
  Platform Extensions:				 cl_khr_icd cl_amd_event_callback 


  Platform Name:				 AMD Accelerated Parallel Processing
Number of devices:				 1
  Device Type:					 CL_DEVICE_TYPE_GPU
  Vendor ID:					 1002h
  Board name:					 Radeon 8060S Graphics
  Device Topology:				 PCI[ B#197, D#0, F#0 ]
  Max compute units:				 20
  Max work items dimensions:			 3
    Max work items[0]:				 1024
    Max work items[1]:				 1024
    Max work items[2]:				 1024
  Max work group size:				 256
  Preferred vector width char:			 4
  Preferred vector width short:			 2
  Preferred vector width int:			 1
  Preferred vector width long:			 1
  Preferred vector width float:			 1
  Preferred vector width double:		 1
  Native vector width char:			 4
  Native vector width short:			 2
  Native vector width int:			 1
  Native vector width long:			 1
  Native vector width float:			 1
  Native vector width double:			 1
  Max clock frequency:				 2900Mhz
  Address bits:					 64
  Max memory allocation:			 57070749280
  Image support:				 Yes
  Max number of images read arguments:		 128
  Max number of images write arguments:		 8
  Max image 2D width:				 16384
  Max image 2D height:				 16384
  Max image 3D width:				 16384
  Max image 3D height:				 16384
  Max image 3D depth:				 8192
  Max samplers within kernel:			 16
  Max size of kernel argument:			 1024
  Alignment (bits) of base address:		 2048
  Minimum alignment (bytes) for any datatype:	 128
  Single precision floating point capability
    Denorms:					 Yes
    Quiet NaNs:					 Yes
    Round to nearest even:			 Yes
    Round to zero:				 Yes
    Round to +ve and infinity:			 Yes
    IEEE754-2008 fused multiply-add:		 Yes
  Cache type:					 Read/Write
  Cache line size:				 128
  Cache size:					 32768
  Global memory size:				 67142057984
  Constant buffer size:				 57070749280
  Max number of constant args:			 8
  Local memory type:				 Local
  Local memory size:				 65536
  Max pipe arguments:				 16
  Max pipe active reservations:			 16
  Max pipe packet size:				 1236174432
  Max global variable size:			 57070749280
  Max global variable preferred total size:	 67142057984
  Max read/write image args:			 64
  Max on device events:				 1024
  Queue on device max size:			 8388608
  Max on device queues:				 1
  Queue on device preferred size:		 262144
  SVM capabilities:				 
    Coarse grain buffer:			 Yes
    Fine grain buffer:				 Yes
    Fine grain system:				 No
    Atomics:					 No
  Preferred platform atomic alignment:		 0
  Preferred global atomic alignment:		 0
  Preferred local atomic alignment:		 0
  Kernel Preferred work group size multiple:	 32
  Error correction support:			 0
  Unified memory for Host and Device:		 1
  Profiling timer resolution:			 1
  Device endianess:				 Little
  Available:					 Yes
  Compiler available:				 Yes
  Execution capabilities:				 
    Execute OpenCL kernels:			 Yes
    Execute native function:			 No
  Queue on Host properties:				 
    Out-of-Order:				 No
    Profiling :					 Yes
  Queue on Device properties:				 
    Out-of-Order:				 Yes
    Profiling :					 Yes
  Platform ID:					 0x7ffb97d11d80
  Name:						 gfx1151
  Vendor:					 Advanced Micro Devices, Inc.
  Device OpenCL C version:			 OpenCL C 2.0 
  Driver version:				 3649.0 (HSA1.1,LC)
  Profile:					 FULL_PROFILE
  Version:					 OpenCL 2.0 
  Extensions:					 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program 

Output from llama.cpp (llama-cli):

root@fedora:~# llama-cli   -m ~/models/llama-2-7b.Q4_K_M.gguf   --no-mmap   -ngl 99   -p "Explain quantum computing in simple terms:"   -n 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
build: 0 (unknown) with HIP version: 6.4.43484-9999 for x86_64-redhat-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device ROCm0 (Radeon 8060S Graphics) - 64031 MiB free
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /root/models/llama-2-7b.Q4_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V2
print_info: file type   = Q4_K - Medium
print_info: file size   = 3.80 GiB (4.84 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 4096
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 32
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 4096
print_info: n_embd_v_gqa     = 4096
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 11008
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: model type       = 7B
print_info: model params     = 6.74 B
print_info: general.name     = LLaMA v2
print_info: vocab type       = SPM
print_info: n_vocab          = 32000
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: LF token         = 13 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        ROCm0 model buffer size =  3820.94 MiB
load_tensors:          CPU model buffer size =    70.31 MiB
..................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context:  ROCm_Host  output buffer size =     0.12 MiB
llama_kv_cache_unified:      ROCm0 KV buffer size =  2048.00 MiB
llama_kv_cache_unified: size = 2048.00 MiB (  4096 cells,  32 layers,  1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context:      ROCm0 compute buffer size =   288.00 MiB
llama_context:  ROCm_Host compute buffer size =    16.01 MiB
llama_context: graph nodes  = 1158
llama_context: graph splits = 2
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 16

system_info: n_threads = 16 (n_threads_batch = 16) / 32 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | REPACK = 1 | 

sampler seed: 2232334333
sampler params: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
	top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist 
generate: n_ctx = 4096, n_batch = 2048, n_predict = 256, n_keep = 1

 Explain quantum computing in simple terms: what is it, how does it work, and what are its potential benefits?
This is a difficult question to answer because quantum computing is not yet a well-defined field of study, and many of the potential applications are still being researched. However, we can say that quantum computing is a type of computation that relies on the principles of quantum mechanics (the branch of physics that describes the behaviour of particles such as electrons and photons).
These particles obey a set of rules that are different from those obeyed by classical computers, which rely on the principles of classical mechanics. Quantum computing uses a particle’s quantum state (such as its spin) to store information. This means that quantum computers can perform computations that are not possible on classical computers.
In the simplest terms, quantum computing is a type of computation that takes advantage of the unique properties of quantum mechanics. These properties include superposition, entanglement, and non-locality. Superposition is the ability of a quantum system to exist in multiple states simultaneously.
This means that a quantum system can be in two different places at the same time, or have two different properties at the same time. Entanglement is the ability of two quantum systems to be inter

llama_perf_sampler_print:    sampling time =       4.27 ms /   265 runs   (    0.02 ms per token, 62075.43 tokens per second)
llama_perf_context_print:        load time =     631.46 ms
llama_perf_context_print: prompt eval time =      63.57 ms /     9 tokens (    7.06 ms per token,   141.57 tokens per second)
llama_perf_context_print:        eval time =    7110.09 ms /   255 runs   (   27.88 ms per token,    35.86 tokens per second)
llama_perf_context_print:       total time =    7184.25 ms /   264 tokens

Closing words

These are just my first steps. Most of the time I was not even fully aware of what I was doing; I simply reused some sample command lines and code. Still, these experiments were enough to show that AI works on Linux as well, not just on Windows.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this post through one of the contacts listed in the upper right corner. You can read the rest of the posts under the toy tag.