Element Matrix Services is performing scheduled maintenance on our matrix server (fedora.im).
Affected Services:
- chat.fedoraproject.org
- fedora.im
- matrix services
The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project’s social fabric.
This post covers reports received during the 2024 calendar year. The 2023 and 2024 annual report posts were published late due to changes in the membership of the Code of Conduct Committee and rebalancing of existing work. The purpose of publishing the reports now is to provide transparency, insight, and awareness into the health of the community.
The Fedora community continues to see a mix of hurdles in collaborations within the community, off-platform brand management, and a significant focus on moderator accountability.
2024 included reports about external social media posts made outside of our core community spaces. The Fedora Code of Conduct Committee (CoCC) was no longer just “putting out fires” of individual incidents; we actively set expectations for how contributors represent Fedora on the web and in its communities. To support this mission and bring fresh perspectives to our work, we expanded our committee by welcoming three new members: Jona Azizaj, David Cantrell, and Dorka Volavkova.
Overall, the 2024 data shows a significant decrease in new reports opened compared to previous years. Additionally, fewer warnings and moderations were issued than in previous years. The data matches the experience of the Code of Conduct Committee: the case load from new reports was finally beginning to decrease in volume. The incidents we received in 2024 were typically less intense and time-consuming than in prior years. This supports the Committee’s hypothesis that reports would decrease as more time passed since the global pandemic. The 2021 initiative to modernize the Fedora Code of Conduct for sustainability was a successful effort.
| Year | Reports Opened | Reports Closed | Warnings Issued | Moderations Issued | Suspensions Issued | Bans Issued |
|---|---|---|---|---|---|---|
| 2024 | 11 | 11 | 1 | 0 | 1 | 0 |
| 2023 | 17 | 17 | 5 | 3 | 1 | 1 |
| 2022 | 21 | 24 | 6 | 3 | 0 | 0 |
| 2021 | 23 | 24 | 2 | 1 | 0 | 1 |
| 2020 | 20 | 16 | 8 | 4 | 2 | 0 |
If you witness or are part of a situation that violates Fedora’s Code of Conduct, please open a private report on the Code of Conduct repo or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.
Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn’t okay, and you don’t want to make a big deal… open that report anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.
Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day-to-day in our community. Keep it up, and keep being awesome Fedora, we <3 you!
The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Matthew Miller; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community-nominated members. In 2024, the CoCC expanded its membership with three new members: Jona Azizaj, David Cantrell, and Dorka Volavkova.
The post Fedora Code of Conduct Report 2024 appeared first on Fedora Community Blog.
Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.
I kept thinking about the LWN article $ and the basic analysis I did yesterday. I kept coming back to one of the central themes of the mailing list conversation: false positives. Sashiko’s false positive rate is debated but, from what I gather, is pretty good by LLM standards. Still, there was a complaint about the number of false positives, focused on the burden that false positives put on contributors and maintainers.
I wanted to understand if the false positive rate, and by extension the burden, was higher from an LLM than from human reviewers. To run that experiment, I needed to define what a false positive actually is. That turns out to be the interesting part.
My initial naïve definition of a false positive was any substantial comment that doesn’t yield a code change. If you said something and the code wasn’t changed, then even if it generated future work, it wasn’t applicable to this change now. The obvious hole is a comment that raises a future code change coming in a different patch set. But it felt like this number could be directionally accurate for understanding if we get more false positives or not.
The deeper problem is that “comment that doesn’t change code” isn’t really what false positive means in review. The act of questioning code can lead to greater confidence in the patch being proposed. It can reveal unrelated changes that are required or surface features that should also be considered. Not a negative outcome, but potentially not relevant to the actual patch set under discussion. So I tried reframing from false positives to burden: any comment that doesn’t result in a code change and was actually read by the contributor or maintainer is burdensome. It doesn’t matter whether a human or LLM reviewer raised the comment. If it didn’t result in a change, it was work or thought they didn’t need to do. For example, a back-and-forth conversation to prove the correctness of something that was already correct.
But that definition fails too, and the reason it fails is the real insight.
If two humans are engaged in a review process and there’s a back-and-forth conversation that does not result in a code change, most likely neither human would describe this as unnecessary burden. They would probably describe it as work they had to do or effort they expended, but both humans have likely come out of that conversation changed. Greater understanding of different parts of the system. Better ability to express oneself so the questions aren’t raised next time. Increased confidence in the correctness of a solution. There is a change assumed to have happened to one or both of the people.
A review conversation that doesn’t change code but changes the people having it isn’t a false positive. It only looks like one when the reviewer is a machine that won’t be changed.
For what it’s worth, I did look at existing studies of human review false positive rates. In my brief and non-exhaustive look, I’ve come to believe they aren’t useful here, not only because the question is moot when both parties come out changed, but because many are flawed or non-comparable. Some are in domains where reviewers are generalists talking to a specialist, unlikely in the kernel. Others misclassify trivial exchanges like “LGTM” or “thanks” as false positives. And none have been conducted over the kernel.
When a finding or probing question is raised by an LLM agent, the assumption that both parties come out changed breaks down.
Probing questions may not even be welcome from an LLM agent. One could never really be sure whether this was a “humans normally say this kind of thing in this context” situation versus an “I see something that maybe is wrong” situation.
But the more important part is this: if a human has to read a false positive, they have to put in their side of the work to validate, verify, explore, or test the question, and ultimately determine that it’s not an issue. They are unlikely to be changed in the absence of an exchange. And we know for a fact that the machine is not going to be changed.
In theory, we could wire up a training loop for Sashiko to take these back-and-forth exchanges and learn from them to reduce the incidence of false positives. I suspect it would have very little impact overall. First, the analysis showed that there’s almost no situation where the same bug is being surfaced over and over again. The machine is unlikely to run into the same finding and then have learned that finding isn’t valid. Second, the machine is not arguing from a position of true reasoning, therefore it is never clear if it backed down because it decided to be an agreeable sycophant or because the additional commentary made the correctness argument airtight.
At its true core, I think the conversation around false positives, based on what I read in the article, is likely a social problem, like most truly intractable problems in computer science.
If an LLM agent reviews my contribution and the maintainer insists that I address the review, I am not only forced to do what turns out, in the case of a false positive, to be unnecessary work, but forced to performatively defend myself against a machine. Or worse, argue with the machine performatively. Unnecessary work that generates no value, performed while knowing it generates no value, topped with the additional work of demonstrating that I did it: that combination is a line too far for most of our psyches.
Setting aside the separate question of whether LLM ability will continue improving and therefore the number of false positives will go down, the core question of how to deal with false positives needs to be addressed at a social level.
In a space like the kernel, I would argue it may be appropriate to allow those whose code has been reviewed to react to LLM-generated findings with something along the lines of “smells like bullshit” and not have to go through the performative exercise of proving it’s bullshit, because we trust their instinct.
That said, it is probably worth creating some kind of long-term profile or scoreboard, both of those being the wrong words, for a contributor, so that they can over time understand if their intuition has blind spots. If an LLM is consistently raising a certain kind of feedback that they are dismissing, but we later discover a bug and have to fix it, or if human reviewers come back and their synthesis of their own experience plus what the LLM provided leads them to believe there’s a real, demonstrable problem, that’s a learning opportunity for the contributor.
The challenge is that there are no systems I’m aware of in modern use where these kinds of profiles are ever not used abusively against those profiled. Which is yet another social problem.
Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions. And, yes, I’m aware of the date. The data is real - and in 40 minutes it won’t be April 1 anymore, at least where I live.
Daroc Alden’s LWN article on Sashiko $ captures a real tension in the Linux kernel community. Andrew Morton wants to make Sashiko - an LLM-based patch reviewer - a mandatory part of the memory management workflow. Lorenzo Stoakes and others say it’s too noisy and adds burden to already-overworked maintainers. Morton points to a ~60% hit rate on actual bugs. Stoakes points out that’s per-review, not per-comment, so the individual false positive rate is worse.
Reading the thread, I kept wondering about two specific mechanisms that could be driving maintainer frustration beyond the false positive question.
Hypothesis 1: Reviewers are getting told about bugs they didn’t create. Sashiko’s review protocol explicitly instructs the LLM to read surrounding code, not just the diff. That’s good review practice - but it means the tool might flag pre-existing bugs in code the patch author merely touched, putting those problems in their inbox.
Hypothesis 2: The same pre-existing bugs surface repeatedly. If a known issue in a subsystem doesn’t get fixed between review runs, every patch touching nearby code could trigger the same finding. That would create a steady drip of duplicate noise across the mailing list.
I pulled data from Sashiko’s public API and tested both.
I fetched all 406 patchsets from the linux-mm mailing list and a 500-patchset sample from LKML as of April 1, 2026. Of the 252 linux-mm reviews with findings, 204 had full review text available for analysis.
I had an LLM write Python scripts to classify the 466 extracted findings into three categories using deterministic regex pattern matching - roughly 50 weighted patterns that look for specific language in the review text. The classification code runs the same way every time on the same input. An LLM wrote it, but the scanning itself involves no inferencing.
The three categories:
- Patch-specific: findings about the submitted change itself
- Interaction: findings about how the patch interacts with, or breaks against, existing code
- Pre-existing: findings about issues already present in the code the patch touches
When a finding matched multiple categories, the most specific won: pre-existing > interaction > patch-specific. About 7% of findings didn’t match any pattern and were excluded from further analysis.
For duplication, the scripts computed pairwise text similarity across reviews within the same subsystem. Again - deterministic comparison, LLM-authored code.
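A deterministic pairwise comparison of that shape might look like this sketch (stdlib difflib stands in for whatever similarity metric the scripts actually used, and the 0.8 threshold is my assumption):

```python
from difflib import SequenceMatcher
from itertools import combinations

def duplicate_pairs(reviews: dict[str, str], threshold: float = 0.8):
    """Return pairs of review IDs whose finding text exceeds the
    similarity threshold. SequenceMatcher.ratio() is deterministic
    for a fixed pair of inputs, so reruns give identical results."""
    pairs = []
    for (id_a, text_a), (id_b, text_b) in combinations(reviews.items(), 2):
        if SequenceMatcher(None, text_a, text_b).ratio() >= threshold:
            pairs.append((id_a, id_b))
    return pairs
```

The comparison is restricted to reviews within the same subsystem, which keeps the quadratic pairwise cost manageable.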
The full methodology, including the code used, a cached copy of the reviews, and the classification patterns and caveats, is in the analysis document in github.com/bexelbie/sashiko-analysis.
Hypothesis 2 is dead. Cross-review duplication was essentially zero. Across 16 LKML subsystems with 5+ reviewed patches each, only one pair of findings exceeded the similarity threshold - and that was the same author submitting similar patches, not the same bug recurring. Whatever is driving maintainer frustration, it’s not the same findings appearing over and over. While it is possible this would surface in a larger sample size, I personally find it unlikely.
Hypothesis 1 is partially supported, but the story is in the distribution. About 9% of findings explicitly discuss pre-existing issues. Averaged across all reviews, that’s roughly 12 words per review - barely noticeable.
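As a sanity check on that per-review figure, assuming the 466 extracted findings are spread across the 204 reviews with full text:

```python
findings = 466               # findings extracted from linux-mm reviews
reviews = 204                # reviews with full text available
pre_existing_share = 0.09    # fraction of findings that are pre-existing
avg_words = 62               # average length of a pre-existing finding

words_per_review = findings * pre_existing_share * avg_words / reviews
print(round(words_per_review, 1))  # prints 12.7
```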
But the average is misleading. The distribution is bimodal: 81% of reviews contain zero pre-existing findings. The other 19% contain pre-existing findings that constitute 28% of the review on average, adding roughly 19 lines to what the patch author reads. A few reviews are 75-82% pre-existing content.
Here’s the breakdown of what an average review with findings contains:
| Category | % of findings | Avg words |
|---|---|---|
| About the submitted patch | 72% | 74 |
| Patch × existing code interactions | 12% | 103 |
| Pre-existing issues | 9% | 62 |
| Unclassified | 8% | 47 |
The interaction findings (category 2) are worth calling out. They’re the longest - 103 words on average, 39% more than patch-specific findings - because explaining how new code breaks against existing behavior requires describing that behavior. These are arguably the hardest findings for a human reviewer to produce and exactly where a tool with codebase-wide context adds value.
The sharpest question the data raises isn’t statistical. It’s social.
When you submit a patch to linux-mm and get a Sashiko review, there’s roughly a 1-in-5 chance that a meaningful chunk of that review describes a bug you didn’t write - a race, a leak, a use-after-free in the code you’re modifying. Some of these are trivial (typos in nearby comments). Some are substantive.
Either way, the review has put it in your inbox. You are now the person who has been told about it.
Morton’s position - “don’t add bugs” as Rule #1 - makes sense if the tool’s output is mostly about your patch. And it is: ~85% of findings concern either the submitted change or its direct interactions with existing code. But 1 in 5 reviewees is also getting handed someone else’s problem, with an implicit expectation to respond.
Stoakes’s concern about maintainer burden lands differently when you see the bimodal distribution. The average review is manageable. The tail is not.
This analysis classifies scope - whether a finding is about the submitted patch, its interactions, or pre-existing code. It does not measure correctness. The core Morton/Stoakes disagreement is about false positive rates within on-topic findings - how often Sashiko flags something in your patch that turns out to be wrong. That question requires domain expertise to evaluate each finding individually, and this data doesn’t go there.
The classification also has limits. The regex patterns achieve ~93% coverage but aren’t semantic - borderline cases between categories get decided by pattern specificity, not understanding. The proportions are directionally sound but not precise.
The full data, methodology, and API references are in the repository, github.com/bexelbie/sashiko-analysis if anyone wants to reproduce or extend this.
Almost 15 years ago, Balabit ran a campaign stating that syslog-ng could process 650k messages a second. Now I am happy to present 7 million EPS (events per second). Timing the announcement for April 1 is not a coincidence :-)
While the 650k EPS measurement was true, it was misleading. The value was measured right after syslog-ng 3.2 introduced multi-threading, in a lab environment, under optimal circumstances, using synthetic log messages. However, there was no fine print explaining this, just the statement that syslog-ng could process 650k EPS. The claim was eventually fixed, but it took years to recover from the effects of this marketing campaign, and engineers ten years later still had a nervous breakdown when someone mentioned “650k”. Why? Because from that moment on, everyone expected syslog-ng to collect logs at that message rate in a production environment with complex configurations. Which was, of course, not the case.
Fast-forward to today: I’m happy to share that, in my benchmarks, syslog-ng now reaches 7 million events per second.
Is this measurement value valid? Yes.
Does it apply to real world? No.
Does it sound good? Definitely :-)

I love playing with various non-x86 systems. I have various ARM, POWER, and MIPS systems at home, and I sometimes access other architectures, like RISC-V, remotely. And, of course, not just different architectures, but different operating systems: various Linux distributions, macOS, FreeBSD, and sometimes other BSD variants. I’m a server guy, and for the past 15+ years, a syslog-ng guy. Sometimes I had access to an exotic system on the other side of the world for less than an hour, but I almost always tested syslog-ng on it.
For many years I had a bunch of shell scripts and configs to benchmark syslog-ng performance. Not for real world production loads, but rather for comparing architectures and operating systems. I needed a script which could do measurements with minimal dependencies and do it quickly, in one go. This is how sngbench was born, based on my previous ugly scripts. It has quite a few advantages and shortcomings:
- Minimal dependencies: bash and syslog-ng
- No complex setup: everything runs on the same host
  - network bandwidth is not a limiting factor
  - loggen and syslog-ng processes are competing for resources
- Two bundled configurations: a performance-tuned one, and the default syslog-ng.conf from openSUSE with minimal modifications to add a TCP source
- By default, very short (20 second) measurements, so disk I/O is not a limiting factor
- Many different test scenarios: from a single TCP connection to 4 * 128
Of course this describes just the “factory defaults”. You can easily change the test scenarios and configurations too.
I was testing syslog-ng code which was not yet even merged to the development branch. First, I tested these patches with various settings. Along the way I remembered that Splunk guidelines mention so-rcvbuf tuning also for TCP connections. Previously I only used that for optimizing UDP performance. Now I have done it for TCP. Wonders happened :-)
But, of course, the main question is: can you achieve this performance in production? TL;DR: No.
My tests are run from localhost. Network bandwidth is not an issue. Tests are run in short bursts. This is peak performance; when it comes to writing logs to files or forwarding to a cluster of Splunk or Elasticsearch endpoints around the clock, that would be slower. Also, in my fastest test case, logs came from four different loggen instances, over 32 TCP connections each, at a constant rate. In the real world, logs come in bursts and connections are opened and closed regularly.
I used my AI mini workstation with Fedora Linux 44 Beta. First, I took a baseline with the stock syslog-ng 4.11.0 included in the distribution. Then I used my syslog-ng git snapshot packages for Fedora from https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng-githead/. Initially it also had jemalloc support compiled in. Later I disabled it and focused purely on the yet-to-be-merged parallelize() optimizations from GitHub. I experimented with enabling and disabling parallelize(), adding various batch_size() values, and finally also so-rcvbuf().
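For reference, the kind of configuration involved looks roughly like this in syslog-ng syntax (the port, buffer size, partition count, and file path are illustrative, not the values from my runs):

```
source s_tcp {
  network(
    transport("tcp")
    port(514)
    so-rcvbuf(16777216)   # enlarged kernel receive buffer; illustrative value
  );
};

destination d_file {
  file("/var/log/bench.log");
};

log {
  source(s_tcp);
  parallelize(partitions(4));  # the not-yet-merged optimization under test
  destination(d_file);
};
```

Note that so-rcvbuf() values above net.core.rmem_max only take full effect if you also raise that sysctl.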

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.
About seven years ago, a ticket was filed noting aarch64 systems were shipping with Secure Boot enabled, and that Fedora should start signing its boot path to support these devices out of the box.
I’m pleased to say that today’s Fedora Rawhide images - what will become Fedora 45 - finally do this, thanks to the work of a whole bunch of people.
This means you can grab the latest Rawhide images and boot them on your favorite aarch64 laptop
without turning off Secure Boot, or launch VMs in any of the major clouds with Secure Boot on. For
example, I’m able to start a VM in Azure with the TrustedLaunch security type:
❯ az group create --name "jcline-aarch64-secureboot" --location "eastus2"
❯ az vm create --location eastus2 --name fedora \
--resource-group jcline-aarch64-secureboot \
--image /CommunityGalleries/Fedora-5e266ba4-2250-406d-adad-5d73860d958f/Images/Fedora-Cloud-Rawhide-Arm64/Versions/latest \
--security-type TrustedLaunch \
--size Standard_D2plds_v6 \
--accept-term \
--ssh-key-values @/home/jcline/.ssh/id_ed25519.pub
❯ ssh jcline@20.12.69.183
[jcline@fedora ~]$ mokutil --sb-state
SecureBoot enabled
The way Fedora used to sign UEFI applications for Secure Boot was delightfully simple (for some
value of simple). The keys were in a smart card, plugged into a special build host, and anything
that needed a signature was routed to be built on that host. pesign, one of the common utilities
to sign PE applications, has a mode where it can run as a daemon and sign anything provided to it
over a Unix socket. That Unix socket is threaded into the build environment, where builds can access
it to sign PE applications with pesign-client.
Unfortunately, that host was x86_64 so when aarch64 started shipping with Secure Boot enabled, an alternative approach was needed.
Ultimately we moved the smart card to the signing server we use for RPMs and other things. The tricky bit about the whole process is that Fedora signs each bit of the boot chain during the build. Each time any of the UEFI applications in the boot chain is built it needs to be signed. One way to do this is to build the application in Fedora’s infrastructure, and then have a second build which uses the output of the first build along with a signature as input to construct a signed final version. However, this means you’ve got two specfiles which you have to keep in sync, and there’s probably other painful aspects I’ve not considered. In any case, that’s not what Fedora does.
Instead, Fedora signs the UEFI applications during the build. Since we want the signing key to be
stored in a remote server, this implies some sort of networking, but builds aren’t permitted network
access. Nor can the build environment provide the necessary secrets to authenticate with the signing
service. In order to handle this, I wrote a small service that pretends to be the pesign Unix
socket, and that can be exposed to the build environment in the same way. However, it just shovels
anything it gets to the signing server and returns whatever the signing server does.
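The actual service is more involved, but its shape - a local Unix socket that relays whatever it receives to a remote endpoint and relays the response back - can be sketched like this (names and framing are mine; the real service also handles authentication with the signing server):

```python
import socket
import socketserver

class SigningProxyHandler(socketserver.StreamRequestHandler):
    """Relays one signing request from a local Unix socket to a remote
    server and shovels the response back unchanged."""
    def handle(self):
        remote_addr = self.server.remote_addr  # set on the server instance
        with socket.create_connection(remote_addr) as remote:
            data = self.rfile.read()          # read until the client closes
            remote.sendall(data)
            remote.shutdown(socket.SHUT_WR)   # signal end of request
            while True:
                chunk = remote.recv(4096)
                if not chunk:
                    break
                self.wfile.write(chunk)       # relay the result back

def serve_proxy(unix_path, remote_addr):
    server = socketserver.UnixStreamServer(unix_path, SigningProxyHandler)
    server.remote_addr = remote_addr
    return server
```

From the build environment's point of view, connecting to `unix_path` looks exactly like talking to a local pesign daemon socket.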
That service got deployed last week, and after a little bit of debugging it even worked. In fact, everything was signed for aarch64 last week except for the fallback UEFI application, which adds a boot entry for Fedora on first boot if one isn’t already there. Without that, booting new images would fail unless you explicitly added the correct Fedora boot entry manually. Yesterday, shim got rebuilt and everything works.
It’s possible this will eventually work in Fedora 44 Cloud images. Shim in Fedora 44 hasn’t (yet) been rebuilt and we’re in the final freeze for Fedora 44, so unfortunately we just missed it, but if it does get rebuilt later, Cloud images will be updated and will start working.
For Fedora 43 and older, the version of shim shipped doesn’t include the version signed for aarch64. I’m not sure it’s worth the risk to update it, as much as I’d like it to work there, as well.
Anyway, Fedora 45 will be upon us before you know it, and after seven years, six more months isn’t so bad, right?
Thoughts, comments, or feedback greatly welcomed on Mastodon
In this article you will learn how TLS (Transport Layer Security) and SSH (Secure SHell) use public/private key pairs to authenticate the web servers you visit and the Linux machines you log in to. You will also learn how the TLS framework installed by default in mainstream web browsers fails in critical ways to prevent MITM (Man In The Middle) attacks. Then we will walk through setting up a private .FEDORA TLD (Top Level Domain), setting up your own private CA with the smallstep package, and using the acme-tiny package to issue certificates for a website under that private TLD.
I will not cover setting up a simple “Hello World” website using your favorite web server packaged with Fedora. This needs to be up and running on HTTP to follow along. For this article, the website will be named hello.fedora.
Sadly, we will also explain how this does not completely solve the MITM problem – but this is already a big article. Click here to skip the background and motivation and go directly to the HowTo.
While NSA director Admiral Bobby Inman revealed that intel agencies had been aware of two-key, or public-key, cryptography since the 1960s, the first unclassified paper was published by Whitfield Diffie and Martin E. Hellman in 1976. In college, I remember playing with cryptosystems based on the knapsack problem; these had various vulnerabilities. What revolutionized the field was the publication of the RSA algorithm in 1977. I vividly remember where I sat in the college library when I read the paper. There was some controversy over “you can’t patent algorithms”. However, RSA patented their implementation (which is already protected by copyright – but that is another discussion). Yes, you can whip up a 1-line Perl implementation in a few minutes (we all did) – but a secure implementation that does not leak the private key through various side channels is NOT trivial.
The original concept of public keys was to look up a recipient’s pubkey in a directory, and use it to encrypt a message that only the possessor of the corresponding private key can decrypt. This can also be used to authenticate a correspondent via a protocol that proves they hold the corresponding private key. The basic idea is to encrypt a random token with a pubkey, the recipient decrypts the token and sends it back encrypted by your pubkey. The details are not trivial. The primary concern is MITM attacks. SSH and TLS support several widely accepted algorithms for authentication and key exchange.
If you think about it, that “directory” is all important. Suppose you have a “secure” phone app (without naming names) that uses a public directory to map telephone number to pubkey. Whoever runs that directory can return their own pubkey (likely a different one for each telephone number), decrypt the data, and send it on, re-encrypted to the real pubkey of the intended recipient (and the same for the other direction). I.e. – the classic MITM attack. This is why such secure applications usually provide a way to verify you have the real pubkey via an in-person meeting or alternate medium.
So how do you know the real pubkey for a secure (https) website? Websites provide a “certificate” saying “this pubkey is for these domain names” (plus other information we are not concerned with here). Well, anyone can create such a certificate – in fact we will do so in this article – so how do you know it is truthful? The certificate is “signed” by a Certificate Authority (CA). Pubkeys can be used to sign data. For RSA, the basic concept is to compute a secure “hash” (e.g. SHA256) of the certificate data and “decrypt” it using the private key of the CA. The signature can be verified by using the pubkey of the CA to “encrypt” the result, which should match the hash of the signed data. RSA is nice in that decryption and encryption are symmetrical – verifying a signature is the same operation as encrypting data to the owner of the corresponding private key. So now, instead of every web user maintaining a private database of pubkeys for domain names, the browser has a list of trusted CAs which sign website certificates after verifying them in some way. In case a private key is compromised, CAs publish a Revocation List (which regular people rarely use), and TLS certificates always have an expiration date.
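To make the “verifying is the same operation as encrypting” point concrete, here is a toy RSA sketch with textbook-sized numbers (never use parameters this small for anything real):

```python
# Toy RSA with textbook primes (p=61, q=53), purely to illustrate that
# signing "decrypts" a hash with the private exponent and verifying
# "encrypts" the signature with the public exponent.
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent
d = 2753             # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def sign(digest: int) -> int:
    # "Decrypt" the digest with the private key.
    return pow(digest, d, n)

def verify(digest: int, signature: int) -> bool:
    # "Encrypt" the signature with the public key; it should give the digest back.
    return pow(signature, e, n) == digest

digest = 1234        # stand-in for a SHA256 hash reduced mod n
signature = sign(digest)
print(verify(digest, signature))       # prints True
print(verify(digest + 1, signature))   # prints False
```

Real implementations add padding schemes and constant-time arithmetic on top of this; as noted above, that is exactly the non-trivial part.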
Note that CAs can certify data other than domain names, like the name of a company or individual. Commercial CAs generally charge a premium for this, but there are also non-profit CAs like cacert.org that certify personal details via in-person meetings.
Regular Joes (“normies”) do not keep track of all this, so where does that “list of trusted CAs” come from? Well, there is a CA and Browser forum with representatives from popular browser software makers and commercial CAs. They maintain a list of trusted CAs, and changes are voted on in public meetings with minutes published on their web page. Fedora installs this list in /usr/share/pki. Browsers may have their own copy. Users may add additional trusted CAs to /usr/share/pki or /etc/pki/ca-trust and browsers may have their own way of adding additional trusted CAs.
This all sounds well and good, BUT. The critical flaw could be called serial reliability. The trusted CAs are trusted for any domain. So any trusted CA (including any you add) can forge a certificate for any website. DNS vulnerabilities (cache poisoning and such) are beyond the scope of this article. But we will set up a private CA which you could use to forge any website cert and fool anyone you convince to trust your CA (and can hack their DNS and/or IP routing). The cabforum is very careful about their list. As part of hostilities, forum CAs stopped certifying .RU domains (ISO TLD for Russia). Russia promptly put up their own national CAs, which anyone can add to their browser trust store. Normies were warned NOT to do this, as the Russian CAs could then forge certs for any domain. But a moment’s thought reveals that ANY cabforum CA could go “rogue” and do the same thing. It only takes one.
There are solutions to this blanket trust problem, but that will require another article.
For illustration, we will create the .FEDORA TLD. Everyone following along will create a different instance of that TLD, and hostnames under .FEDORA will resolve to different IPs (or NXDOMAIN) depending on whose DNS server you point that TLD at. This was the motivation for creating ICANN – a worldwide centralized DNS root (list of official TLDs). This provides a consistent namespace at the expense of absolute power (to cancel domains and TLDs) invested in ICANN. Before ICANN, admins all maintained their own DNS root, and periodically updated (manually or automatically) nameservers for well known TLDs like .COM etc. ISO defined an official list of TLDs, including country code TLDs (like .US). That worked well. The problem came with more obscure TLDs like .FREE. Companies trying to be “cool” were upset that not all customers got the same IPs for .FREE hostnames. Also admins liked having “someone else” maintain the DNS root. Hence, ICANN. There is also Opennic which likewise has “someone else” (volunteers) maintain a root zone, with fallback to ICANN, and has its own “forum” (existing TLDs vote) to approve new TLDs.
Here is a bind zonefile for .FEDORA:
$TTL 2H
; hello.fedora
@ IN SOA ns1 hostadmin.hello.fedora. (
2025122600 ; serial
1H ; refresh
15M ; retry
14D ; expire
6H ; default_ttl
)
@ IN NS ns1.fedora.
@ IN TXT "v=spf1 -all"
hello IN A 192.168.100.31
ns1 IN A 192.168.100.31
ca IN A 192.168.100.31
But that was a bait and switch. Setting up DNS for a private TLD is its own article. If you know how to add such a zone to your self-hosted DNS, then do so. For the rest of us, we’ll use an even older hostname/IP map that predates DNS: as root, edit the file /etc/hosts on the system you will run step-ca on and append these lines:
# smallstep article
192.168.100.31 hello.fedora
192.168.100.31 ca.fedora
Replace 192.168.100.31 with the IP of the system you are trying all this out on. Step-ca must be able to look up the hello.fedora hostname it is certifying in order to run the ACME protocol. We will use the /.well-known/acme-challenge method, which does not require real DNS. The system you run acme-tiny on also needs to look up ca.fedora.
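If you would rather serve the .FEDORA zone from a self-hosted BIND instead of /etc/hosts, loading the zonefile above is a short stanza in named.conf. This is only a sketch; the file paths are assumptions that vary by distribution:

```
// fragment for /etc/named.conf (assumed path)
zone "fedora" IN {
    type master;
    file "/var/named/fedora.zone";  // the zonefile shown above
    allow-update { none; };
};
```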
If the smallstep package is still under review when you read this, you’ll need to enable the copr repo (otherwise skip this step):
sudo dnf copr enable @fedora-review/fedora-review-2418762-smallstep
First, we need to create our root CA. In production, this should be done on a separate, offline machine. For small operations, the secondary CAs can be automated, and you sign the certificates for those secondaries manually with the root CA. I would keep the root CA password on paper – paper can’t be hacked (but watch out for cameras). Do NOT skip the password for the root CA. Some number of systems will trust that CA for any domain. If the private key leaks, you end up with a situation like the one Dell faced in 2015 with eDellRoot.
Let’s put the manual root CA in /etc/pki/CA and generate the root cert. Openssl will ask you for a key password, and for what x509 calls “subject identifiers”. I left the state and email blank, and set city to Fedora City, organization to Fedora Project, organizational unit to ca, and common name to ca.example.org. The “-days 3652” sets the expiration to 10 years from now (including leap days). The second command shows the “Issuer” information end users will see when they inspect the certificate in an app like Firefox. The common name should normally be the hostname of the root CA, but it doesn’t really matter when the root CA is offline – and example.org is coincidentally offline by convention.
$ sudo mkdir /etc/pki/CA
$ cd /etc/pki/CA
$ sudo install --mode=644 /dev/stdin root_ca.fedora.ext <<EOF
subjectAltName=DNS:ca.example.org
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:1
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
EOF
$ sudo mkdir -m 0700 private
$ sudo openssl req -new -keyout private/root_ca.key -out root_ca.csr
...
$ sudo openssl x509 -req -in root_ca.csr -key private/root_ca.key -out root_ca.crt -days 3652 -sha256 -extfile root_ca.fedora.ext
Enter pass phrase for private/root_ca.key:
Certificate request self-signature ok
subject=C=US, L=Fedora City, O=Fedora Project, OU=ca, CN=ca.example.org
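Before moving on, it may be worth sanity-checking the root cert you just produced. These are standard openssl inspection flags, using the path from above:

```shell
# Show the issuer, subject, and validity window of the new root certificate
sudo openssl x509 -in /etc/pki/CA/root_ca.crt -noout -issuer -subject -dates
```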
Then install the smallstep package with step-ca binary and supporting files:
$ sudo dnf install smallstep
The package installs a skeleton config for a step-ca service in /var/lib/step-ca. Let’s flesh out the config as the step-ca user and generate an intermediate cert request (“csr”).
$ cd /var/lib/step-ca
$ sudo -u step-ca bash -l
$ ls
certs config db secrets templates
$ cp /etc/pki/CA/root_ca.crt certs
$ openssl req -new -keyout secrets/intermediate_ca.key -out intermediate_ca.csr
...
$ nano config/ca.json
$ exit
Again, openssl will ask for subject identifiers. I used the same values as for the root CA, but with the common name ca.fedora. Use your favorite text editor; “nano” is beginner friendly. Change MYCABAL to FEDORA and ca.mycabal.org to ca.fedora. If you provided a password for intermediate_ca.key, put it in the “password” field of ca.json. Do not set the password in ca.json to the empty string: that makes step-ca try to prompt for it at startup, which is not allowed under systemd and fails with an error opening /dev/tty. For the intermediate cert, the common name is important. Smallstep will auto-generate a host cert for “ca.fedora” (it is, after all, a certificate authority), and it must match the hostname ACME clients use to request certs. Now we need to sign the intermediate cert with the root CA. 1825 days is 5 years; intermediate certs should be shorter lived than the root CA, but not too short if you are manually re-signing them.
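For orientation, the fields you end up touching in config/ca.json look roughly like this. This is a trimmed sketch, not the full skeleton the package ships; field names follow step-ca’s config format, and the values shown are the ones used in this article:

```json
{
    "root": "/var/lib/step-ca/certs/root_ca.crt",
    "crt": "/var/lib/step-ca/certs/intermediate_ca.crt",
    "key": "/var/lib/step-ca/secrets/intermediate_ca.key",
    "password": "intermediate-key-passphrase-if-any",
    "address": ":9000",
    "dnsNames": ["ca.fedora"],
    "authority": {
        "provisioners": [
            { "type": "ACME", "name": "FEDORA" }
        ]
    }
}
```

The provisioner name is what appears in the ACME URL later (…/acme/FEDORA).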
$ cd /etc/pki/CA
$ sudo install --mode=644 /dev/stdin ca.fedora.ext << EOF
subjectAltName=DNS:ca.fedora
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
EOF
$ sudo openssl x509 -req -in /var/lib/step-ca/intermediate_ca.csr -CA root_ca.crt -CAkey private/root_ca.key -CAcreateserial -out intermediate_ca.crt -days 1825 -sha256 -extfile ca.fedora.ext
$ sudo -u step-ca cp intermediate_ca.crt /var/lib/step-ca/certs
$ sudo systemctl start step-ca
$ sudo systemctl status step-ca
...
Mar 31 15:18:56 test.gathman.org step-ca[2814912]: 2026/03/31 15:18:56 Serving HTTPS on :9000 ...
Running a web server was a prerequisite. I’ll use apache as an example, and hopefully users of nginx and others can translate. First, /etc/httpd/conf.d/hello.conf:
<VirtualHost *:80>
ServerName hello.fedora
DocumentRoot "/var/www/html/hello"
#RedirectMatch ^((?!\/\.well-known\/).*)$ https://hello.fedora$1
<Location "/.well-known/acme-challenge/">
Options -Indexes
Require all granted
</Location>
<Location "/">
Options FollowSymLinks Indexes
Require all granted
</Location>
</VirtualHost>
The redirect is commented out until we have a signed cert. Assuming httpd is already running, use sudo apachectl graceful to load the changes. Then create a simple document in /var/www/html/hello/index.html:
<html>
<head>
<title> Hello Fedora </title>
</head>
<body>
<h1> Hello Fedora! </h1>
</body>
</html>
Acme-tiny needs to trust the root CA to use the ACME service. The step-ca service provides a handy API to fetch the root ca:
$ cd /etc/pki/ca-trust/source/anchors
$ sudo curl https://ca.fedora:9000/roots.pem -o fedora_ca.crt
curl: (60) SSL certificate problem: unable to get local issuer certificate
Ooops! Catch 22. You need the root CA to use the handy API that gets the root CA. So we’ll have to tell curl to accept the strange root cert. (Or use rsync, cp on the same machine, copy/paste between terminal windows, or other more secure method.)
$ sudo curl -k https://ca.fedora:9000/roots.pem -o fedora_ca.crt
$ sudo update-ca-trust extract
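If you want a little more assurance than -k, you can at least compare the certificate fingerprint out of band. This is a plain openssl invocation; run it against the fetched copy here and against /etc/pki/CA/root_ca.crt on the CA machine, and check that the values match:

```shell
# Fingerprint of the fetched root; must match the fingerprint of the
# original root_ca.crt computed on the CA machine
openssl x509 -in fedora_ca.crt -noout -fingerprint -sha256
```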
Now, we are ready to run acme-tiny. Once again, openssl req will prompt for subject identifiers. The only one browsers care about is Common Name, which should be “hello.fedora”. However, users may care about the other fields when they use browser features to inspect certs.
$ sudo dnf install acme-tiny
$ sudo apachectl graceful
$ cd /var/lib/acme
$ sudo -u acme bash -l
$ ls
certs csr private
$ /usr/libexec/acme-tiny/sign # NOTE: generates account.key if needed
$ ls private
account.key
$ openssl req -new -passout pass:'' -keyout private/hello.key -out csr/hello.csr
$ /usr/sbin/acme_tiny --account-key private/account.key --csr csr/hello.csr --acme-dir /var/www/challenges/ --ca https://ca.fedora:9000/acme/FEDORA >certs/hello.crt
$ exit
$ sudo nano /etc/httpd/conf.d/hello.conf
Now uncomment the RedirectMatch and append the SSL virtual host definition below to hello.conf. Use apachectl graceful to load the changes.
<VirtualHost *:443>
ServerName hello.fedora:443
SSLEngine on
SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite HIGH:3DES:!aNULL:!MD5:!SEED:!IDEA
DocumentRoot "/var/www/html/hello"
SSLCertificateFile /var/lib/acme/certs/hello.crt
SSLCACertificateFile /var/lib/acme/certs/hello.crt
SSLCertificateKeyFile /var/lib/acme/private/hello.key
CustomLog logs/ssl_request_log \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
<Location "/">
Options FollowSymLinks Indexes
</Location>
</VirtualHost>
The current acme-tiny package auto-renews certs only for the letsencrypt.org CA. That should be extended soon. Meanwhile, feel free to add something hacky. (I’ll try to have it look up TLDs in /etc/sysconfig or something to get a custom CA URL.)
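One hacky option, until then, is a monthly cron job that simply re-runs the signing command from earlier and reloads Apache. This is a sketch using the paths from this article, not the packaged renewal mechanism:

```
#!/bin/sh
# /etc/cron.monthly/renew-hello-fedora (sketch; paths from this article)
sudo -u acme /usr/sbin/acme_tiny \
    --account-key /var/lib/acme/private/account.key \
    --csr /var/lib/acme/csr/hello.csr \
    --acme-dir /var/www/challenges/ \
    --ca https://ca.fedora:9000/acme/FEDORA > /var/lib/acme/certs/hello.crt \
  && apachectl graceful
```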
On the machine with your web browser, you need 2 things: the new root CA and some way to lookup names in the .FEDORA TLD, either by pointing DNS to the server you set up with the private zone, or by appending the lines to /etc/hosts for ca.fedora and hello.fedora.
Now the curl should work without -k. And your browser should display https://hello.fedora, although you might have to restart it. If it doesn’t read the Fedora ca-trust store on startup, you may need to find a menu option to import the CA into the browser.
$ curl https://hello.fedora
<html>
<head>
<title> Hello Fedora </title>
</head>
<body>
<h1> Hello Fedora! </h1>
</body>
</html>
Now that your root CA is up and running, don’t lose sight of what can be done by having it go rogue. Get lots of people to install it so they can access your cool new TLDs. Then start forging certs for arbitrary web sites, and conquer the world!! Bwa! ha! ha! (A future article can address PKCS#11 and restricting how you trust CAs in browsers and other software.)
Because I am bad at giving up on things, I’ve been running my own email server for over 20 years. Some of that time it’s been a PC at the end of a DSL line, some of that time it’s been a Mac Mini in a data centre, and some of that time it’s been a hosted VM. Last year I decided to bring it in house, and since then I’ve been gradually consolidating as much of the rest of my online presence as possible on it. I mentioned this on Mastodon and a couple of people asked for more details, so here we are.
First: my ISP doesn’t guarantee a static IPv4 unless I’m on a business plan and that seems like it’d cost a bunch more, so I’m doing what I described here: running a Wireguard link between a box that sits in a cupboard in my living room and the smallest OVH instance I can, with an additional IP address allocated to the VM and NATted over the VPN link. The practical outcome of this is that my home IP address is irrelevant and can change as much as it wants - my DNS points at the OVH IP, and traffic to that all ends up hitting my server.
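For the curious, the shape of that setup is roughly the following WireGuard config on the VM side. Every address, key placeholder, and rule here is an invented assumption for illustration; the NAT rules are one possible iptables approach, not necessarily the one in use:

```ini
# /etc/wireguard/wg0.conf on the OVH VM (sketch; all values invented)
[Interface]
Address = 10.9.0.1/24
ListenPort = 51820
PrivateKey = <vm-private-key>

[Peer]
# the box in the living-room cupboard
PublicKey = <home-box-public-key>
AllowedIPs = 10.9.0.2/32

# NAT the additional public IP over the tunnel (assumed iptables approach):
#   iptables -t nat -A PREROUTING  -d <extra-public-ip> -j DNAT --to-destination 10.9.0.2
#   iptables -t nat -A POSTROUTING -s 10.9.0.2 -j SNAT --to-source <extra-public-ip>
```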
The server itself is pretty uninteresting. It’s a refurbished HP EliteDesk which idles at 10W or so, along with 2TB of NVMe and 32GB of RAM that I found under a pile of laptops in my office. We’re not talking rackmount Xeon levels of performance, but it’s entirely adequate for everything I’m doing here.
So. Let’s talk about the services I’m hosting.
This one’s trivial. I’m not really hosting much of a website right now, but what there is is served via Apache with a Let’s Encrypt certificate. Nothing interesting at all here, other than the proxying that’s going to be relevant later.
Inbound email is easy enough. I’m running Postfix with a pretty stock configuration, and my MX records point at me. The same Let’s Encrypt certificate is there for TLS delivery. I’m using Dovecot as an IMAP server (again with the same cert). You can find plenty of guides on setting this up.
Outbound email? That’s harder. I’m on a residential IP address, so if I send email directly nobody’s going to deliver it. Going via my OVH address isn’t going to be a lot better. I have a Google Workspace, so in the end I just made use of Google’s SMTP relay service. There are various commercial alternatives available; I just chose this one because it didn’t cost me anything more than I’m already paying.
My blog is largely static content generated by Hugo. Comments are Remark42 running in a Docker container. If you don’t want to handle even that level of dynamic content you can use a third party comment provider like Disqus.
I’m deploying Mastodon pretty much along the lines of the upstream compose file. Apache is proxying /api/v1/streaming to the websocket provided by the streaming container and / to the actual Mastodon service. The only thing I tripped over for a while was the need to set the “X-Forwarded-Proto” header since otherwise you get stuck in a redirect loop of Mastodon receiving a request over http (because TLS termination is being done by the Apache proxy) and redirecting to https, except that’s where we just came from.
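A sketch of the relevant Apache directives (ports 3000 and 4000 are the defaults from the upstream compose file; mod_proxy, mod_proxy_wstunnel, and mod_headers are assumed to be loaded):

```apacheconf
# Inside the VirtualHost doing TLS termination (sketch)
RequestHeader set X-Forwarded-Proto "https"   # avoids the http->https redirect loop
ProxyPass /api/v1/streaming ws://127.0.0.1:4000/api/v1/streaming
ProxyPassReverse /api/v1/streaming ws://127.0.0.1:4000/api/v1/streaming
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
```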
Mastodon is easily the heaviest part of all of this, using around 5GB of RAM and 60GB of disk for an instance with 3 users. This is more a point of principle than an especially good idea.
I’m arguably cheating here. Bluesky’s federation model is quite different to Mastodon - while running a Mastodon service implies running the webview and other infrastructure associated with it, Bluesky has split that into multiple parts. User data is stored on Personal Data Servers, then aggregated from those by Relays, and then displayed on Appviews. Third parties can run any of these, but a user’s actual posts are stored on a PDS. There are various reasons to run the others, for instance to implement alternative moderation policies, but if all you want is to ensure that you have control over your data, running a PDS is sufficient. I followed these instructions, other than using Apache as the frontend proxy rather than nginx, and it’s all been working fine since then. In terms of ensuring that my data remains under my control, it’s sufficient.
I’m using borgmatic, backing up to a local Synology NAS and also to my parents’ home (where I have another HP EliteDesk set up with an equivalent OVH IPv4 fronting setup). At some point I’ll check that I’m actually able to restore them.
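A borgmatic configuration for such a two-destination setup might look roughly like this (hosts, paths, and retention values are invented for illustration, using borgmatic’s current flat config format):

```yaml
# /etc/borgmatic/config.yaml (sketch; hosts and paths are invented)
source_directories:
    - /etc
    - /home
    - /var/lib
repositories:
    - path: ssh://borg@nas.local/volume1/backups.borg
      label: local-nas
    - path: ssh://borg@parents.example.org/backups.borg
      label: offsite
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```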
Most of what I post is now stored on a system that’s happily living under a TV, but is available to the rest of the world just as visibly as if I used a hosted provider. Is this necessary? No. Does it improve my life? In no practical way. Does it generate additional complexity? Absolutely. Should you do it? Oh good heavens no. But you can, and once it’s working it largely just keeps working, and there’s a certain sense of comfort in knowing that my online presence is carefully contained in a small box making a gentle whirring noise.
The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project’s social fabric.
This post covers reports received in the 2023 calendar year. The 2023 and 2024 annual report posts are published with delays due to changes in membership in the Code of Conduct Committee and rebalancing existing work. The purpose of publishing the reports now is to provide transparency, insight, and awareness into the health signs of the community.
Reflecting on the 17 reports opened in 2023, the Fedora community saw a shift in the incident landscape compared to 2022. While the total number of reports decreased by approximately 19% (17 in 2023 vs. 21 in 2022), the severity of actions taken suggests a year focused on addressing persistent friction and high-impact behavioral issues.
The most notable trend in 2023 was the departure from the “zero-ban” status of 2022. The Committee moved toward more decisive actions, including a permanent account closure for a slur and a suspension for aggressive ban evasion, indicating a lower tolerance for behavior that directly threatens the safety and inclusivity of the community.
| Year | Reports Opened | Reports Closed | Warnings Issued | Moderations Issued | Suspensions Issued | Bans Issued |
| ---- | -------------- | -------------- | --------------- | ------------------ | ------------------ | ----------- |
| 2023 | 17 | 17 | 5 | 3 | 1 | 1 |
| 2022 | 21 | 24 | 6 | 3 | 0 | 0 |
| 2021 | 23 | 24 | 2 | 1 | 0 | 1 |
| 2020 | 20 | 16 | 8 | 4 | 2 | 0 |
While the volume of reports from 2020 to 2022 held steady at around 20 a year, there is a wide range in the level of severity across the cases investigated by the Code of Conduct Committee. Some persistent challenges continue to underline the importance of soft skills like communication and collaboration. However, global affairs, politics, and international conflicts often correlate with community conflicts. These cases often require more care and consideration than other reports.
Overall, the report shows a community safe enough for people to report incidents, including those involving high-profile members. The Code of Conduct Committee aims to humbly protect an environment and community culture where anyone can feel a part of the Friends foundation of the Four Foundations, and feel safe to be their authentic and genuine self in the community. However, it also shows that this is the first time in several years that reports fell below 20. This might be a sign of stabilization, as years of backlog and process debt were addressed in 2021, and the intense online pressure-cooker period of the global pandemic finally relents.
If you witness or are part of a situation that violates Fedora’s Code of Conduct, please open a private report on the [Code of Conduct repo] or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.
Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn’t okay, and you don’t want to make a big deal… open that report anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.
Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day-to-day in our community. Keep it up, and keep being awesome Fedora, we <3 you!
Fedora Project’s Code of Conduct and reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Matthew Miller; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community nominated members.
The post Fedora Code of Conduct Report 2023 appeared first on Fedora Community Blog.
My AI mini workstation from HP has seen some non-AI workloads this weekend. I installed Capture One for photo editing and a couple of software synthesizers. And realized along the way that while built-in speakers are nice, high-end audio is a lot better! :-)
For months, I have been listening to music on devices that are designed for speech: a pair of Jabra headphones and the speakers of my various laptops. There were many reasons for this, including peer pressure, and some hearing loss at a way too loud concert. I was also too lazy to use my high-end devices and tried to persuade myself that audio equipment designed for meetings is good enough for music too. Well…
This weekend, I installed various software synthesizers on my new computer. Not that I learned music or could play any instruments, but I still enjoy experimenting with music (well, with noise, actually :-) ). As I connected the machine to the big screen in the living room, I also connected it to my HiFi system. Suddenly, I realized how much better it sounds than my laptop or anything I’ve listened to in the past few months.
While making noise with a couple of software synths and listening to music from my TIDAL subscription, I also recharged my Focal headphones. My Focal Bathys headphones are not as good as my HiFi, but they still sound wonderful.
So I guess that after a few months long detour, I am back to using high-end audio gear whenever it is technically possible. I love the extra detail I can hear on my Heed Enigma speakers or on my Focal headphones. Of course, nothing can replace listening to live music at concerts, but high-end gear is much better at approximating the vibe of various live events than anything below it.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.
Over the last 20+ years in IT, I’ve seen automation evolve from a nice-to-have to a non-negotiable part of how organizations operate. Every company I’ve worked with has some form of automation. The problem is that, in many cases, what they have is not an automation strategy but rather a collection of individual scripts, cron jobs, and one-off solutions that were built to solve immediate problems. This is what I usually call ad-hoc automation or point automation, and while it works in the short term, it creates significant issues over time. This does not happen only in small or particularly tech-averse companies; it is a very common situation across sectors and sizes.

Hello travelers!
Loadouts for Genshin Impact v0.1.15 is OUT NOW with the addition of support for recently released characters like Varka and for recently released weapons like Gest of the Mighty Wolf from Genshin Impact Luna V or v6.4 Phase 2. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.
Besides its availability as a repository package on PyPI and as an archived binary on PyInstaller, Loadouts for Genshin Impact is now available as an installable package on Fedora Linux. Travelers using Fedora Linux 42 and above can install the package on their operating system by executing the following command.
$ sudo dnf install gi-loadouts --assumeyes --setopt=install_weak_deps=False
- Varka to the GI Loadouts roster by @sdglitched in #511
- Gest of the Mighty Wolf to the GI Loadouts roster by @sdglitched in #512
- gi_loadouts/pack directory in the package by @gridhead in #514

One character has debuted in this version release.
Varka is a claymore-wielding Anemo character of five-star quality.


Varka - Workspace and Results
One weapon has debuted in this version release.

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.
With an extensive suite of over 1550 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.
The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.
All rights to Genshin Impact assets used in this project are reserved by MiHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.
I finally upgraded my mail server to Debian 13 and, as expected, the Dovecot part was quite a ride.
The configuration syntax changed between Dovecot 2.3 (Debian 12) and Dovecot 2.4 (Debian 13),
so I started first with diffing my configuration against a vanilla Debian 12 one (this setup is slightly old) and then applied the same (logical) changes to a vanilla Debian 13 one.
This mostly went well.
Mostly because my user database is stored in SQL and while the Dovecot Configuration Upgrader says it can convert old dovecot-auth-sql.conf.ext files to the new syntax,
it only does so for the structure, not the SQL queries themselves.
While I don't expect it to be able to parse the queries and adopt them correctly,
at least a hint that the field names in userdb changed and might require adjustment would've been cool.
Once I got that all sorted, Dovecot would still refuse to let me in:
Error: sql: Invalid password in passdb: Weak password scheme 'MD5-CRYPT' used and refused
Yeah, right. Did I mention that this setup is old?
The quick cure for this is setting auth_allow_weak_schemes = yes in /etc/dovecot/conf.d/10-auth.conf,
but long term I really should upgrade the password hashes in the database to something more modern.
And this is what this post is about.
My database only contains hashed (and salted) passwords, so I can't just update them without changing the password. And while there are only 9 users in total, I wanted to play nice and professional. (LOL)
There is a Converting Password Schemes howto in the Dovecot documentation, but it uses a rather odd looking PHP script, wrapped in a shell script which leaks the plaintext password to the process list, and I really didn't want to remember how to write PHP to complete this task.
Luckily, I know Python.
The general idea is:
- With plaintext authentication mechanisms (auth_mechanisms = plain login), the plaintext password is available during login.
- After imap-login has verified the password against the old (insecure) hash in the database, we can execute a post-login script,
- which will connect to the database and update it with a new hash of the plaintext password.

To make the plaintext password available to the post-login script,
we add '%{password}' as userdb_plain_pass to the SELECT statement of our passdb query.
The original howto also says to add a prefetch userdb, which we do.
The sql userdb remains, as otherwise Postfix can't use Dovecot to deliver mail.
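The resulting passdb SELECT ends up along these lines. This is a sketch: the mail_users table and password_enc column are the names used in the Python script below, and exposing '%{password}' as userdb_plain_pass is exactly what the howto describes:

```sql
-- passdb query exposing the plaintext password to userdb (sketch)
SELECT username AS user,
       password_enc AS password,
       '%{password}' AS userdb_plain_pass
FROM mail_users
WHERE username = '%{user}'
```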
Now comes the interesting part.
We need to write a script that is executed by Dovecot's script-login and that will update the database for us.
Thanks to Python's passlib and mysqlclient,
the database and hashing parts are relatively straight forward:
#!/usr/bin/env python3
import os

import MySQLdb
import passlib.hash

DB_SETTINGS = {"host": "127.0.0.1", "user": "user", "password": "password", "database": "mail"}
SELECT_QUERY = "SELECT password_enc FROM mail_users WHERE username=%(username)s"
UPDATE_QUERY = "UPDATE mail_users SET password_enc=%(pwhash)s WHERE username=%(username)s"
SCHEME = "bcrypt"
EXPECTED_PREFIX = "$2b$"


def main():
    # https://doc.dovecot.org/2.4.3/core/config/post_login_scripting.html
    # https://doc.dovecot.org/2.4.3/howto/convert_password_schemes.html
    user = os.environ.get("USER")
    plain_pass = os.environ.get("PLAIN_PASS")
    if plain_pass is not None:
        db = MySQLdb.connect(**DB_SETTINGS)
        cursor = db.cursor()
        cursor.execute(SELECT_QUERY, {"username": user})
        result = cursor.fetchone()
        current_pwhash = result[0]
        if not current_pwhash.startswith(EXPECTED_PREFIX):
            hash_module = getattr(passlib.hash, SCHEME)
            pwhash = hash_module.hash(plain_pass)
            data = {"pwhash": pwhash, "username": user}
            cursor.execute(UPDATE_QUERY, data)
        cursor.close()
        db.close()


if __name__ == "__main__":
    main()
But if we add that as executable = script-login /etc/dovecot/dpsu.py to our imap-postlogin service,
as the howto suggests, the users won't be able to login anymore:
Error: Post-login script denied access to user
WAT?
Remember that shell script I wanted to avoid?
It ends with exec "$@".
Turns out the script-login "API" is rather interesting.
It's not "pass in a list of scripts to call and I'll call all of them".
It's "pass a list of scripts, I'll execv the first item and pass the rest as args, and every item is expected to execv the next one again". 🤯
With that (cursed) knowledge, the script becomes:
#!/usr/bin/env python3
import os
import sys

import MySQLdb
import passlib.hash

DB_SETTINGS = {"host": "127.0.0.1", "user": "user", "password": "password", "database": "mail"}
SELECT_QUERY = "SELECT password_enc FROM mail_users WHERE username=%(username)s"
UPDATE_QUERY = "UPDATE mail_users SET password_enc=%(pwhash)s WHERE username=%(username)s"
SCHEME = "bcrypt"
EXPECTED_PREFIX = "$2b$"


def main():
    # https://doc.dovecot.org/2.4.3/core/config/post_login_scripting.html
    # https://doc.dovecot.org/2.4.3/howto/convert_password_schemes.html
    user = os.environ.get("USER")
    plain_pass = os.environ.get("PLAIN_PASS")
    if plain_pass is not None:
        db = MySQLdb.connect(**DB_SETTINGS)
        cursor = db.cursor()
        cursor.execute(SELECT_QUERY, {"username": user})
        result = cursor.fetchone()
        current_pwhash = result[0]
        if not current_pwhash.startswith(EXPECTED_PREFIX):
            hash_module = getattr(passlib.hash, SCHEME)
            pwhash = hash_module.hash(plain_pass)
            data = {"pwhash": pwhash, "username": user}
            cursor.execute(UPDATE_QUERY, data)
        cursor.close()
        db.close()
    # hand off to the next item in the script-login chain
    os.execv(sys.argv[1], sys.argv[1:])


if __name__ == "__main__":
    main()
And the passwords are getting gradually updated as the users log in.
Once all are updated, we can remove the post-login script and drop the auth_allow_weak_schemes = yes.
Renewal time. That’s usually when I start questioning my life choices, at least the email-related ones.
I’ve been a Tuta user for almost a year now. However, there are some things that have been bugging me, and since I have to decide soon anyway, I think it’s a good time now to look back and reassess.
Before we dive in, some context - I switched from Gmail to Tuta in April of 2025. Then, I was choosing between Proton, Tuta, Fastmail, Mailfence and Zoho. Some (like Zoho and Fastmail) aren’t hosted in countries that respect my privacy, so I crossed them off. At the end, I was choosing between Tuta and Proton (and ultimately chose Tuta).
Last week we finally got the new secure boot setup fully switched over. We are now signing aarch64 grub2/kernel/fwupd just as we do the x86_64 versions. The aarch64 signed artifacts are in rawhide now, but will move to stable releases as testing permits.
Sadly my Lenovo slim7x doesn't boot correctly with the signed artifacts; I think it needs a firmware update or manual enrollment of the Microsoft certs. I'll try to test more with it when I can, but many other folks are seeing it work fine.
It's been a 7 year journey to get this done. Why so long? A few of the reasons in no particular order:
- At first we were not even sure MS would sign others on aarch64.
- Our old x86_64 setup was smart cards in 2 builders, and we didn't have any easy way to install more in aarch64 builders.
- They stopped making the smart cards we were using.
- There were a number of things that made the Fedora aarch64 kernel not work with secure boot, many around the 'lockdown' patches.
- Lack of time from everyone involved.
- Need for someone to write a way to use our normal signing server to sign these things (so we wouldn't need cards in builders).
- Lack of capacity in old smart cards to add new certs.
- And probably many more things I have forgotten about.
Feels great to get us in a better place and have signed aarch64 builds!
We had a mass update/reboot cycle this last week. It went pretty smoothly this time as we were not applying firmware updates or doing any other work.
We should be all caught up for the freeze next week....
Next Tuesday starts the Fedora 44 Final freeze. These are the weeks running up to the Fedora Linux 44 final release. So, if you need to get anything in, do so before Tuesday.
So the reason I was offline Thursday was that I was getting solar panels, a battery, and an inverter installed here. It's already pretty awesome. Look for a long blog post on it next week or so.
During this freeze I am hoping to get started on some projects I was meaning to do already, but got busy with the signing stuff: revamping our backups and moving more stuff to rhel10 (will do staging in freeze).
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.
RPMs of PHP version 8.5.5RC1 are available
RPMs of PHP version 8.4.20RC1 are available
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.
ℹ️ Installation: follow the wizard instructions.
ℹ️ Announcements:
Parallel installation of version 8.5 as Software Collection:
yum --enablerepo=remi-test install php85
Parallel installation of version 8.4 as Software Collection:
yum --enablerepo=remi-test install php84
Update of system version 8.5:
dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*
Update of system version 8.4:
dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*
ℹ️ Notice:
Software Collections (php84, php85)
Base packages (php)
Two great podcasts this week, with the creator of Claude Code, and another one with Werner Herzog.
And check out the links from other people, they are much better than mine.
Sorry about destroying your productivity with the 100 jumps game.
New addition to these posts:
Once I got hardware-accelerated AI working under Linux on my AI mini workstation from HP, my next goal was to make it easier to use. From this blog, you can read about my initial experiments with Open WebUI on Fedora Linux.

As Open WebUI is not yet available as a package in Fedora, my initial approach was to use containers. I found a Docker compose setup which was tested on Fedora Linux 43 according to its documentation: https://github.com/jesuswasrasta/ollama-rocm-webui-docker. As I (also) use Fedora 43, it sounded like a good choice.
It worked; however, I quickly realized that hardware acceleration for AI was not working. Instead, most CPU cores were running close to 100%. It was a good test for cooling: I could hear the miniature box from the next room through closed doors :-)

As it turned out, the content of the HSA_OVERRIDE_GFX_VERSION environment variable was incorrect. When I set it according to the docs, hardware acceleration still did not work. After removing the environment variable, ollama found the hardware, but it never answered a prompt anymore.
My next experiment: I kept using Open WebUI from the container, but installed ollama from the Fedora package repository directly on the system. The good news? Some smaller models ran really fast, using hardware acceleration. The bad news: most models failed to load with an error message saying the given model format was unknown.
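For anyone replicating this split setup, a quick way to sanity-check that the host ollama is reachable is its HTTP API: as far as I know it listens on localhost:11434 by default, and /api/tags lists the locally available models. A minimal sketch (the helper name is my own):

```python
import json
import urllib.request

# Hedged sketch: ollama's HTTP API listens on localhost:11434 by
# default; the /api/tags endpoint returns the locally pulled models.
def list_models(base_url: str = "http://localhost:11434") -> list[str]:
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]
```

If this returns your models but prompts still hang, the problem is on the acceleration side, not connectivity.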
I guessed that ollama was too old in Fedora 43. Solution? Update the whole system to Fedora 44 beta. It seems to have helped. A lot more models work now, including the largest freely available Granite models from IBM.
First of all: I’m an IBM Champion, and thus using IBM technologies is a given. But I also learned some background stories from a friend working at IBM on LSF, which makes it a personal choice as well.
What I’ve been showing here is AI inferencing on my HP AI system. But before the model can be used (for inferencing), it needs to be trained. These models are trained on large, GPU-rich compute clusters. To get an idea of the scale of such clusters, you can learn more in this research paper (https://arxiv.org/abs/2407.05467). It discusses the IBM Blue Vela system, which supports IBM’s GenAI mission. What’s interesting is that Blue Vela uses a more traditional HPC software stack, including IBM LSF for workload management and Storage Scale (GPFS) for rapid access to large data sets.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.
Recently, my mom came up to me: she had forgotten her NextCloud password again. Not a problem, just jump into the admin user and reset the password. But that got me thinking: would it be possible to set up SSO, so everyone would have just one password and account for all of my self-hosted services?
Apart from the already mentioned password problem, there are a few more things.
Fedora Infrastructure team will be applying updates to servers and rebooting them.
Many services will be affected. Most should only be down for a short time as their particular resources are rebooted; however, some may be down for a non-trivial amount of time due to RHEL-9 to RHEL-10 upgrades.
I have an aging, but fully functional MacBook. I bought it for syslog-ng testing, but I also use it for watching movies. Homebrew no longer fully supports old, Intel-based Macs. This blog shows how to compile the latest syslog-ng release on these old, but otherwise functional machines.
Read more at https://www.syslog-ng.com/community/b/blog/posts/compiling-syslog-ng-on-an-old-mac

Ever since I bought my AI mini workstation from HP, my goal was to run hardware accelerated artificial intelligence workloads in a Linux environment. Read more to learn how things turned out on Ubuntu and Fedora!
I have been using various AI tools for a while now: generating pictures of impossible situations, like a dinosaur climbing the Hungarian parliament building, finding information where a simple web search is useless, or having syslog-ng code explained to me. All these are nice, sometimes even useful; however, I prefer to know what is behind the magic. Well, at least part of it :-) I want to get a bottom-up view of the various components and processes, and to get my hands dirty. Hopefully this miniature but powerful box will help me get to know AI better.

As mentioned in my installing Ubuntu blog, the 24.04 LTS installer did not work on this machine. I found a nice tutorial about AI on the Ryzen AI Max+ 395 which mentioned using 25.10, so I installed that version instead of the LTS. It installed without any troubles, 3D graphics worked out of the box.
However, AI is a different story. ROCm, hardware acceleration for AI workloads on AMD chips, is only packaged for Ubuntu LTS releases. The workaround described in the tutorial was to use distrobox. Unfortunately, the steps described in the tutorial did not work. Containerization brought in various problems with permissions, software availability, and so on. Most likely an experienced distrobox user could resolve these. In my case, after reading the distrobox documentation for hours, I just gave up.
Next, I turned to Fedora Linux 43. The wiki page of the Fedora Heterogeneous Computing Special Interest Group proved to be a good starting point. Fedora has ROCm packaged as part of the distro, and the wiki page gives clear instructions on how to get started.
Once I set up user rights and installed the necessary packages, I was able to get some info about my hardware. You can see the output of rocminfo and rocm-clinfo at the bottom of this blog. I did not want to shorten those, but given the many lines of output, I was not sure if anyone would read the rest of my blog :-)
Of course, seeing info about the hardware is nice, but it’s even better to see it in action. The Ubuntu ROCm tutorial mentioned llama, so I started with that one. Luckily, Fedora includes it as a ready-to-install package, so I did not have to compile it from source. I installed huggingface-hub from a package as well:
dnf install python3-huggingface-hub llama-cpp
This allowed me to download the model mentioned in the tutorial and ask a few questions of the downloaded LLM. For now I just used the sample command line, but based on the output, llama found the hardware and used it. Next up: learn more about the available models.
You can find the output of the following command at the end of this blog:
llama-cli -m ~/models/llama-2-7b.Q4_K_M.gguf --no-mmap -ngl 99 -p "Explain quantum computing in simple terms:" -n 256
When I mentioned to a friend that hardware-accelerated AI seemed to work on my Linux box, he suggested I try it with PyTorch. Luckily this was available as a ready-to-install package for Fedora as well:
dnf install python3-torch
I was quite surprised, as the above command installed 8 GB worth of RPM packages (texlive accounting for a good part of it). I do not know much about PyTorch, but did a quick test anyway. Here is the really complex Python code I built based on the documentation:
import torch

# Create a random 5x3 tensor on the CPU.
x = torch.rand(5, 3)
print(x)
print('Is hw AI accel available')
# On ROCm builds of PyTorch, torch.cuda.is_available() reports AMD GPU
# availability too, since ROCm is exposed through the CUDA API.
print(torch.cuda.is_available())
And here is the output from the above code:
tensor([[0.1034, 0.0183, 0.1233],
[0.1787, 0.0097, 0.8426],
[0.2872, 0.6351, 0.8468],
[0.8226, 0.2991, 0.8539],
[0.2061, 0.6422, 0.8146]])
Is hw AI accel available
True
It’s simple, but looks promising :-)
czanik@fedora:~$ rocminfo
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
Runtime Ext Version: 1.7
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
XNACK enabled: NO
DMAbuf Support: YES
VMM Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
Uuid: CPU-XX
Marketing Name: AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 49152(0xc000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 5187
BDFID: 0
Internal Node ID: 0
Compute Unit: 32
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Memory Properties:
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 131136832(0x7d0fd40) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 131136832(0x7d0fd40) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 131136832(0x7d0fd40) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 4
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 131136832(0x7d0fd40) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: gfx1151
Uuid: GPU-XX
Marketing Name: Radeon 8060S Graphics
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 32(0x20) KB
L2: 2048(0x800) KB
L3: 32768(0x8000) KB
Chip ID: 5510(0x1586)
ASIC Revision: 0(0x0)
Cacheline Size: 128(0x80)
Max Clock Freq. (MHz): 2900
BDFID: 50432
Internal Node ID: 1
Compute Unit: 40
SIMDs per CU: 2
Shader Engines: 2
Shader Arrs. per Eng.: 2
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Memory Properties: APU
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 32(0x20)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 32(0x20)
Max Work-item Per CU: 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 34
SDMA engine uCode:: 18
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 65568416(0x3e87ea0) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 65568416(0x3e87ea0) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Recommended Granule:0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx1151
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
ISA 2
Name: amdgcn-amd-amdhsa--gfx11-generic
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*******
Agent 3
*******
Name: aie2
Uuid: AIE-XX
Marketing Name: AIE-ML
Vendor Name: AMD
Feature: AGENT_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 1(0x1)
Queue Min Size: 64(0x40)
Queue Max Size: 64(0x40)
Queue Type: SINGLE
Node: 0
Device Type: DSP
Cache Info:
L2: 2048(0x800) KB
L3: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 0(0x0)
Max Clock Freq. (MHz): 0
BDFID: 0
Internal Node ID: 0
Compute Unit: 0
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:0
Memory Properties:
Features: AGENT_DISPATCH
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: KERNARG, COARSE GRAINED
Size: 131136832(0x7d0fd40) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 65536(0x10000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:0KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 131136832(0x7d0fd40) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*** Done ***
and
czanik@fedora:~$ rocm-clinfo
Number of platforms: 1
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 2.1 AMD-APP (3649.0)
Platform Name: AMD Accelerated Parallel Processing
Platform Vendor: Advanced Micro Devices, Inc.
Platform Extensions: cl_khr_icd cl_amd_event_callback
Platform Name: AMD Accelerated Parallel Processing
Number of devices: 1
Device Type: CL_DEVICE_TYPE_GPU
Vendor ID: 1002h
Board name: Radeon 8060S Graphics
Device Topology: PCI[ B#197, D#0, F#0 ]
Max compute units: 20
Max work items dimensions: 3
Max work items[0]: 1024
Max work items[1]: 1024
Max work items[2]: 1024
Max work group size: 256
Preferred vector width char: 4
Preferred vector width short: 2
Preferred vector width int: 1
Preferred vector width long: 1
Preferred vector width float: 1
Preferred vector width double: 1
Native vector width char: 4
Native vector width short: 2
Native vector width int: 1
Native vector width long: 1
Native vector width float: 1
Native vector width double: 1
Max clock frequency: 2900Mhz
Address bits: 64
Max memory allocation: 57070749280
Image support: Yes
Max number of images read arguments: 128
Max number of images write arguments: 8
Max image 2D width: 16384
Max image 2D height: 16384
Max image 3D width: 16384
Max image 3D height: 16384
Max image 3D depth: 8192
Max samplers within kernel: 16
Max size of kernel argument: 1024
Alignment (bits) of base address: 2048
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
Denorms: Yes
Quiet NaNs: Yes
Round to nearest even: Yes
Round to zero: Yes
Round to +ve and infinity: Yes
IEEE754-2008 fused multiply-add: Yes
Cache type: Read/Write
Cache line size: 128
Cache size: 32768
Global memory size: 67142057984
Constant buffer size: 57070749280
Max number of constant args: 8
Local memory type: Local
Local memory size: 65536
Max pipe arguments: 16
Max pipe active reservations: 16
Max pipe packet size: 1236174432
Max global variable size: 57070749280
Max global variable preferred total size: 67142057984
Max read/write image args: 64
Max on device events: 1024
Queue on device max size: 8388608
Max on device queues: 1
Queue on device preferred size: 262144
SVM capabilities:
Coarse grain buffer: Yes
Fine grain buffer: Yes
Fine grain system: No
Atomics: No
Preferred platform atomic alignment: 0
Preferred global atomic alignment: 0
Preferred local atomic alignment: 0
Kernel Preferred work group size multiple: 32
Error correction support: 0
Unified memory for Host and Device: 1
Profiling timer resolution: 1
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities:
Execute OpenCL kernels: Yes
Execute native function: No
Queue on Host properties:
Out-of-Order: No
Profiling : Yes
Queue on Device properties:
Out-of-Order: Yes
Profiling : Yes
Platform ID: 0x7ffb97d11d80
Name: gfx1151
Vendor: Advanced Micro Devices, Inc.
Device OpenCL C version: OpenCL C 2.0
Driver version: 3649.0 (HSA1.1,LC)
Profile: FULL_PROFILE
Version: OpenCL 2.0
Extensions: cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program
root@fedora:~# llama-cli -m ~/models/llama-2-7b.Q4_K_M.gguf --no-mmap -ngl 99 -p "Explain quantum computing in simple terms:" -n 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
build: 0 (unknown) with HIP version: 6.4.43484-9999 for x86_64-redhat-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device ROCm0 (Radeon 8060S Graphics) - 64031 MiB free
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /root/models/llama-2-7b.Q4_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 15
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
print_info: file format = GGUF V2
print_info: file type = Q4_K - Medium
print_info: file size = 3.80 GiB (4.84 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 4096
print_info: n_embd = 4096
print_info: n_layer = 32
print_info: n_head = 32
print_info: n_head_kv = 32
print_info: n_rot = 128
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 1
print_info: n_embd_k_gqa = 4096
print_info: n_embd_v_gqa = 4096
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 11008
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 4096
print_info: rope_finetuned = unknown
print_info: model type = 7B
print_info: model params = 6.74 B
print_info: general.name = LLaMA v2
print_info: vocab type = SPM
print_info: n_vocab = 32000
print_info: n_merges = 0
print_info: BOS token = 1 '<s>'
print_info: EOS token = 2 '</s>'
print_info: UNK token = 0 '<unk>'
print_info: LF token = 13 '<0x0A>'
print_info: EOG token = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors: ROCm0 model buffer size = 3820.94 MiB
load_tensors: CPU model buffer size = 70.31 MiB
..................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 10000.0
llama_context: freq_scale = 1
llama_context: ROCm_Host output buffer size = 0.12 MiB
llama_kv_cache_unified: ROCm0 KV buffer size = 2048.00 MiB
llama_kv_cache_unified: size = 2048.00 MiB ( 4096 cells, 32 layers, 1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context: ROCm0 compute buffer size = 288.00 MiB
llama_context: ROCm_Host compute buffer size = 16.01 MiB
llama_context: graph nodes = 1158
llama_context: graph splits = 2
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 16
system_info: n_threads = 16 (n_threads_batch = 16) / 32 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | REPACK = 1 |
sampler seed: 2232334333
sampler params:
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist
generate: n_ctx = 4096, n_batch = 2048, n_predict = 256, n_keep = 1
Explain quantum computing in simple terms: what is it, how does it work, and what are its potential benefits?
This is a difficult question to answer because quantum computing is not yet a well-defined field of study, and many of the potential applications are still being researched. However, we can say that quantum computing is a type of computation that relies on the principles of quantum mechanics (the branch of physics that describes the behaviour of particles such as electrons and photons).
These particles obey a set of rules that are different from those obeyed by classical computers, which rely on the principles of classical mechanics. Quantum computing uses a particle’s quantum state (such as its spin) to store information. This means that quantum computers can perform computations that are not possible on classical computers.
In the simplest terms, quantum computing is a type of computation that takes advantage of the unique properties of quantum mechanics. These properties include superposition, entanglement, and non-locality. Superposition is the ability of a quantum system to exist in multiple states simultaneously.
This means that a quantum system can be in two different places at the same time, or have two different properties at the same time. Entanglement is the ability of two quantum systems to be inter
llama_perf_sampler_print: sampling time = 4.27 ms / 265 runs ( 0.02 ms per token, 62075.43 tokens per second)
llama_perf_context_print: load time = 631.46 ms
llama_perf_context_print: prompt eval time = 63.57 ms / 9 tokens ( 7.06 ms per token, 141.57 tokens per second)
llama_perf_context_print: eval time = 7110.09 ms / 255 runs ( 27.88 ms per token, 35.86 tokens per second)
llama_perf_context_print: total time = 7184.25 ms / 264 tokens
These are just my first steps. Most of the time I was not even fully aware of what I was doing; I just reused some sample command lines and code. These experiments were good enough to see that AI works on Linux as well, not just on Windows.
This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.
After a full year of preparation, the Community Linux Engineering (CLE) team is excited to announce that Fedora Forge, powered by Forgejo, is ready for use! We are proud of this modern Open Source platform and what it means for the future of Fedora Infrastructure. While pagure.io has been a vital part of our community for many years, the time has come to retire our homegrown forge and transition to this powerful new tool.
The final cutover is planned for Flock to Fedora 2026. We strongly encourage teams to migrate their projects well before the conference to ensure a smooth transition. The pagure.io migration is only the first step in a broader infrastructure modernization effort. By the 2027 Fedora 46 release, we plan to retire all remaining Pagure instances across the project, including the package source repositories on src.fedoraproject.org. Getting familiar with Fedora Forge now will help ensure your team is ready as the rest of the Fedora ecosystem transitions.
If you own a project at pagure.io, you must migrate out of it before June 2026. We’ve prepared a Migration Guide. If you’re unsure about what’s happening, please keep reading.
Historically, the Fedora Project utilized pagure.io, which operated as a general-use public forge where Fedora repositories coexisted alongside personal projects, unrelated upstream software, and individual portfolios.
The Fedora Forge (powered by Forgejo) intentionally adopts a narrower scope. It is an internal piece of project infrastructure, explicitly provisioned to host the code, documentation, and tooling that directly build, manage, and govern the Fedora Project.
What belongs on Fedora Forge:
What does NOT belong:
Migrating now avoids the “last-minute bottleneck” and gives your team time to adapt to the new resource limits outlined in the Usage Policy:
We are aware that Forgejo is not a 1:1 clone of Pagure. Most notably, private issues within public repositories are not currently supported in the same way. The CLE team is actively working with the upstream Forgejo community to bridge these functional gaps.
The Fedora Council currently has a draft usage policy under consideration, aimed at filling in the details of usage of the new forge instances inside the Fedora Project. Please watch for an additional article here on the Fedora Community Blog that starts the formal feedback process ahead of a Council vote on the policy.
Need help? For technical issues, please open a ticket on the Fedora Infrastructure Tracker or ask in the #fedora-admin Matrix channel.
How do authentication and team management work?
Authentication is fully integrated with the Fedora Account System (FAS) via OIDC. Team membership is directly mapped to FAS groups; if you are in a group, your permissions will automatically map to the corresponding Organization/Team on the Forge.
What happens to my API tokens and automation scripts?
Pagure API tokens will not migrate. You must generate new tokens within your account or organization settings on the new Forge and update your scripts to point to the Forgejo API.
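As an illustration (a sketch, not official migration documentation): Forgejo’s REST API is Gitea-compatible, and a personal token is passed in an Authorization header. The base URL, path, and token below are placeholders.

```python
import urllib.request

# Hedged sketch: build an authenticated request against a Forgejo
# instance's Gitea-compatible API. Token and URL are placeholders.
def api_request(base_url: str, token: str, path: str) -> urllib.request.Request:
    return urllib.request.Request(
        f"{base_url}/api/v1/{path}",
        headers={"Authorization": f"token {token}"},
    )

# Example (run against a real instance with a real token):
# with urllib.request.urlopen(api_request("https://forge.fedoraproject.org", token, "user")) as r:
#     print(r.read())
```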
Will my local git remote URLs break?
Yes. Once your repository is migrated, pushes to Pagure.io will be rejected. Update your remotes to the new instance:
git remote set-url origin https://forge.fedoraproject.org/<organization>/<your-project>.git
Are Issues and PRs migrating with full fidelity?
Yes. As outlined in the documentation, our tools port Pull Requests, Issues, and Issue Dependencies/Assignments. Pagure-specific tags will be mapped to Forgejo Labels.
Where do I go if my project’s migration fails?
The CLE team is monitoring the #fedora-forge Matrix channel. Reach out there for help with permission desyncs, missing refs, or pipeline breakages.
The post The forge is our new home. appeared first on Fedora Community Blog.
I’ve seen “Warranty Void if Regenerated” going around, particularly among the subset of my friends who believe “LLMs are slop generators”. They typically characterize it as overly optimistic - hopeful, if not downright fantasy.
The “slop generator” position is, in my opinion, demonstrably false, as countless successful code generation outcomes contradict such a sweeping generalization. The dogged pursuit of this position clouds the issue of the real concerns with LLMs as built and used today. I believe there are legitimate company ethics, environmental, and license/copyright concerns worthy of consideration in this space. I also believe that we are still in a highly emotional place where those concerns tend to be both understated and overstated depending on who is talking.
The story consists of three vignettes told from the perspective of Tom, a post-transition specification repair person who works with farmers. In this universe, all code is generated from specs and average humans are making custom software constantly. Domain experts are needed to refine, debug, and in some cases wholesale write the specifications.
There is also a great discussion of the human impact of this post-transition existence. I encourage you to read it, but I’m not addressing that below - not because it isn’t important, but because I want to preserve focus on the “slop generator” drumbeat that feels so misguided.
All in all, I think the piece is well written and that Scott Werner did a great job. This isn’t a critique of the writing or the story itself. I also don’t know what Scott’s perspective is on LLMs, though their public pages and site lead me to believe they are not anti-generative AI.
I’d been harboring a delusion in the back of my mind about trying to write a story about a “machine whisperer”. Scott’s piece reminded me that I am likely still not a creative writer, and I’m glad for their work here.
My thesis here is simple: this story reads like a set of specification and contract failures. It does not read like evidence that code generation inherently produces “slop” or that opaque code from code generation is inherently a failed concept. To be clear, this is not a critique of Scott’s view, but of the “slop generator” viewpoint.
Margaret has generated software that pulls in various data sets from both their farm and external sources to predict the best time to harvest. Their latest crop was harvested before it should have been, and Tom realizes that the specification failed to include a requirement to raise an error if a data source’s structure or methodology changed. Instead, the system absorbed the data from an updated methodology and didn’t change how it used that data.
This is shown to be a specification problem. The spec as written didn’t suggest that changes were possible or that they should be monitored for, so the generated system didn’t do that.
This happens, I suspect, with some regularity in hand-coded systems too, but my point isn’t that it’s normal. When it happens in a hand-coded system, it is wrong there as well. And, importantly, it is also a specification error.
There may never have been a specification in the first place and the developer was just expected to figure this out. Depending on their experience and other conditions, they either did … or they didn’t. A clearer spec or set of standards (a/k/a a system prompt) would have fixed this in both cases.
Scott introduces pit crews in this anecdote. These are people who monitor ongoing quality and concerns.
Today we often approximate this with monitoring systems that we hope are checking the right things, perhaps even with real end-to-end live tests running on a regular basis. We don’t generally dedicate human teams to it.
Whether we ever hit post-transition or not, this calls for a conversation: is QE/QA solely a pre-ship function, or should we be leveraging that knowledge to monitor delivered software in ways that go deeper than what we typically monitor today? What does the SRE practice in this space look like?
Framed that way, the pit crew in the story is less a bandage for sloppy generated code and more the missing extension of our specifications and contracts into how we watch systems evolve over time.
Ethan has generated a multitude of tools and they are all communicating with each other. Ethan is a microservice machine.
Ethan, much like Margaret, has a data feed problem. This time one of his own tools changed its methodology and calculated a value per-hundredweight instead of per-head. While not stated in the story, that output unit was presumably chosen at generation time because it wasn’t pinned down in the specification, and the specification also didn’t have a way (or likely even a requirement) to flag changes. The downstream tool didn’t get a read failure but began using the new data value as though it was still per-head. This resulted in poor market price prediction.
The story is similar to Margaret’s except it is more like when Team A breaks Team B in your own company.
For me it raises the interesting point that while we tend to believe otherwise, in many cases our APIs and data formats are our only true contracts. They operate only at the level where they exist. The internals of our dependencies, or the work of other teams, are opaque, and you could say that they may “regenerate” their code every day of the week and you just have to hope it still works for your consumption and use. You have to rely on them not breaking the contract and ensure the contract provides the guarantees you need.
A choreographer is a post-transition architect. It is, in my opinion, the thing we should all be if we are going to use LLMs to generate code.
Here a choreographer goes through Ethan’s systems and defines their interface contracts and layers. They also notice that some tools are unnecessary, while others have formed a sub-network that has no effect. The output of this person’s work is a cleaned up system that functions as a whole and not a set of discrete parts.
This is something we already have to do in large systems, and it’s something that people generating code still have to do. I suspect that some concepts like Gastown try to push parts of this work into a different layer of tooling. And it may even work.
LLM generation and reasoning capabilities keep improving, but none of this eliminates the need for this role or for specification correctness. Specification correctness is something we’ve basically never had. Even waterfall failed here.
In this sense, the story reads less like an indictment of generation and more like a warning about what happens when we refuse to name, own, and maintain those contracts across a growing system.
Carol’s farm illustrates the ugly mess of things we give automation and then complain about.
In this specific case there is a new irrigation system that uses all of the sensors it has to maintain a 60% moisture level across the farm. This results in under- and over-irrigation in some places, because the moisture level there is influenced by external factors. The system is doing exactly what it was asked to do. The problem is that the target it was given is a bad fit for the actual farm, not that the generated system is inherently bad.
Note: I am not a farmer, so I am taking this example at face value.
The short version is that drainage is funny in some places, other places are getting more wind, and still others need slightly differing levels based on the actual crop in that spot. None of this data has been provided to the system, and the story makes it clear that most of it is not in any system.
The farmer just understands their land and can look at it and tell you what is going to happen based on 30 years of real history and 30 years of experience. This is also not new. This is the art and practice of both coding and system administration, and we have failed to codify it usefully to date. We shouldn’t hold our new system accountable for that, but we also shouldn’t pretend that “just write a better spec” is an easy button when so much of the domain is still tacitly known and not shared beyond tribal means.
This is perhaps the one vignette that gives me pause. Even if we can find code generation (it doesn’t have to be LLMs) that writes to a specification, we may still be unsuccessful when our measurements, abstractions, and language can’t yet capture the thing we actually care about.
Right now we make surgical tweaks to the code to encode these lessons as we learn them. Specifying them in human language is often difficult, and maybe that is the core problem. The boundary here isn’t really “hand-written vs generated code”, it is between where, as technologists, we have experience stating precisely enough and where we don’t have a history of doing that well.
But we work in a precise space. In the case of Carol’s farm, Carol and Tom are able to describe the core problems pretty quickly, and I suspect, given time, could come up with data feeds, additional sensors, or equations that describe the issues sufficiently to fix the irrigation system.
It would be hyper-customized to Carol’s farm, but in many ways that is what she wants and needs - and it’s something we fail to deliver, in general, today. Even here, though, calling the outcome “slop” feels like a category error: the system is faithfully pursuing the narrow, naive target we gave it, not spewing random garbage.
I wrote this piece in part because the anti-LLM rhetoric of “they are slop generators” gets under my skin. There are a lot of valid reasons to be anti-LLM today. This is not one.
Reading the story reinforced that for me: what fails in these vignettes are specs, contracts, and incentives, not some inherent “slop” property of generated code. The story isn’t an indictment of generated code, it’s a parable about the timeless need for human wisdom, clear communication, and rigorous oversight, no matter how the code comes to be.
I’d like to see our LLM conversations stick closer to the concrete and demonstrably true. Let’s focus on what these systems do, where they fail, and how our specs and contracts are part of that story, instead of getting pulled into slogans like “slop generator” that, by being false, derail the conversation. This creates space for us to have the real conversations that matter around ethics, the environment, and training data usage.
Pretty much everything I deal with requires parsing ASN.1 encodings, with the definitions published as part of internet RFCs: certificates are encoded using DER, LDAP exchanges use BER, and Kerberos packets use DER as well. ASN.1 use is a never-ending source of security issues in pretty much all applications, so having safer ASN.1 processing is important to any application developer.
In FreeIPA we are using three separate ASN.1 libraries: pyasn1 and x509 (part of PyCA) for Python code, and asn1c code generator for C code. In fact, we use more: LDAP server plugins also use OpenLDAP’s lber library, while Kerberos KDC plugins also use internal MIT Kerberos parsers.
The PyCA developers noted in their State of OpenSSL statement:
[…] when pyca/cryptography migrated X.509 certificate parsing from OpenSSL to our own Rust code, we got a 10x performance improvement relative to OpenSSL 3 (n.b., some of this improvement is attributable to advantages in our own code, but much is explainable by the OpenSSL 3 regressions). Later, moving public key parsing to our own Rust code made end-to-end X.509 path validation 60% faster — just improving key loading led to a 60% end-to-end improvement, that’s how extreme the overhead of key parsing in OpenSSL was.
That’s a 16x performance improvement over OpenSSL 3. OpenSSL has improved its performance since then, but it still pays an overhead for a very flexible design that allows loading cryptographic implementations from dynamic modules (providers). Support for externally-provided modules is essential for adding new primitives and for government-enforced standards (such as FIPS 140), where implementations have to be validated in advance and code changes cannot happen without an expensive and slow re-validation process.
Nevertheless, in FreeIPA we focus on integrating with Linux distributions. Fedora, CentOS Stream, and RHEL enforce crypto consolidation rules, where all packaged applications must use the same crypto primitives provided by the operating system. We can process metadata ourselves, but all cryptographic operations still have to go through OpenSSL and NSS. And paying large performance costs during metadata processing would hurt infrastructure components such as FreeIPA.
FreeIPA is a large beast. Aside from its management component, written in Python, it has more than a dozen plugins for the 389-ds LDAP server, plugins for the MIT Kerberos KDC, plugins for Samba, and tight integration with SSSD, all written in C. Its default certificate authority software, Dogtag PKI, is written in Java and relies on its own stack of Java and C dependencies. We use PyCA’s x509 module for certificate processing in Python code, but we cannot use it and its underlying ASN.1 libraries from C, as those libraries either aren’t exposed to C applications or are intentionally limited to PKI-related tasks.
For 2026-2028, I’m focusing on enabling FreeIPA to handle post-quantum cryptography (PQC) as part of the Quantum-Resistant Cryptography in Practice (QARC) project. The project is funded by the European Union under the Horizon Europe framework programme (Grant Agreement No. 101225691) and supported by the European Cybersecurity Competence Centre. One of the well-publicized aspects of moving to PQC certificates is their size. The following table (Table 5 from the Post-Quantum Cryptography for Engineers IETF draft) summarizes it well:
| PQ Security Level | Algorithm | Public key size (bytes) | Private key size (bytes) | Signature size (bytes) |
|---|---|---|---|---|
| Traditional | RSA2048 | 256 | 256 | 256 |
| Traditional | ECDSA-P256 | 64 | 32 | 64 |
| 1 | FN-DSA-512 | 897 | 1281 | 666 |
| 2 | ML-DSA-44 | 1312 | 2560 | 2420 |
| 3 | ML-DSA-65 | 1952 | 4032 | 3309 |
| 5 | FN-DSA-1024 | 1793 | 2305 | 1280 |
| 5 | ML-DSA-87 | 2592 | 4896 | 4627 |
Public keys for ML-DSA-65 certificates are 7.6x bigger than RSA-2048 ones. You need to handle public keys in multiple situations: when verifying certificates against known certificate authorities (CAs), when matching their properties for validation and identity derivation during authorization, and when storing them. FreeIPA uses LDAP as a backend, so storing 7.6 times more data directly affects your scalability as the number of users, machines, or Kerberos services grows. And since certificates are all ASN.1 encoded, I naturally wanted to establish a performance baseline for ASN.1 parsing.
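As a quick sanity check on that ratio, the numbers from the draft’s table work out directly (a throwaway sketch, using only the sizes quoted above):

```rust
fn main() {
    // Public key sizes in bytes, taken from the IETF draft table above.
    let rsa2048_pub = 256.0_f64;
    let ml_dsa_65_pub = 1952.0_f64;

    let ratio = ml_dsa_65_pub / rsa2048_pub;
    println!("ML-DSA-65 public keys are {ratio:.1}x the size of RSA-2048 ones");
    // 1952 / 256 = 7.625, which rounds to the 7.6x quoted in the text.
    assert!((ratio - 7.625).abs() < 1e-9);
}
```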
I started with a small task: creating a Rust library, synta, to decode and encode ASN.1 with the help of AI tooling. It quickly grew to have its own ASN.1 schema parser and code generation tool. With those in place, I started generating more code, this time to process X.509 certificates, handle Kerberos packet structures, and so on. Throwing different tasks at Claude Code led to iterative improvements. Over a couple of months we progressed to a project with more than 60K lines of Rust code.
| Language | files | blank | comment | code |
|---|---|---|---|---|
| Rust | 207 | 9993 | 17492 | 67284 |
| Markdown | 52 | 5619 | 153 | 18059 |
| Python | 41 | 2383 | 2742 | 7679 |
| C | 17 | 852 | 889 | 4333 |
| Bourne Shell | 8 | 319 | 482 | 1640 |
| C/C++ Header | 4 | 319 | 1957 | 1138 |
| TOML | 20 | 196 | 97 | 896 |
| YAML | 1 | 20 | 46 | 561 |
| make | 4 | 166 | 256 | 493 |
| CMake | 3 | 36 | 25 | 150 |
| JSON | 6 | 0 | 0 | 38 |
| diff | 1 | 6 | 13 | 29 |
| SUM | 364 | 19909 | 24152 | 102300 |
I published some of the synta crates on crates.io yesterday; the whole project is available at codeberg.org/abbra/synta. In total, there are 11 crates, though only seven are published (and synta-python is also available on PyPI):
| Crate | Lines (src/ only) |
|---|---|
| synta | 10572 |
| synta-derive | 2549 |
| synta-codegen | 17578 |
| synta-certificate | 4549 |
| synta-python | 8953 |
| synta-ffi | 7843 |
| synta-krb5 | 2765 |
| synta-mtc | 7876 |
| synta-tools | 707 |
| synta-bench | 0 |
| synta-fuzz | 3551 |
The benchmarking, fuzzing, and tools crates aren’t published; they are only needed for development purposes.
The numbers below were obtained on a Lenovo ThinkPad P1 Gen 5 (12th Gen Intel(R) Core(TM) i7-12800H, 64 GB RAM) running Fedora 42. This is pretty much 3-4 year old hardware.
Benchmarking is what brought this project to life, so let’s look at the numbers. When dealing with certificates, ASN.1 encoding can be parsed in different ways: you can visit every structure, or you can stop at the outer shells and only visit the nested structures when you really need them. The former is “parse+fields” and the latter is “parse-only” in the following table, which compares synta with various Rust crates (and with OpenSSL/NSS, which were accessible through their Rust FFI bindings):
| Library | Parse-only | Parse+fields | vs synta (parse-only) | vs synta (parse+fields) |
|---|---|---|---|---|
| synta | 0.48 µs | 1.32 µs | — | — |
| cryptography-x509 | 1.45 µs | 1.43 µs | 3.0× slower | 1.1× slower |
| x509-parser | 2.01 µs | 1.99 µs | 4.2× slower | 1.5× slower |
| x509-cert | 3.16 µs | 3.15 µs | 6.6× slower | 2.4× slower |
| NSS | 7.90 µs | 7.99 µs | 16× slower | 6.1× slower |
| rust-openssl | 15.4 µs | 15.1 µs | 32× slower | 11× slower |
| ossl | 16.1 µs | 15.8 µs | 33× slower | 12× slower |
“Parse+fields” tests access every named field: serial number, issuer/subject DNs, signature algorithm OID, signature bytes, validity period, public key algorithm OID, public key bytes, and version. The “parse+fields” speedup is the fair end-to-end comparison: synta’s parse-only advantage is large because most fields are stored as zero-copy slices deferred until access, while other libraries must materialise all fields eagerly at parse time.
The dominant cost in X.509 parsing is Distinguished Name traversal: a certificate’s issuer and subject each contain a SEQUENCE OF SET OF SEQUENCE with per-attribute OID lookup. synta defers this entirely by storing the Name as a RawDer<'a> — a pointer+length into the original input with no decoding. cryptography-x509 takes a similar deferred approach. The nom-based and RustCrypto libraries decode Names eagerly. NSS goes further and formats them into C strings, which is the dominant fraction of its 16× parse overhead.
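The zero-copy deferral idea can be sketched in a few lines of Rust. This is an illustration of the technique only, not synta’s actual API; the type and method bodies here are made up:

```rust
// A borrowed, undecoded DER span: just a view into the input buffer.
#[derive(Clone, Copy)]
struct RawDer<'a> {
    bytes: &'a [u8],
}

struct Certificate<'a> {
    // Stored at parse time as slices into the original DER input;
    // nothing is decoded or allocated until the caller asks for it.
    issuer: RawDer<'a>,
    subject: RawDer<'a>,
}

impl<'a> Certificate<'a> {
    // Accessing the raw span is just a pointer+length read (~4 ns).
    fn issuer_raw(&self) -> &'a [u8] {
        self.issuer.bytes
    }

    // Decoding into a display string happens only on demand.
    fn issuer_dn(&self) -> String {
        // A real implementation would walk the SEQUENCE OF SET OF
        // SEQUENCE structure here; we fake it for the sketch.
        format!("<{} DER bytes>", self.issuer.bytes.len())
    }
}

fn main() {
    let input = [0x30u8, 0x03, 0x02, 0x01, 0x05]; // pretend DER
    let cert = Certificate {
        issuer: RawDer { bytes: &input },
        subject: RawDer { bytes: &input },
    };
    assert_eq!(cert.issuer_raw().len(), 5);
    println!("{}", cert.issuer_dn());
}
```

The key design point is the lifetime `'a`: the certificate never owns its data, so “parsing” the issuer costs nothing until a caller explicitly asks for a formatted DN.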
For benchmarking I used certificates from PyCA test vectors. There are a few certificates with different properties, so we parse each of them multiple times and then average the numbers:
| Certificate | synta | cryptography-x509 | x509-parser | x509-cert | NSS |
|---|---|---|---|---|---|
| cert_00 (NoPolicies) | 1333.7 ns | 1386.7 ns | 1815.9 ns | 2990.6 ns | 7940.3 ns |
| cert_01 (SamePolicies-1) | 1348.8 ns | 1441.0 ns | 2033.4 ns | 3174.3 ns | 7963.8 ns |
| cert_02 (SamePolicies-2) | 1338.6 ns | 1440.1 ns | 2120.1 ns | 3205.6 ns | 8206.8 ns |
| cert_03 (anyPolicy) | 1362.4 ns | 1468.3 ns | 2006.2 ns | 3194.5 ns | 7902.4 ns |
| cert_04 (AnyPolicyEE) | 1232.9 ns | 1424.7 ns | 1968.6 ns | 3168.1 ns | 7913.1 ns |
| Average | 1323 ns | 1432 ns | 1989 ns | 3147 ns | 7985 ns |
The gap between synta (1.32 µs) and cryptography-x509 (1.43 µs) is tighter here than in parse-only (3.0×) because synta’s field access includes two format_dn() calls (~800 ns combined) that cryptography-x509 gets effectively for free (its offsets were computed at parse time). Synta leads by ~8% overall.
Now, when parsing PQC certificates, an interesting thing happens. First, it is faster to parse ML-DSA than traditional certificates.
| Certificate | synta | cryptography-x509 | x509-parser | x509-cert | NSS |
|---|---|---|---|---|---|
| ML-DSA-44 | 1030.9 ns | 1256.4 ns | 1732.2 ns | 2666.0 ns | 7286.9 ns |
| ML-DSA-65 | 1124.9 ns | 1237.5 ns | 1690.5 ns | 2664.2 ns | 7222.1 ns |
| ML-DSA-87 | 1102.6 ns | 1226.5 ns | 1727.2 ns | 2696.6 ns | 7284.6 ns |
| Average | 1086 ns | 1240 ns | 1717 ns | 2675 ns | 7265 ns |
synta’s ML-DSA parse+fields (1.09 µs) is faster than its traditional parse+fields (1.32 µs)
because ML-DSA test certificates have shorter Distinguished Names (one attribute each in issuer and subject vs multiple attributes in traditional certificates in the test above). The signature BIT STRING — which is 2,420–4,627 bytes for ML-DSA — is accessed as a zero-copy slice with no size-dependent cost.
Imagine your app needs to check whether the certificate presented by a client is known to you (e.g. belongs to a trusted set of CAs). A library like OpenSSL looks at the client’s certificate, extracts identifiers of the certificate issuer, and looks up whether that issuer is known in the CA database. That requires looking up properties of the certificates in the database. The faster we can do that, the better.
All those numbers in the previous section are for a single certificate being parsed millions of times. In a real app we often need to validate the certificate against a system-wide database of certificate authorities. The database used by Fedora and other Linux distributions comes from Firefox. It contains 180 self-signed root CA certificates for all public CAs with diverse key types (RSA 2048/4096, ECDSA P-256/P-384) and DN structures. The median cert by DER size is “Entrust.net Premium 2048 Secure Server CA” (1,070 bytes); the benchmark uses this cert for single-certificate and field-access sub-benchmarks to get stable results that are not sensitive to certificate-size outliers.
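The lookup described above maps naturally onto indexing the trust store by raw issuer bytes, so the hot path never formats a DN. A hand-wavy sketch of that pattern (the function and data here are illustrative, not synta’s real chain-building code):

```rust
use std::collections::HashMap;

// Index trusted CAs by their raw, undecoded subject DER bytes.
// Matching a client cert's issuer then becomes a hash lookup over
// byte slices, with no DN decoding or string formatting involved.
fn build_index<'a>(ca_subjects: &[&'a [u8]]) -> HashMap<&'a [u8], usize> {
    ca_subjects
        .iter()
        .enumerate()
        .map(|(i, subject)| (*subject, i))
        .collect()
}

fn main() {
    // Placeholder byte strings standing in for encoded Name structures.
    let ca_a: &[u8] = b"\x30\x0aCN=Root A";
    let ca_b: &[u8] = b"\x30\x0aCN=Root B";
    let index = build_index(&[ca_a, ca_b]);

    // A client certificate whose issuer bytes equal ca_b's subject.
    let client_issuer: &[u8] = b"\x30\x0aCN=Root B";
    assert_eq!(index.get(client_issuer), Some(&1));
}
```

Because DER is a canonical encoding, two equal Names have byte-identical encodings, which is what makes the raw-bytes comparison sound.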
Another dataset I benchmarked against is 9,898 certificates from the Common CA Database (CCADB), covering the full multi-level hierarchy used by Mozilla, Chrome, Apple, and Microsoft:
| Depth | Count | Description |
|---|---|---|
| 0 | 919 | Root CAs (self-signed) |
| 1 | 6,627 | Intermediates issued directly by roots |
| 2 | 2,212 | Two levels deep |
| 3 | 137 | Three levels deep |
| 4 | 3 | Four levels deep |
Intermediate CA certificates tend to have more complex DNs and more extensions than the root CAs in the Mozilla store. The CCADB median cert is “Bayerische SSL-CA-2014-01” (10,432 bytes). The CCADB certificates cover the past 30 years of certificate issuance on the internet.
To see how those benchmarks would behave if the CA root database were built with post-quantum cryptography, I rebuilt the CCADB corpus as ML-DSA certificates. Nine CCADB certificates were skipped: OpenSSL’s `x509 -x509toreq -copy_extensions copy` step failed to convert them to CSR form, typically because those certs use non-standard DER encodings or critical extensions that the x509toreq pipeline cannot copy into a PKCS#10 request. (The failures are in OpenSSL’s cert→CSR conversion; synta parses all 9,898 original CCADB certs without error.) This leaves 9,889 of the original 9,898 certs in the synthetic database.
The median cert by DER size is “TrustCor Basic Secure Site (CA1)” (6,705 bytes). ML-DSA certs range from 5,530 B to 16,866 B; the distribution is shifted left relative to the CCADB RSA/ECDSA median (10,432 B) because the smallest CCADB certs (compact root CAs with few extensions) become the new median position after ML-DSA key replacement enlarges all certs uniformly.
| Benchmark | Library | Dataset | Time | Throughput |
|---|---|---|---|---|
| `synta_parse_all` | synta | Mozilla (180 certs) | 87.8 µs | 2.0 M/sec |
| `nss_parse_all` | NSS | Mozilla (180 certs) | 1.577 ms | 114 K/sec |
| `openssl_parse_all` | rust-openssl | Mozilla (180 certs) | 3.552 ms | 50.7 K/sec |
| `ossl_parse_all` | ossl | Mozilla (180 certs) | 3.617 ms | 49.8 K/sec |
| `synta_parse_and_access` | synta | Mozilla (180 certs) | 261 µs | 690 K/sec |
| `synta_build_trust_chain` | synta | Mozilla (180 certs) | 11.6 µs | — |
| `synta_parse_all` | synta | CCADB (9,898 certs) | 5.10 ms | 1.94 M/sec |
| `nss_parse_all` | NSS | CCADB (9,898 certs) | 106 ms | 93 K/sec |
| `openssl_parse_all` | rust-openssl | CCADB (9,898 certs) | 203 ms | 48.8 K/sec |
| `ossl_parse_all` | ossl | CCADB (9,898 certs) | 214 ms | 46.3 K/sec |
| `synta_parse_and_access` | synta | CCADB (9,898 certs) | 16.1 ms | 615 K/sec |
| `synta_parse_roots` | synta | CCADB (919 roots) | 457.7 µs | 2.01 M/sec |
| `synta_parse_intermediates` | synta | CCADB (8,979 intermediates) | 4.735 ms | 1.90 M/sec |
| `synta_build_dependency_tree` | synta | CCADB (9,898 certs) | 559 µs | — |
| `synta_parse_all` | synta | ML-DSA synth (9,889 certs) | 5.78 ms | 1.71 M/sec |
| `nss_parse_all` | NSS | ML-DSA synth (9,889 certs) | 103 ms | 96.4 K/sec |
| `openssl_parse_all` | rust-openssl | ML-DSA synth (9,889 certs) | 239 ms | 41.4 K/sec |
| `ossl_parse_all` | ossl | ML-DSA synth (9,889 certs) | 256 ms | 38.6 K/sec |
| `synta_parse_and_access` | synta | ML-DSA synth (9,889 certs) | 17.5 ms | 566 K/sec |
| `synta_parse_roots` | synta | ML-DSA synth (919 roots) | 463 µs | 1.98 M/sec |
| `synta_parse_intermediates` | synta | ML-DSA synth (8,970 ints.) | 5.10 ms | 1.76 M/sec |
| `synta_build_dependency_tree` | synta | ML-DSA synth (9,889 certs) | 549 µs | — |
NSS is 18–21× slower than synta across all three datasets; rust-openssl is 40–41× slower and ossl is 41–44× slower. All three C-backed libraries successfully parse ML-DSA certificates (NSS 3.120+ and OpenSSL 3.4+ support ML-DSA natively). NSS’s absolute parse time is nearly identical across CCADB traditional certs (106 ms) and ML-DSA synthetic certs (103 ms) — confirming that NSS’s dominant cost is eager DN formatting at parse time, which depends on DN attribute count rather than the signature algorithm. The slightly lower relative slowdown for NSS on ML-DSA (18× vs 21×) is entirely because synta is slower on ML-DSA (5.78 ms vs 5.10 ms), not because NSS is faster.
synta’s throughput is consistent at ~1.7–2.0 M certs/sec across all three datasets, confirming linear O(n) scaling. Parse rate is slightly lower for the ML-DSA synthetic hierarchy (1.71 M/sec) than for the CCADB traditional hierarchy (1.94 M/sec) because the larger ML-DSA SubjectPublicKeyInfo and signature BIT STRING fields add bytes to the tag+length-header scan that synta performs at parse time. The intermediates-only sub-benchmark is slightly lower than roots-only in each dataset (1.76 M/sec vs 1.98 M/sec for ML-DSA; 1.90 M/sec vs 2.01 M/sec for CCADB) because intermediate CAs tend to have more complex DNs and extension lists.
Finally, individual property access for a pre-parsed certificate, single field read, no allocation unless noted:
| Field | Mozilla (1,070 B) | CCADB (10,432 B) | ML-DSA (6,705 B) | Notes |
|---|---|---|---|---|
| `issuer_raw` / `subject_raw` | 4.1 / 4.1 ns | 4.2 / 4.1 ns | 4.5 / 4.4 ns | Zero-copy slice |
| `public_key_bytes` / `signature_bytes` | 4.1 / 4.1 ns | 4.2 / 4.2 ns | 4.6 / 4.4 ns | Zero-copy slice |
| `signature_algorithm` / `public_key_algorithm` | 5.9 / 5.4 ns | 5.9 / 5.5 ns | 6.3 / 6.4 ns | OID → &'static str |
| `serial_number` | 10.9 ns | 6.8 ns | 7.5 ns | Integer → i64, length-dependent |
| `validity` | 180 ns | 206 ns | 231 ns | Two time-string allocations |
| `issuer_dn` | 401 ns | 224 ns | 246 ns | format_dn() → String |
| `subject_dn` | 404 ns | 292 ns | 324 ns | format_dn() → String |
Zero-copy fields (issuer_raw, subject_raw, public_key_bytes, signature_bytes) cost
~4–5 ns — the price of reading a pointer and length from a struct field. The slightly higher
cost for CCADB and ML-DSA fields vs Mozilla is within measurement noise.
identify_signature_algorithm() and identify_public_key_algorithm() match the OID
component array against a static table and return &'static str — no allocation, no string
formatting. The ~5–6 ns cost is a few comparisons and a pointer return.
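That OID-to-name step can be sketched as a match over the decoded component array. This is illustrative only; synta’s actual table is generated and far larger. The two OIDs shown are the standard sha256WithRSAEncryption and ecdsa-with-SHA256 identifiers:

```rust
// Map a decoded OID component array to a static algorithm name.
// No allocation, no formatting: the return value is a &'static str,
// so the cost is a few integer comparisons and a pointer return.
fn identify_signature_algorithm(oid: &[u32]) -> Option<&'static str> {
    match oid {
        [1, 2, 840, 113549, 1, 1, 11] => Some("sha256WithRSAEncryption"),
        [1, 2, 840, 10045, 4, 3, 2] => Some("ecdsa-with-SHA256"),
        _ => None,
    }
}

fn main() {
    assert_eq!(
        identify_signature_algorithm(&[1, 2, 840, 113549, 1, 1, 11]),
        Some("sha256WithRSAEncryption")
    );
    assert_eq!(identify_signature_algorithm(&[1, 2, 3]), None);
}
```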
serial_number cost depends on the integer’s byte length: the Entrust Mozilla cert carries
a 16-byte serial number (parsed via SmallVec<[u8; 16]>), while the CCADB and ML-DSA
synthetic medians have shorter serials. At 10.9, 6.8, and 7.5 ns respectively, all are
negligible.
validity (~180–231 ns) allocates two strings: UTCTime and GeneralizedTime are formatted
from their raw DER bytes into owned Strings. The two calls account for essentially all
of the cost; the YYMMDDHHMMSSZ to RFC 3339 formatting is the dominant work.
format_dn() is the most variable field: it walks the Name DER bytes, decodes each
SEQUENCE OF SET OF SEQUENCE, looks up each attribute OID by name, and formats the result
into an owned String. The Mozilla cert’s issuer DN is more complex (multiple attributes,
longer values: 401 ns) than the CCADB median (224 ns) or the ML-DSA synthetic median
(246 ns). The ML-DSA synthetic median’s subject DN (324 ns) is slightly more expensive
than the CCADB median (292 ns) because a different cert occupies the median position after
key replacement. format_dn() cost is proportional to the DN’s attribute count and string
lengths.
`CERT_NewTempCertificate` (NSS) and OpenSSL’s `d2i_X509` perform significantly more work per certificate than synta:

1. **Eager DN formatting** — NSS formats the issuer and subject Distinguished Names into internal C strings during `CERT_NewTempCertificate`, even when the caller never reads them. Distinguished Name formatting is the single most expensive operation in certificate parsing; doing it unconditionally at parse time accounts for roughly 80% of NSS’s total parse cost. OpenSSL decodes DN structure eagerly as well.
2. **Arena and heap allocation** — each NSS certificate allocates a PLArena block and copies the full DER buffer into it (`copyDER = 1`). OpenSSL allocates from the C heap. These allocations are additional work beyond decoding.
3. **Library state and locking** — NSS acquires internal locks on every `CERT_NewTempCertificate` call to update the certificate cache, even when the resulting certificate is marked as temporary. This serialises concurrent parsing in multi-threaded applications.
4. **FFI boundary costs** — the rust-openssl and ossl measurements include the overhead of crossing from Rust into the C library via `extern "C"` calls and pointer marshalling.

synta defers all of (1): issuer and subject are stored as `RawDer<'a>` (borrowed byte spans) and decoded only when the caller calls `format_dn()`. There is no locking, no arena, and no FFI boundary.
In these tests I also found out that PyCA’s cryptography-x509 doesn’t optimize repeated accesses to the same fields. That’s typically not a problem if you load a certificate and use it once; if you have to come back to it multiple times, the cost becomes visible and hurts your performance. So I submitted a pull request to apply some of the optimizations I found with synta. The pull request had to be split into smaller ones, and a few of them have already been merged, so the performance of accessing issuer, subject, and public key in certificates, and some attributes in CSRs, improved 100x. The rest wait for improvements in PyO3 to save some memory use.
When you’re looking at source code it can be helpful to have some evidence indicating who wrote it. Author tags give a surface level indication, but it turns out you can just lie and if someone isn’t paying attention when merging stuff there’s certainly a risk that a commit could be merged with an author field that doesn’t represent reality. Account compromise can make this even worse - a PR being opened by a compromised user is going to be hard to distinguish from the authentic user. In a world where supply chain security is an increasing concern, it’s easy to understand why people would want more evidence that code was actually written by the person it’s attributed to.
git has support for cryptographically signing commits and tags. Because git is about choice even if Linux isn’t, you can do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You’re probably going to be unsurprised about my feelings around OpenPGP and the web of trust, and X.509 certificates are an absolute nightmare. That leaves SSH keys, but bare cryptographic keys aren’t terribly helpful in isolation - you need some way to make a determination about which keys you trust. If you’re using something like GitHub you can extract that information from the set of keys associated with a user account1, but that means a compromised GitHub account is now also a way to alter the set of trusted keys. And when was the last time you audited your keys, and how certain are you that every trusted key there is still 100% under your control? Surely there’s a better way.
And, thankfully, there is. OpenSSH supports certificates: an SSH public key that’s been signed by some trusted party, so you can now assert that it’s trustworthy in some form. SSH certificates also contain metadata in the form of Principals, a list of identities that the trusted party included in the certificate. These might simply be usernames, but they might also provide information about group membership. There’s also, unsurprisingly, native support in SSH for forwarding them (using the agent forwarding protocol), so you can keep your keys on your local system, ssh into your actual dev system, and have access to them without any additional complexity.
And, wonderfully, you can use them in git! Let’s find out how.
There’s two main parameters you need to set. First,
```
git config --global gpg.format ssh
```
because unfortunately for historical reasons all the git signing config is
under the gpg namespace even if you’re not using OpenPGP. Yes, this makes
me sad. But you’re also going to need something else. Either
user.signingkey needs to be set to the path of your certificate, or you
need to set gpg.ssh.defaultKeyCommand to a command that will talk to an
SSH agent and find the certificate for you (this can be helpful if it’s
stored on a smartcard or something rather than on disk). Thankfully for you,
I’ve written one. It will
talk to an SSH agent (either whatever’s pointed at by the SSH_AUTH_SOCK
environment variable or with the -agent argument), find a certificate
signed with the key provided with the -ca argument, and then pass that
back to git. Now you can simply pass -S to git commit and various other
commands, and you’ll have a signature.
This is a bit more annoying. Using native git tooling ends up calling out to
ssh-keygen2, which validates signatures against a file in a format
that looks somewhat like authorized_keys. This lets you add something like:

* cert-authority ssh-ed25519 AAAA...

which will match all principals (the wildcard) and succeed if the signature is made with a certificate that’s signed by the key following cert-authority. I recommend you don’t read the code that does this in git because I made that mistake myself, but it does work. Unfortunately it doesn’t provide a lot of granularity around things like “Does the certificate need to be valid at this specific time” and “Should the user only be able to modify specific files” and that kind of thing, but also if you’re using GitHub or GitLab you wouldn’t need to do this at all because they’ll just do this magically and put a “verified” tag against anything with a valid signature, right?
Haha. No.
Unfortunately while both GitHub and GitLab support using SSH certificates
for authentication (so a user can’t push to a repo unless they have a
certificate signed by the configured CA), there’s currently no way to say
“Trust all commits with an SSH certificate signed by this CA”. I am unclear
on why. So, I wrote my
own. It takes a range of
commits, and verifies that each one is signed with either a certificate
signed by the key in CA_PUB_KEY or (optionally) an OpenPGP key provided in
ALLOWED_PGP_KEYS. Why OpenPGP? Because even if you sign all of your own
commits with an SSH certificate, anyone using the API or web interface will
end up with their commits signed by an OpenPGP key, and if you want to have
those commits validate you’ll need to handle that.
In any case, this should be easy enough to integrate into whatever CI pipeline you have. This is currently very much a proof of concept and I wouldn’t recommend deploying it anywhere, but I am interested in merging support for additional policy around things like expiry dates or group membership.
Of course, certificates don’t buy you any additional security if an attacker is able to steal your private key material - they can steal the certificate at the same time. This can be avoided on almost all modern hardware by storing the private key in a separate cryptographic coprocessor - a Trusted Platform Module on PCs, or the Secure Enclave on Macs. If you’re on a Mac then Secretive has been around for some time, but things are a little harder on Windows and Linux - there’s various things you can do with PKCS#11 but you’ll hate yourself even more than you’ll hate me for suggesting it in the first place, and there’s ssh-tpm-agent, except it’s quite tied to Linux.
So, obviously, I wrote my own. This makes use of the go-attestation library my team at Google wrote, and is able to generate TPM-backed keys and export them over the SSH agent protocol. It’s also able to proxy requests back to an existing agent, so you can just have it take care of your TPM-backed keys and continue using your existing agent for everything else. In theory it should also work on Windows[3] but this is all in preparation for a talk I only found out I was giving about two weeks beforehand, so I haven’t actually had time to test anything other than that it builds.
And, delightfully, because the agent protocol doesn’t care about where the keys are actually stored, this still works just fine with forwarding - you can ssh into a remote system and sign something using a private key that’s stored in your local TPM or Secure Enclave. Remote use can be as transparent as local use.
Ah yes, you may be wondering why I’m using go-attestation and why the term “attestation” is in my agent’s name. It’s because when I’m generating the key I’m also generating all the artifacts required to prove that the key was generated on a particular TPM. I haven’t actually implemented the other end of that yet, but if implemented, this would allow you to verify that a key was generated in hardware before you issue it with an SSH certificate - and in an age of agentic bots accidentally exfiltrating whatever they find on disk, that gives you a lot more confidence that a commit was signed on hardware you own.
Using SSH certificates for git commit signing is great - the tooling is a bit rough but otherwise they’re basically better than every other alternative, and also if you already have infrastructure for issuing SSH certificates then you can just reuse it[4] and everyone wins.
[1] Did you know you can just download people’s SSH pubkeys from GitHub at https://github.com/<username>.keys? Now you do.
[2] Yes, it is somewhat confusing that the keygen command does things other than generate keys.
[3] This is more difficult than it sounds.
[4] And if you don’t, by implementing this you now have infrastructure for issuing SSH certificates and can use that for SSH authentication as well.
Things are just flying by and it seems to be Saturday again, so here's another weekly recap.
Most of my week was consumed with work on our secure boot signing infrastructure. The old setup was using smart cards in specific builders. This had a lot of disadvantages, including:
Space on the smart cards was pretty much full, preventing us from adding more certs.
Those machines were 'special', and if they went down or broke, things would be bad.
The smart cards in them are no longer made or supported, so we couldn't get more to add more builders.
So, thanks to a bunch of work from Jeremy Cline we finally have things moved over to the new setup. This setup is:
Using our normal signing infrastructure (sigul, soon to be replaced by a rust re-write). We can easily decide in config which machines are used.
Using new hardware on the vault end that has more space for more certs.
Allows us to easily add an aarch64 path to sign there.
The signed aarch64 grub2 build is in rawhide now, but for whatever reason it's not working on my slim7x. It is, however, working in VMs, on cloud providers, and on other hardware, so I suspect it might just be a problem with this laptop. It also doesn't work with my Radxa Orion O6, but again, something else could be going on there. I think it's at least good enough to get more widespread testing.
We should hopefully have a signed kernel next week, but in the meantime, if you have an arm device that supports Secure Boot, you can update to the latest grub2 and give it a try.
We seem to have dropped the ball on f44/f45 openh264 builds. :(
So, I looked at doing some of those this week. I ran into a linker issue on the i686 builds, but managed to work around that and get builds done.
Now we just need to wait for cisco to publish them. I am hoping this process will go much quicker than it has in the past, since we have a better way to upload things for them now.
Time will tell.
I moved all our openshift clusters to 4.21.5 this week (from 4.20.15).
I really love how easy openshift upgrades are. Press button and wait, usually. I did have to upgrade to the latest 4.20 first before it would let me move to 4.21, but both steps went fine.
Next week we will be catching up on updates all around and rebooting things. The week after we start Fedora 44 Final freeze so we want to have things all updated before that. No special stuff this time, just updates/reboots so I expect it to go smoothly.
As always, comment on mastodon: https://fosstodon.org/@nirik/116268414239551452
The Fedora CoreOS and QA teams are gearing up for Fedora 44, and we need your help! We are organizing a Test Week running from March 23 to March 27, 2026.
This event is a nice opportunity for the community to test Fedora CoreOS (FCOS) based on Fedora 44 content before it officially reaches the testing and stable streams. By participating, you help us ensure a smooth and reliable experience for all users.
How does a Test Week work?
A Test Week is an event where anyone can help verify that the upcoming release works as expected. If you’ve been looking for a way to get started with Fedora contribution, this is the perfect entry point.
To participate, you simply need to:
The Wiki Page is your primary source of information for this event. Once you have completed your tests, please log your results here! Your contribution, big or small, makes a huge difference. Let’s work together to make this release a great one. Happy testing!
Join the Live Sync Session
Want to chat with the team? We are hosting a live virtual session on Tuesday, March 24, from 3:00 PM – 4:30 PM UTC. Drop in to ask questions and get help with testing!
Video Meeting: meet.google.com/ufp-bwsb-zwh
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora project.
Week: 16 – 20 March 2026
This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker
This is the summary of the work done regarding the RISC-V architecture in Fedora.
This is the summary of the work done regarding AI in Fedora.
This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.
This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.
This team is working on keeping Epel running and helping package things.
This team is working on improving user experience: providing artwork, user experience,
usability, and general design services to the Fedora project.
If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.
The post Community Update – Week 12 2026 appeared first on Fedora Community Blog.
Thanks for stopping by the Fedora booth at Chemnitzer Linux-Tage
The podcast about culture is great, so is the one with the creator of Kotlin.
Tanith’s techno set is pretty great too.
The more you talk about culture, the less people believe you [Podcast] - I have seen some of the stuff they mention.
My preferred product management techniques and frameworks - “Talk to everyone”
Last week, I wrote about my initial FreeBSD experiences on my new toy, an AI workstation from HP. FreeBSD runs lightning fast on it, but the desktop was somewhat problematic. Well, I made lots of improvements this week!
While there are still some rough edges, there have been tons of improvements since last week. I do not have plans to use FreeBSD on the desktop in the long term, but still, I just could not believe that the FreeBSD GUI is this problematic on this device. I did some experimentation though and it helped a lot… :-)
The initial problem I realized while browsing the output of dmesg was that desktop-installer enabled the wrong kernel modules repository for me. The line leading there was this:
KLD amdgpu.ko: depends on kernel - not available or version mismatch
The next problem occurred when I fixed this problem: there was a kernel panic on boot, when amdgpu.ko was loaded.
I did a fresh FreeBSD install and instead of using the latest packages, I decided to go with the quarterly packages. This way, the desktop installer configured the right kmod repo – however, loading amdgpu.ko still caused a kernel panic. Another experiment I made was using the ATI driver instead of AMD. The installer says that AMD is for modern cards, and ATI is for older ones. Well, as it turned out, even if the chip is barely half a year old, it counts as “old”… :-)
I am still not convinced that proper hardware-based acceleration works: both X.org logs and the GNOME “About” page showed software rendering. However, I had no problem with graphics performance: TuxRacer worked perfectly well… :-) And the GNOME desktop also worked nicely and was stable, including video playback. The only pain point when using GNOME was that screen locking still did not work.
Even if it’s just software rendering, the graphics problem seems to be resolved. However, the screen locking problem still bothered me, as I’m an IT security guy with a healthy dose of paranoia (which means that I lock my screen even when I’m home alone… :-)).
So even if I haven’t tried KDE for the past 5+ years, I gave it a try now. After so many years on XFCE and GNOME, the interface looks a bit weird. However, everything I tried on it seems to work just fine, including screen locking.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.
I often hear, even at security conferences, that “no central log collection here” or “we have something due to compliance”. Central logging is more than just compliance. It makes logs easier to use, more available, and more secure, thus making your life easier in operations, security, and development, but also in marketing, sales, and so on.
Most operating systems and applications keep track of what they are doing. They write log messages. A syslog message might look like this:
Mar 16 13:13:49 cent sshd[543817]: Accepted publickey for toor from 192.168.97.14 port 58246 ssh2: RSA SHA256:GeGHdsl1IZrnTniKUxxxX4NpP8Q
Applications might store their logs separately and have their own log format, like this Apache access log:
192.168.0.164 - - [16/Mar/2026:13:17:01 +0100] "HEAD /other/syslog-ng-insider-2026-03-4110-release-opensearch-elasticsearch/ HTTP/1.1" 200 3764 "-" "SkytabBot/1.0 (URL Resolution)"
Central log collection simply means that log messages are collected to a central location instead of, or in addition to, being saved locally.
In this blog we take a look at what ease of use, availability, and security of central log collection mean for you.
If you have a single computer in your organization, finding a log message about an event on that computer takes some time. Once you have 2 computers, you have to check 2 computers to find that event. It might take twice as much time, but it is still easier than implementing central log collection. Not to mention, which one is the central computer. :-)
Once you have a network of 10 computers, logging in to each of them to find a log message about an event becomes a huge overhead. It is still doable, but even in the short term, implementing central log collection is a lot easier than looking at the logs on the machines where they were created.
On a network of 100 computers, it is practically impossible for security or operations to find relevant logs, unless logs are collected centrally.
Collecting logs centrally means that log messages are available even when the sending machine is down. If you want to know what happened, you do not have to get the machine up and running again, but you can check the logs at the central location. If you see signs of a hardware failure, you can go with a spare part immediately, reducing the time and effort needed to repair the machines.
When a computer is compromised, log messages are often altered or deleted completely. However, this tactic only works with logs stored locally. Collecting logs at a central location allows you to use the unmodified logs and to figure out how the compromise happened.
It is time to introduce central logging to your organization if you have not done it yet. Of course I am a bit biased, but syslog-ng is the perfect tool to do so. You can get started by reading / watching the syslog-ng tutorial on https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/.
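As a taste of what central collection can look like with syslog-ng, here is a minimal configuration sketch. The host name, file paths, and the local source name s_sys are placeholders for your own setup, not part of the original post:

```
# server (central host): accept logs over TCP and write one file per sender
source s_net { network(transport("tcp") port(514)); };
destination d_central { file("/var/log/central/${HOST}.log"); };
log { source(s_net); destination(d_central); };

# client: forward everything from the local system source to the server
destination d_remote { network("logserver.example.com" transport("tcp") port(514)); };
log { source(s_sys); destination(d_remote); };
```

In production you would typically add TLS transport and disk buffering, but this is enough to see logs from all machines land in one place.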

Originally published at https://www.syslog-ng.com/community/b/blog/posts/central-log-collection—more-than-just-compliance
We are happy to announce the general availability of Fedora Asahi Remix 43. This release brings Fedora Linux 43 to Apple Silicon Macs.
Fedora Asahi Remix is developed in close collaboration with the Fedora Asahi SIG and the Asahi Linux project. This release incorporates all the exciting improvements brought by Fedora Linux 43. Notably, package management is significantly upgraded with RPM 6.0 and, ahead of Fedora Linux 44, the new DNF5 backend for PackageKit, used by Plasma Discover and GNOME Software. It also continues to provide extensive device support. This includes newly added support for the Mac Pro, microphones in M2 Pro/Max MacBooks, and 120Hz refresh rate for the built-in displays of MacBook Pro 14/16 models.
Fedora Asahi Remix offers KDE Plasma 6.6 as our flagship desktop experience. It contains all of the new and exciting features brought by Fedora KDE Plasma Desktop 43. It also features a custom Calamares-based initial setup wizard. A GNOME variant is also available, featuring GNOME 49, with both desktop variants matching what Fedora Linux offers. Fedora Asahi Remix also provides a Fedora Server variant for server workloads and other types of headless deployments. Finally, we offer a Minimal image for users that wish to build their own experience from the ground up.
You can install Fedora Asahi Remix today by following our installation guide. Existing systems running Fedora Asahi Remix 41 or 42 should be updated following the usual Fedora upgrade process. Upgrades via GNOME’s Software application are unfortunately not supported. Either KDE’s Plasma Discover or DNF’s System Upgrade command must be used.
Please report any Remix-specific issues in our tracker, or reach out in our Discourse forum or our Matrix room for user support.
Last month, I wrote about how to define, build, and measure trust in your community. Here’s the challenge: you need to extend trust in order for someone to build trust. I touched on this in 2023 after an Ubuntu release included hate speech in translations. It came back to the fore earlier this month after an AI agent attacked a handful of high-profile GitHub repositories.
The agent took advantage of workflows that allowed an attacker to run malicious code via a variety of mechanisms, including the branch name. The attacking agent only needed to open a pull request to cause damage. Normally, tests run by CI infrastructure are a way to evaluate the trustworthiness of a pull request. Most pull requests, of course, are not malicious, but that doesn’t make them trustworthy. A change that fails linting, unit tests, or integration tests may not be worth a maintainer’s time to review.
So if automated CI tests are both a way to measure trust and a vector for attack, what’s the responsible maintainer to do?
The first step is to make sure your CI jobs are securely configured. Tools like zizmor can identify insecure configurations. You may also want to require that a maintainer manually approve workflows before running against pull requests from untrusted sources. This, of course, puts you into a position where you now have to at least give a cursory review to make sure the change is safe enough for your CI workflow. But that’s less work than a detailed review.
With the rise in AI-generated pull requests, this is a problem that will only add more toil for maintainers. Hopefully, platforms will provide tools that reduce the burden.
This post’s featured photo by 愚木混株 Yumu on Unsplash.
The post A trust paradox appeared first on Duck Alignment Academy.
RPMs of Valkey version 9.1 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
⚠️ Warning: this is a pre-release version not ready for production usage.
Packages are available in the valkey:remi-9.1 module stream.
# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to valkey:remi-9.1/common
# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset valkey
# dnf module enable valkey:remi-9.1
# dnf install valkey
The valkey-compat-redis compatibility package is not available in this stream. If you need the Redis commands, you can install the redis package.
Some optional modules are also available:
These packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The modules are automatically loaded after installation and a service (re)start.
Valkey also provides a set of modules, which may be submitted for the Fedora official repository.
ℹ️ Notices:
valkey
In the past few weeks, I installed five different operating systems on my latest toy: an AI workstation from HP. I love playing with OSes, but my main goal with the new machine is to learn various aspects of AI. I took my first steps in this adventure on Windows.
Of course, you might ask: why on Windows? Well, it’s easy: because it’s easy… :-) There is nothing to install or configure there, as Windows has multiple built-in apps that support AI and can utilize the NPU (hardware-accelerated AI) support of the AMD Ryzen 395 chip. The Windows Task Manager application already has a graph for NPU utilization and I found good use of it already: the Task Manager was open on the NPU tab while I conducted my experiments, allowing me to verify that it was indeed in use, as I ran a couple of generative AI tasks.
I find it somewhat surprising, but Paint now also comes with integrated AI support. Of course, you must pay for image generation but generating simple graphics (or “stickers”, as they are called in Paint) is free. If the prompt field is empty, its prompt text suggests “a cat wearing sunglasses”, so that’s what I generated. It has no practical use for me, but looks fun regardless… :-) You can see an example in the first figure.

It’s only slightly crazier that the simplest Windows app, Notepad, now also has AI features. I asked Notepad to write me a three-paragraph-long blog about writing a blog with AI in a boring, formal style. It completed the task successfully: the result is pretty boring and full of bullshit. Here it is:
The integration of artificial intelligence into the realm of content creation, particularly in the sphere of blog writing, presents a multitude of advantages that merit serious consideration. Firstly, AI-driven tools offer an unprecedented level of efficiency, allowing content creators to produce substantial volumes of text without the need for exhaustive manual input. This technological boon is particularly beneficial for those with time constraints or heavy workloads, as it liberates them from the mundane task of typing and editing.
Moreover, the precision and consistency of AI in language generation cannot be understated. By leveraging advanced algorithms, AI systems can maintain a uniform tone and style throughout a piece, which is a common challenge for human writers. This consistency ensures that the content is cohesive and that the voice of the piece remains unaltered, thereby enhancing the overall reading experience.
Lastly, the ability of AI to learn from vast datasets and adapt to various writing styles is a significant asset. It empowers writers to explore new topics and styles with confidence, knowing that the AI can provide a solid foundation upon which to build. This adaptability not only streamlines the writing process but also encourages creativity and innovation in content creation.

I also wanted to try a controversial Windows feature: Recall. Well, it does not work. When I started it, I got a nice error message stating that it needs Secure Boot. Linux requires it to be turned off, so I cannot test it now. But I must admit that I do not mind that… :-)

If everything goes well, I’ll make my first steps next week to enable hardware-accelerated AI under Linux.
This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.
So where have the last six months gone? I was planning on getting images done for Fedora 44 Beta, but I was unwell and busy and ran out of time. So what better time to get them out than Pi Day!
So compared to the last image, what do we have now? Quite a lot more, and I have more in the pipeline which should be in place before freeze, plus a possible secret. I just wanted to get something out sooner rather than later for people to play with. So the things that are working and tested are now:
Overall the devices are quite usable, but I will be working to improve them even more in the coming days.
The things that don’t work, but I’m hoping will be working RSN (pre 44) in no particular order:
One thing you do currently need to do manually once you’ve created an image is to add the following to the kernel command line (use the --args option to arm-image-installer): cma=256M@0M-1024M. Without that, accelerated graphics and some other things just won’t work. Once you’re booted, add it to /etc/kernel/cmdline so new kernels will get it too. I’ll hopefully have that issue fixed shortly; I know the problem, I just still haven’t got the best solution!
You’ll also want to disable auto-suspend on the Desktop images.
So where can I get these images? Right here:
The Fedora 44 Minimal Image
The Fedora 44 KDE Image
The Fedora 44 GNOME Workstation Image
Happy Pi Day everyone!
Another Saturday, another weekly recap.
Monday and Tuesday were all about the Fedora 44 Beta release. Things went mostly smoothly, aside from the magazine article publishing early, so some outlets announced the release before the website was updated, which caused a bit of confusion.
Hopefully everyone is trying out 44 Beta and reporting bugs and issues so we can have a good final release.
We were in infra freeze around the Beta release, so a bunch of pull requests and changes piled up waiting for that to end. With the beta out the door, we unfroze and I spent time this week (along with others) pushing out many of those changes. A short / incomplete list:
Merged pr for pkgs to perhaps fix sporadic core dumps ( https://forge.fedoraproject.org/infra/tickets/issues/12670 )
Merged pr to attempt to fix koji 502's ( https://forge.fedoraproject.org/infra/ansible/pulls/3173 )
Merged pr to fix a bunch of pagure/forge move links (mostly in comments) ( https://forge.fedoraproject.org/infra/ansible/pulls/3174 )
Merged pr to move fedoraloveskde from pagure to forge ( https://forge.fedoraproject.org/infra/ansible/pulls/3183 )
Created a pr to update our security.txt file ( https://forge.fedoraproject.org/infra/ansible/pulls/3210 )
Merged openshift-readonly pr ( https://forge.fedoraproject.org/infra/ansible/pulls/3188 )
new pr to drop haproxy for src.fp.o ( https://forge.fedoraproject.org/infra/ansible/pulls/3211 )
A pull request moving us to using lmdb instead of hash for postfix configuration (rhel10 drops bdb): ( https://forge.fedoraproject.org/infra/ansible/pulls/3120 )
and more. We got a lot moved forward, and there were a number of pull requests from new folks or folks who don't normally submit them, and that's been great to see!
Thursday morning we had an outage of the kojipkgs servers. It all happened before I was awake, but I think I have a good idea of what happened:
Someone/scrapers/whoever requested some urls under our ostree tree via our cloudfront distribution.
These were for objects directories (the directories themselves)
These directories have around 32k object files in them.
So, dutifully, apache generated a pretty index of them for the client.
This required each request to stat all 32k files in order to display them in a index.
This took... minutes for each request
Requests filled up the request queue
haproxy then marked the backends as down
clients started getting 503's
I have now forbidden directory indexes on these directories, so hopefully that will prevent this from happening again.
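In Apache, forbidding auto-generated directory listings can be a one-line directive; the directory path here is a hypothetical placeholder, not the actual kojipkgs layout:

```
<Directory "/srv/kojipkgs/ostree">
    Options -Indexes
</Directory>
```

With -Indexes set, a request for a directory without an index file returns a 403 instead of making Apache stat tens of thousands of files to render a listing.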
Lest we forget that they are still around, scrapers made their presence known again toward the end of the week. Two things they were doing:
They started hitting our hotspot.txt file over and over. This is a small static file containing just "OK" that is used to detect if you are behind a captive portal or not. It's hard to imagine that they get any extracted value from their scraping when they are this mind-numbingly bad at writing a distributed crawler. I guess they make up for it by just having way more clients rather than bothering to be efficient at all. This one is particularly annoying because we don't want to put it behind anubis or block it, or it will break its entire function.
They started hitting koji's 'search' endpoint with pretty exacting queries. These caused database load to go through the roof and caused the application to stop responding. I disabled search on Friday, and just re-enabled it. I hope they have moved on to /dev/null now.
As always, comment on mastodon: https://fosstodon.org/@nirik/116228691881195787
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora project.
Week: 09 – 13 March 2026
This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

This is the summary of the work done regarding the RISC-V architecture in Fedora.
This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.
This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.
This team is working on keeping Epel running and helping package things.
This team is working on improving user experience: providing artwork, user experience,
usability, and general design services to the Fedora project.
If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.
The post Community Update – Week 11 appeared first on Fedora Community Blog.
Writing a real-time audio plugin on Linux often conjures up images of a complex environment: C++, toolchains, CMake, CLAP / VST3 / LV2 SDK, ABI…
However, there is a much simpler approach: JSFX
This article offers a practical introduction to JSFX and YSFX on Fedora Linux: we’ll write some small examples, add a graphical VU meter, and then see how to use it as a CLAP / VST3 plugin in a native Linux workflow.
JSFX (JesuSonic Effects – created by REAPER [7]) allows you to write audio plugins in just a few lines, without compilation, with instant reloading and live editing.
Long associated with REAPER, they are now natively usable on Linux, thanks to YSFX [3], available on Fedora Linux in CLAP and VST3 formats via the Audinux repository ([4], [5]).
This means it’s possible to write a functional audio effect in ten lines, then immediately load it into Carla [8], Ardour [9], or any other compatible host, all within a PipeWire / JACK [11] environment.
A quote from [1] (check the [1] link for images):
In 2004, before we started developing REAPER, we created software designed for creating and modifying FX live, primarily for use with guitar processing.
The plan was that it could run on a minimal Linux distribution on dedicated hardware, for stage use. We built a couple of prototypes.
These hand-built prototypes used mini-ITX mainboards with either Via or Intel P-M CPUs, cheap consumer USB audio devices, and Atmel AVR microcontrollers via RS-232 for the footboard controls.
The cost for the parts used was around $600 each.
In the end, however, we concluded that we preferred to be in the software business, not the hardware business, and our research into adding multi-track capabilities in JSFX led us to develop REAPER. Since then, REAPER has integrated much of JSFX’s functionality, and improved on it.
So, as you can see, this technology is not that new. But the Linux support via YSFX [3] is rather new (Nov 2021, started by Jean-Pierre Cimalando).
A new programming language, but for what? What would one use JSFX for?
This language is dedicated to audio and with it, you can write audio effects like an amplifier, a chorus, a delay, a compressor, or you can write synthesizers.
JSFX is good for rapid prototyping and, once everything is in place, you can then rewrite your project into a more efficient language like C, C++, or Rust.
Developing an audio plugin on Linux often involves a substantial technical environment. This complexity can be a hindrance when trying out an idea quickly.
JSFX (JesuSonic Effects) offers a different approach: writing audio effects in just a few lines of interpreted code, without compilation and with instant reloading.
Thanks to YSFX, available on Fedora Linux in CLAP and VST3 formats, these scripts can be used as true plugins within the Linux audio ecosystem.
This article will explore how to write a minimal amplifier in JSFX, add a graphical VU meter, and then load it into Carla as a CLAP / VST3 plugin.
The goal is simple: to demonstrate that it is possible to prototype real-time audio processing on Fedora Linux in just a few minutes.
No compilation environment is required: a text editor is all you need.
On Fedora Linux, YSFX comes in three flavours:
YSFX is available in the Audinux [5] repository. So, first, enable the Audinux repository:
$ dnf copr enable ycollet/audinux
Then, you can install the version you want:
$ dnf install ysfx
$ dnf install vst3-ysfx
$ dnf install clap-ysfx
Here is a screenshot of YSFX as a VST3 plugin loaded in Carla Rack [8]:

You can:
Here is a screenshot of the Edit window:

The Variables column displays all the variables defined by the loaded file.
We will use the JSFX documentation available at [4].
JSFX code is always divided into sections.
In this example, we will use a slider value to amplify the audio input.
desc:Simple Amplifier
slider1:1<0,4,0.01>Gain
@init
gain = slider1;
@slider
gain = slider1;
@sample
spl0 *= gain;
spl1 *= gain;
slider1, @init, @slider, @sample, spl0, and spl1 are JSFX keywords [1].
Description:
Here is a view of the result:

This example will create a slider that will produce a gain in dB.
desc:Simple Amplifier (dB)
slider1:0<-60,24,0.1>Gain (dB)
@init
gain = 10^(slider1/20);
@slider
gain = 10^(slider1/20);
@sample
spl0 *= gain;
spl1 *= gain;
Only the way we compute the gain changes.
Here is a view of the result:

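The dB-to-linear conversion used by this slider is worth pinning down. Here is a small Python sketch (not part of the JSFX script) of the same formula, gain = 10^(dB/20):

```python
def db_to_gain(db: float) -> float:
    """Convert a gain in decibels to a linear amplitude factor,
    mirroring the JSFX expression gain = 10^(slider1/20)."""
    return 10 ** (db / 20)

# 0 dB leaves the amplitude untouched, +6 dB roughly doubles it,
# -20 dB divides it by 10.
print(db_to_gain(0))    # 1.0
print(db_to_gain(-20))  # 0.1
```

This is why the slider range -60..+24 covers everything from near-silence to about a 16x boost.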
This example adds protection against clipping and uses a JSFX function for that.
desc:Simple Amplifier with Soft Clip
slider1:0<-60,24,0.1>Gain (dB)
@init
gain = 10^(slider1/20);
function softclip(x) (
  x / (1 + abs(x));
);
@slider
gain = 10^(slider1/20);
@sample
spl0 = softclip(spl0 * gain);
spl1 = softclip(spl1 * gain);
Here is a view of the result:

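The soft-clip curve x / (1 + |x|) is easy to check numerically. Here is a short Python sketch (outside JSFX) showing that small signals pass nearly unchanged while large ones are squashed into (-1, 1):

```python
def softclip(x: float) -> float:
    """Soft clipper from the JSFX example: x / (1 + |x|).
    The output always stays strictly inside (-1, 1)."""
    return x / (1 + abs(x))

# Near the origin the curve is almost linear; far from it, it saturates.
for x in (0.1, 1.0, 10.0, 1000.0):
    print(x, round(softclip(x), 4))
```

Because the curve is smooth, hard digital clipping (and its harsh aliasing) is avoided even at extreme gain settings.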
This example is the same as the one above; we just add a graphical VU meter that displays the measured level.
desc:Simple Amplifier with VU Meter
slider1:0<-60,24,0.1>Gain (dB)
@init
rms = 0;
coeff = 0.999; // RMS smoothing
gain = 10^(slider1/20);
@slider
gain = 10^(slider1/20);
@sample
// Apply the gain
spl0 *= gain;
spl1 *= gain;
// Compute RMS (mean value of the 2 channels)
mono = 0.5*(spl0 + spl1);
rms = sqrt((coeff * rms * rms) + ((1 - coeff) * mono * mono));
@gfx 300 200 // UI part
gfx_r = 0.1; gfx_g = 0.1; gfx_b = 0.1;
gfx_rect(0, 0, gfx_w, gfx_h);
// Convert to dB
rms_db = 20*log(rms)/log(10);
rms_db < -60 ? rms_db = -60;
// Normalisation for the display
meter = (rms_db + 60) / 60;
meter > 1 ? meter = 1;
// Green color
gfx_r = 0;
gfx_g = 1;
gfx_b = 0;
// Horizontal bar
gfx_rect(10, gfx_h/2 - 10, meter*(gfx_w-20), 20);
// Text
gfx_r = gfx_g = gfx_b = 1;
gfx_x = 10;
gfx_y = gfx_h/2 + 20;
gfx_printf("Level: %.1f dB", rms_db);
The global structure of the code:
Here is a view of the result:

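The heart of the VU meter is the one-pole RMS follower updated once per sample, plus the conversion to dB with a -60 dB display floor. A Python sketch (hypothetical helper names, same math as the @sample and @gfx sections):

```python
import math

def smoothed_rms(samples, coeff=0.999):
    """One-pole RMS follower from the @sample section:
    rms = sqrt(coeff*rms^2 + (1-coeff)*x^2), applied per sample."""
    rms = 0.0
    for x in samples:
        rms = math.sqrt(coeff * rms * rms + (1 - coeff) * x * x)
    return rms

def to_db(rms, floor=-60.0):
    """dB conversion used by the display, clamped at the -60 dB floor."""
    if rms <= 0:
        return floor
    return max(floor, 20 * math.log10(rms))

# A long constant signal of amplitude 0.5 converges to rms ~ 0.5 (~ -6 dB).
level = smoothed_rms([0.5] * 20000)
print(round(level, 3), round(to_db(level), 1))
```

With coeff = 0.999 the follower has a time constant of roughly 1000 samples, so at 48 kHz the meter settles in a few hundredths of a second: responsive, but without flickering on every cycle of the waveform.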
In this example, we will use a JSFX UI library to produce a better representation of the amplifier’s elements.
First, clone the https://github.com/geraintluff/jsfx-ui-lib repository and copy the file ui-lib.jsfx-inc into the directory where your JSFX files are saved.
desc:Simple Amplifier with UI Lib VU
import ui-lib.jsfx-inc
slider1:0<-60,24,0.1>Gain (dB)
@init
freemem = ui_setup(0);
rms = 0;
coeff = 0.999;
gfx_rate = 30; // 30 FPS
@slider
gain = 10^(slider1/20);
@sample
spl0 *= gain;
spl1 *= gain;
mono = 0.5*(spl0 + spl1);
rms = sqrt(coeff*rms*rms + (1-coeff)*mono*mono);
// ---- RMS computation ----
level_db = 20*log(rms)/log(10);
level_db < -60 ? level_db = -60;
@gfx 300 200
ui_start("main");
// ---- Gain ----
control_start("main","default");
control_dial(slider1, 0, 1, 0);
cut = (level_db + 100) / 200 * (ui_right() - ui_left()) + ui_left();
// ---- VU ----
ui_split_bottom(50);
ui_color(0, 0, 0);
ui_text("RMS Level: ");
gfx_printf("%d", level_db);
ui_split_bottom(10);
uix_setgfxcolorrgba(0, 255, 0, 1);
gfx_rect(ui_left(), ui_top(), ui_right() - ui_left(), ui_bottom() - ui_top());
uix_setgfxcolorrgba(255, 0, 0, 1);
gfx_rect(ui_left(), ui_top(), cut, ui_bottom() - ui_top());
ui_pop();
The global structure of the example:
Here is a view of the result:

Now, let's produce some sound, using MIDI as the input.
The core of this example will be the ADSR envelope generator ([10]).
desc:Simple MIDI Synth (Mono Sine)
// Parameters
slider1:0.01<0.001,2,0.001>Attack (s)
slider2:0.2<0.001,2,0.001>Decay (s)
slider3:0.8<0,1,0.01>Sustain
slider4:0.5<0.001,3,0.001>Release (s)
slider5:0.5<0,1,0.01>Volume
@init
phase = 0;
note_on = 0;
env = 0;
state = 0; // 0=idle,1=attack,2=decay,3=sustain,4=release
@slider
// Compute the increment / decrement for each state
attack_inc = 1/(slider1*srate);
decay_dec = (1-slider3)/(slider2*srate);
release_dec = slider3/(slider4*srate);
@block
while (
midirecv(offset, msg1, msg23) ? (
status = msg1 & 240;
note = msg23 & 127;
vel = (msg23/256)|0;
// Note On
status == 144 && vel > 0 ? (
freq = 440 * 2^((note-69)/12);
phase_inc = 2*$pi*freq/srate;
note_on = 1;
state = 1;
);
// Note Off
(status == 128) || (status == 144 && vel == 0) ? (
state = 4;
);
);
);
@sample
// ADSR Envelope [10]
state == 1 ? ( // Attack
env += attack_inc;
env >= 1 ? (
env = 1;
state = 2;
);
);
state == 2 ? ( // Decay
env -= decay_dec;
env <= slider3 ? (
env = slider3;
state = 3;
);
);
state == 3 ? ( // Sustain
env = slider3;
);
state == 4 ? ( // Release
env -= release_dec;
env <= 0 ? (
env = 0;
state = 0;
);
);
// Sine oscillator
sample = sin(phase) * env * slider5;
phase += phase_inc;
phase > 2*$pi ? phase -= 2*$pi;
// Stereo output
spl0 = sample;
spl1 = sample;
Global structure of the example:
Here is a view of the result:

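Two details of the @block section deserve a closer look: how the packed values returned by midirecv() are unpacked, and how a note number becomes a frequency. A Python sketch of the same bit arithmetic (helper names are mine, not JSFX):

```python
def parse_midi(msg1: int, msg23: int):
    """Unpack the values midirecv() delivers, as in the @block section:
    status nibble, note number (low byte), velocity (high byte)."""
    status = msg1 & 0xF0        # 0x90 (144) = note on, 0x80 (128) = note off
    note = msg23 & 0x7F         # low byte of msg23: note number
    vel = (msg23 >> 8) & 0x7F   # high byte: velocity, same as (msg23/256)|0
    return status, note, vel

def note_to_freq(note: int) -> float:
    """Equal-tempered pitch: A4 (MIDI note 69) = 440 Hz,
    mirroring freq = 440 * 2^((note-69)/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# Note-on for A4 at velocity 100: msg23 packs (velocity << 8) | note.
print(parse_midi(0x90, (100 << 8) | 69))   # (144, 69, 100)
print(note_to_freq(69), note_to_freq(81))  # 440.0, 880.0
```

A note-on with velocity 0 is treated as a note-off, which is why the @block code checks both status 128 and "status 144 with vel == 0".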
Advantages of JSFX:
Limitations:
Advantages:
Limitations:
We have written a functional audio effect in just a few lines, added a simple graphical interface, and loaded the script as a CLAP / VST3 plugin on Fedora Linux. This required no compilation, no complex SDK, no cumbersome toolchain.
JSFX scripts don’t replace native C++ development when it comes to producing optimized, widely distributable plugins. However, they offer an exceptional environment for experimentation, learning signal processing, and rapid prototyping.
Thanks to YSFX, JSFX scripts now integrate seamlessly into the Linux audio ecosystem, alongside Carla, Ardour, and a PipeWire-based audio system.
For developers and curious musicians alike, JSFX provides a simple and immediate entry point into creating real-time audio effects on Fedora Linux.
A free collection of JS (JesuSonic) plugins for Reaper.
Code available at: https://github.com/chkhld/jsfx
To install this set of YSFX plugins:
$ dnf install ysfx-chokehold
YSFX plugins will be available at /usr/share/ysfx-chokehold.
Collection of JSFX effects.
Code available at: https://github.com/geraintluff/jsfx
To install this set of YSFX plugins:
$ dnf install ysfx-geraintluff
YSFX plugins will be available at /usr/share/ysfx-geraintluff.
Some JSFX effects from Cockos.
Code available at: https://www.cockos.com/jsfx
To install this set of YSFX plugins:
$ dnf install ysfx-jesusonic
YSFX plugins will be available at /usr/share/ysfx-jesusonic.
A bundle of JSFX and scripts for reaper.
Code available at: https://github.com/JoepVanlier/JSFX
To install this set of YSFX plugins:
$ dnf install ysfx-joepvanlier
YSFX plugins will be available at /usr/share/ysfx-joepvanlier.
LMS Plugin Suite – Open source JSFX audio plugins
Code available at: https://github.com/LMSBAND/LMS
To install this set of YSFX plugins:
$ dnf install ysfx-lms
YSFX plugins will be available at /usr/share/ysfx-lms.
Community-maintained collection of JS effects for REAPER
Code available at: https://github.com/ReaTeam/JSFX
To install this set of YSFX plugins:
$ dnf install ysfx-reateam
YSFX plugins will be available at /usr/share/ysfx-reateam.
Reaper JSFX Plugins.
Code available at: https://github.com/Justin-Johnson/ReJJ
To install this set of YSFX plugins:
$ dnf install ysfx-rejj
YSFX plugins will be available at /usr/share/ysfx-rejj.
Sonic Anomaly JSFX scripts for Reaper
Code available at: https://github.com/Sonic-Anomaly/Sonic-Anomaly-JSFX
To install this set of YSFX plugins:
$ dnf install ysfx-sonic-anomaly
YSFX plugins will be available at /usr/share/ysfx-sonic-anomaly.
TiagoLR collection of JSFX effects
Code available at: https://github.com/tiagolr/tilr_jsfx
To install this set of YSFX plugins:
$ dnf install ysfx-tilr
YSFX plugins will be available at /usr/share/ysfx-tilr.
JSFX Plugins for Reaper
Code available at: https://github.com/TukanStudios/TUKAN_STUDIOS_PLUGINS
To install this set of YSFX plugins:
$ dnf install ysfx-tukan-studio
YSFX plugins will be available at /usr/share/ysfx-tukan-studio.
[1] – https://www.cockos.com/jsfx
[2] – https://github.com/geraintluff/jsfx
[3] – https://github.com/JoepVanlier/ysfx
[4] – https://www.reaper.fm/sdk/js/js.php
[5] – https://audinux.github.io
[6] – https://copr.fedorainfracloud.org/coprs/ycollet/audinux
[7] – https://www.reaper.fm/index.php
[8] – https://github.com/falkTX/Carla
[9] – https://ardour.org
[10] – https://en.wikipedia.org/wiki/Envelope_(music)
[11] – https://jackaudio.org
RPMs of PHP version 8.5.4 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.4.19 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
ℹ️ These versions are also available as Software Collections in the remi-safe repository.
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.
Version announcements:
ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.5 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.5/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.5
dnf update
Parallel installation of version 8.5 as Software Collection
yum install php85
Replacement of default PHP by version 8.4 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.4/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.4
dnf update
Parallel installation of version 8.4 as Software Collection
yum install php84
And soon in the official updates:
⚠️ To be noticed:
ℹ️ Information:
Base packages (php)
Software Collections (php83 / php84 / php85)
RPMs of PHP version 8.5.3 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.4.18 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
ℹ️ These versions are also available as Software Collections in the remi-safe repository.
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.
Version announcements:
ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.5 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.5/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.5
dnf update
Parallel installation of version 8.5 as Software Collection
yum install php85
Replacement of default PHP by version 8.4 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.4/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.4
dnf update
Parallel installation of version 8.4 as Software Collection
yum install php84
And soon in the official updates:
⚠️ To be noticed:
ℹ️ Information:
Base packages (php)
Software Collections (php83 / php84 / php85)
Imagine that Fedora Workstation is your desk, and GNOME Shell extensions are small accessories you add to make it feel more personal. It’s like placing a pencil case on the right side, a lamp that helps you focus, or a small cabinet to keep your things from getting scattered. It’s the same desk—GNOME stays clean and minimal—but a few additions can make your routine more comfortable.
Extensions work on the GNOME interface: the top panel, the way you open applications, how notifications appear, and small details that usually stay hidden. These simple changes can be enough to make your Fedora Workstation feel different. With just one extension, you can make Fedora feel more “you.”
But like any accessories, choose only what truly helps—don’t install everything. Too many extensions can clutter your desktop or make things feel unstable. The goal isn’t to chase excitement, but to find a few small add-ons that better fit the way you work in Fedora Workstation.
Note: You will need to enable Flathub / third-party repositories in order to get Extension Manager.
Once you see extensions as small “accessories” for GNOME, a question comes up fast: how do you install them without the hassle? This is where Extension Manager helps.
Instead of opening many browser tabs, you can do everything in one place. You can browse extensions. You can search for what you need. You can also read a short description before installing. As a result, the whole process feels calmer and more familiar.
More importantly, Extension Manager makes it easier to experiment safely. For example, you can try one extension to make the top panel more useful. If it doesn’t feel right, you can simply turn it off. Or you can uninstall it in seconds. That way, you stay in control.
Also, you’re not “modding” your whole system. You’re only adding small features. And if you change your mind, you can always go back to GNOME’s clean default look.
In short, Extension Manager is like a small drawer on your desk. It keeps your extensions in one spot. So they’re easy to find, easy to try, and easy to tidy up again.
Let’s move to the easiest part: installing Extension Manager with just a few clicks. Open the Software app on Fedora Workstation, then search for Extension Manager using the search bar. Select the app and click Install. That’s it.
Once the installation is complete, open it from the app menu—look for Extension Manager. Now you’re ready to customize. Start slowly: try one extension first, then see if it fits your daily routine.

After you open Extension Manager, it can feel like opening an “accessories shop” for your Fedora Workstation. There are many options, from small tweaks to extensions that can change how you work.
Start with the search bar. Think about what you most often need in your day-to-day routine. For example, you might want quicker access to apps, tray icons for indicators, or a more informative top panel. When you find an extension that looks interesting, open its page for a moment. Read the short description, look at the screenshots, and then ask yourself whether it will really help your work flow.
If you’re sure, just click Install. In a few seconds, it will be installed, and you’ll notice the change right away. However, if it doesn’t feel right, don’t hesitate to uninstall it. At this stage, you’re simply trying things out—like picking the accessories that best fit your desk.

After you install a few extensions, you don’t have to stick with all of them. Sometimes an extension is useful, but you don’t need it all the time. That’s the nice thing about Extension Manager: you can enable or disable extensions at any time, without any drama.
Think of it like accessories on your desk. Some days you need a desk lamp to help you focus. On other days, you want your desk to stay clean and simple. Extensions work the same way. You can turn one on when you need it, and turn it off when you’re done.
If an extension has options, you’ll usually see a Settings or Preferences button. From there, you can tweak small details to match your style—icon placement, button behaviour, panel appearance, and more. This is what makes extensions feel personal. You’re not just installing something and forgetting it; you’re shaping it around your workflow.
And if one day your Fedora starts to feel too crowded, don’t panic. Just open the list of installed extensions and disable the ones you don’t need. Take it slow. The best customization isn’t about how many extensions you have, but how well they fit your daily activities.

At this point, you might start thinking, “Wow, there are so many things I can change.” And that’s true. However, if you want Fedora Workstation to stay light and comfortable, there are a few simple habits worth keeping in mind.
First, install extensions the same way you choose tools: only when you truly need them. If you stop using an extension after a few days, it’s better to disable it or remove it. A comfortable desktop isn’t the most crowded one—it’s the one with fewer distractions.
Second, try extensions one by one. If you install many at once, it’s hard to tell which one causes a problem. On the other hand, if you take it slowly, you can quickly feel what fits and what doesn’t.
Finally, remember that GNOME keeps evolving. Sometimes after a major update, an extension may not be ready yet. If something feels odd after an update, the safest move is simple: open Extension Manager and disable the extension you suspect. Once things are back to normal, you can wait for an update or choose an alternative.
In the end, Extension Manager isn’t a ticket to customize without limits. It’s more like a clean toolbox. If you use it with care and focus on what you really need, customization can stay enjoyable—without losing the clean, stable feel of Fedora Workstation.
Now you know how to customize your Fedora Workstation with Extension Manager. You’ve learned how to install the app, try a few extensions, and adjust their settings. And here’s the fun part: everyone ends up with a different mix of extensions, because we all have different needs and work styles.
If you have a favorite extension, share it. Which one do you rely on most, and what do you use it for? Maybe it helps you stay focused during presentations. Or maybe it makes the top panel more informative, brings back tray icons, or simply speeds up your work flow. Tell us why you like it, so others can picture the benefit.
Who knows—your list might inspire someone else. And you might also discover a new extension that fits your daily routine even better.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/116308267360944066