Another week has gone by, so it's time for another round up and some
longer-form information about the last week for me in Fedora infra.
deploymentconfig to deployment
I finally managed to merge the last of the pull requests moving our
applications from the old deploymentconfig (OpenShift-specific, deprecated)
to deployment (k8s, standard).
I'd like to thank Pedro my co-worker for all the pull requests.
Things were unfortunately annoying at times, as some things were not
working in staging (which I had to fix in order to test) or had
odd deploymentconfigs that needed tweaking.
Anyhow, it's all done, we are moved. Great to have that technical debt
all taken care of.
bunch of kernel security issues
As anyone following Linux news knows, there was a series of kernel security
bugs out this week. They were bad in that they were local-user-to-root
and easy to exploit. We pushed out Fedora kernel fixes for all of these on
Friday, but were delayed a bit by the next item below.
Folks are seeing a lot more security-related reports of late, and it is
indeed likely AI is helping find them. However, in all these cases as far
as I can tell, humans decided to explore the area and AI simply helped
them zero in on an exploitable path.
I'm sure there will be more. So, keep applying updates, make sure any
local users really need to exist, and in the end I hope we'll have a more
secure world.
builder capacity / speed
This last week also had discussion about s390x resources (on the fedora
devel list and in the FRCL meeting). s390x is definitely the arch
we have that hits backlogs and is 'slowest' (for some values of slowest).
I was fully aware of the 'long builds filling the pipeline' problem there.
That is: a bunch of builds that take a long time are submitted, and
they monopolize the builders, causing all other builds to just sit
and wait for one of them to finish. The solution to that is to have
more builders, so builds can always keep flowing. We can/do have this
for all the other arches, but we don't have nearly as many builders
on s390x. The other side of the balance however is that if you have
lots more smaller builders, those builds that normally take a long time
will take even longer. This problem happens perhaps a few times
a week (often it seems like Monday morning; perhaps lots of people
like to do builds then?).
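The builder-count trade-off can be made concrete with a tiny, purely illustrative Python simulation. This is not Koji's real scheduler; the job durations and builder counts are made up for the sake of the example:

```python
# Illustrative only: a handful of long builds can starve a small builder
# pool, while a larger pool lets short builds keep flowing around them.
import heapq

def total_wait(jobs, builders):
    """Assign jobs (durations, in submission order) to identical builders,
    each job going to the earliest-free builder; return summed queue wait."""
    free_at = [0] * builders            # time each builder becomes free
    heapq.heapify(free_at)
    wait = 0
    for duration in jobs:
        start = heapq.heappop(free_at)  # earliest available builder
        wait += start                   # time this job sat in the queue
        heapq.heappush(free_at, start + duration)
    return wait

# Five 10-hour builds submitted just ahead of twenty 1-hour builds:
jobs = [10] * 5 + [1] * 20
print(total_wait(jobs, builders=4))   # small pool: long builds block the rest
print(total_wait(jobs, builders=12))  # bigger pool: the queue keeps flowing
```

With four builders the five long builds monopolize the pool and every later build pays the wait; with twelve, the short builds barely queue at all, but of course each individual long build doesn't finish any faster.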
But there was also mention of ppc64le builds being slow.
We did already plan to get another power10 server later this year
because we can see that the two we have are pretty busy.
However, I don't see the slowness that people seem to see.
Taking at random the last rust build for rawhide:
x86_64 - 2 hours 20min
aarch64 - 2 hours 58min
ppc64le - 3 hours 1min
s390x - 4 hours 58min
So, yeah, s390x is slowest (likely because its builders have fewer CPUs).
But ppc64le is only a few min behind aarch64.
Of course that's just one random package. If you are seeing ppc64le
be wildly slower than the others, please file an infra ticket and we
can look into it. It might be something in the setup, the tools, a
specific builder having problems, or something else, but if we don't
know about a problem we can't look into it.
There were even some people mentioning aarch64 being slow, but
I cannot see that at all. We have a bunch of aarch64 resources now
and lots of builders, and they are all really fast. If you are seeing
aarch64 being slow, please do report that too. Again, it may be the
tools, a specific builder having an issue, or something else we cannot
do anything about, but we would like to know about the problem at the
very least.
I didn't see anyone saying x86_64 was slow, so I guess that's good?
s390x outage
Quite unfortunately, we had a complete outage of our s390x builders this
week. They failed on Wednesday morning and were back up Friday morning.
I wasn't working directly on the problem, just watching and conveying
information back from the folks doing the work to the community.
I'd like to commend the IBM techs and Red Hat folks working on this,
including the tech who came back at 1am to replace things.
I'm sure there will be a retrospective on this: why it happened
and what can be fixed so it doesn't happen again.
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward with some initiatives inside the Fedora project.
Week: 4 – 8 May 2026
Fedora Infrastructure
This team is taking care of day to day business regarding Fedora Infrastructure. It’s responsible for services running in Fedora infrastructure. Ticket tracker
Copyfail: updated and rebooted key Fedora hosts, applied kernel team recommended mitigation on key EL 9 hosts
CentOS Infra including CentOS CI
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure. It’s responsible for services running in CentOS Infrastructure and CentOS Stream. CentOS ticket tracker CentOS Stream ticket tracker
Release Engineering
This team is taking care of day to day business regarding Fedora releases. It’s responsible for releases, retirement process of packages and package builds. Ticket tracker
Post-release cleanup.
RISC-V
This is the summary of the work done regarding the RISC-V architecture in Fedora.
OpenJDK CVE updates Wrote up initial analysis. Created an F44 branch with three small RISC-V patches on the secondary dist-git. Kicked off a boot JDK build.
Flock prep: Started writing up notes about the updates for our talk on RISC-V. If logistics work out, we may bring a board or two for a demo.
Explore shipping some upcoming RVA23 hardware (“SpacemiT K3”) for JasonM and DavidA for Koji builders. Figure out approximate costs for it.
Continued with LLVM’s ‘libomp’ test failures investigation; it’s a slow grind.
Caught up with Adam Williamson’s nice talk on lessons from root cause analysis.
AI
This is the summary of the work done regarding AI in Fedora.
This team is taking care of quality of Fedora. Maintaining CI, organizing test days and keeping an eye on overall quality of Fedora releases.
Fedora QE Pagure -> Forge migration is now fully complete. The new version of Blockerbugs (that uses Forge instead of Pagure for blocker discussion tickets) is now deployed to production.
If you’re short on time, jump to the status brief below for the 30-second version. If
you have any suggestions for how to improve this space, hit me up in Matrix or email.
This week was another huge week of laying foundation – this time, with yours truly diving into
privacy policy and data protections in addition to infrastructure. What could go wrong?
Anne Ndung’u has joined us as our very first analyst to go through our new onboarding process!
(More on that below.) She is a Data Science student from Nairobi and a current Outreachy intern.
She has an interest in AI and is currently writing a paper on AI erasure and bias, which looks at
how data systems can sometimes overlook specific social contexts like gender and race. Her
background is in building predictive models for social sciences such as housing market trends.
We’re quite excited to put her skills to use here!
Now that our code repos are starting to get moving, we’ve decided that FDWG is big enough (and noisy
enough) that it warrants splitting off from CommOps into our very own space. This will make it
easier to find us and will set the stage for many other changes such as moving most of our repos
from my personal Codeberg to Forge and setting up an official Fedora Docs site.
FAS groups have been created, and now we’re waiting on our Forge space.
Additionally, this news feed will (hopefully soon) start appearing on Fedora
Planet.
Policy is one of the biggest foundational pieces which needs to be established / addressed across
Fedora in order to have a healthy, secure, and sustainable data practice. In short, Fedora’s
Privacy Statement and general GDPR stance
creates more questions than it answers, and it hasn’t been updated since 2018 when GDPR first became
a thing. That’s not sufficient for my taste.
Of course, I’m just one person, with zero authority and zero law degrees. So all I can do is take
care of my corner of the world and try to escalate. The lawyers still haven’t picked up the phone,
so I’ll try a more-differenter /dev/null.
Fedora’s Privacy Statement explicitly calls out that “Fedora is going to collect your activity data
and analyze it” (paraphrased). This is what GDPR calls a “Legitimate Interest”. Users agree to
this when they create their FAS account.
But who is Fedora, exactly? With a corporation’s employees or contractors it’s pretty clear who
is acting on behalf of the corporation, but this is much less clear when no paperwork has
transpired and anyone off the street can pick up a shovel and start digging.
If I take some Fedora data and do a bunch of evil things with it that violate GDPR, then did
@mwinters do that, or did Fedora do that? What’s the determining factor? Is a FAS account all
that’s needed for me to incur legal risk on behalf of Fedora / Red Hat / IBM?
…
We have been meaning for some time to lay out data handling guidelines for analysts, but I went one
step further to address this murky legal water. I reviewed our privacy stance / data operations /
etc extensively with Claude and iterated to create a Volunteer Analyst
Agreement. This agreement explicitly
calls out the “instructions” (required per GDPR) that Fedora is issuing to its analysts. As long as
analysts sign this agreement (digitally!) and are following these instructions, they are acting on
behalf of Fedora, so it is Fedora doing the analyzing.
Of course, this all has to go before Council
for final approval, and they may want Legal to sign off on it. I hope this doesn’t sound too
negative (we’re all very busy), but my realistic expectation for a response from Council is
somewhere between 6 and 36 months.
Lest that sound unfair (it almost certainly is since I’m terribly uninformed in this area), here is
my entire braindump of Council tickets and news that I can remember:
Somebody saying, “Hey, where is the report that you promised to release 18 months ago?”
An announcement that Council was releasing “annual” reports from 3 years ago.
A request from Framework to partner with us. After something like 2 years without a response,
they gave up.
…
We’re implementing this agreement now since it doesn’t make anything worse, and it potentially
covers my butt a little in the event of any major GDPR flare-ups. But I do wish it were possible to
take serious things seriously here.
This is also the first step towards correcting many other upstream GDPR issues while avoiding the
need to slam the door on all data operations (which I hope IBM legal will agree with me about!
Though they’ve broken my heart a dozen times in the past, so I hold no hope if this needs to go
through them.) Maybe we can get at least this much passed before the winds of fate conspire to blow
me elsewhere 🤞.
In short, we’ve made huge strides to open up the Hatlas infra. All of the Kubernetes configuration
is now available for anyone to work on, and the Ansible config is WIP for the same thing.
This meant deciding upon and implementing a secrets mechanism for shared ops, for which I’ve settled
on fnox + age. I honestly am
super happy with this middle ground between something like KMS / Vault and “YOLO”.
This also meant writing a stupid amount of docs, which is normally my favorite thing but I’m eager
to get past the foundation-building phase!
Since we’re getting a steady flow of new contributors lately, I’ve decided to tackle the last
remaining non-SSO service, which is Postgres. This means that it’s currently not accessible for new
users, but I’m WIP’ing as hard as I can, folks.
Flock is coming! And I think FDWG will have a slot for a presentation + workshop!
FDWG still has sooo many things on the TODO list to get done before I’ll feel ready to announce our
work to the world, but the flywheel is starting to accelerate here with the help of our new
contributors. I’m hopeful I’ll be able to prepare something interesting and worthy of other people’s
time and attention in the next 5 weeks…
TODO lists? Friends don’t let friends use repo issue trackers. But is there one with the great
organizational capabilities of Bugzilla? But with a lightweight UI? And OIDC? And open source?
Open to suggestions!
@smoliicek is almost fully onboarded and getting his head around our configs. He may be able to
help us get federated with FAS so that people can log in directly to Hatlas with their FAS
credentials.
The data dictionary is coming along nicely with some huge contributions this past week from
@evelynrp. (Thanks Evelyn!!) I’m really looking forward to finishing the POC of generating our
Datanommer SQLMesh pipelines from the dictionary via Argo Workflows.
We are almost in position to launch general shared tooling such as Superset! And they just
released a brand spanking new Operator.
(When will I learn that the cutting edge is aptly-named?)
Some good podcasts this week, I liked the ones about aging programmers, bees, and GLP-1s.
Also some scientific insights into super blocks.
Quote of the Week
You bring skills, your experience and your presence to a conversation with someone else who is far more experienced in their life and work than you are, even if they are not yet noticing it.
The Fedora ELN SIG maintains a tool called ELNBuildSync (or EBS) which is responsible for monitoring traffic on the Fedora Messaging Bus and listening for Koji tagging events. When a package is tagged into Rawhide (meaning it has passed Fedora QA Gating and is headed to the official repositories), EBS checks whether it’s on the list of packages targeted for Fedora ELN or ELN Extras and enqueues it for the next batch of builds.
A batch begins when there are one or more enqueued builds and at least sixty wallclock seconds have passed since a build has been enqueued. This allows EBS to capture events such as a complete side-tag being merged into Rawhide at once; it will always rebuild those together in a batch. Once a batch begins, EBS stops accepting messages from the Fedora Messaging Bus. The messages remain enqueued and awaiting processing. When the current batch is complete, EBS will resume accepting messages and a new batch will begin.
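The quiet-period rule above can be sketched in a few lines of Python. This is illustrative only; the class and method names are mine, not EBS's actual implementation:

```python
# Hypothetical sketch: a batch starts once the queue is non-empty and 60
# wallclock seconds have passed since the last build was enqueued, so a
# burst of tagging events (e.g. a merged side-tag) lands in one batch.
import time

QUIET_PERIOD = 60  # seconds of silence required before a batch may begin

class BatchQueue:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.pending = []
        self.last_enqueue = None

    def enqueue(self, build):
        self.pending.append(build)
        self.last_enqueue = self.clock()  # every enqueue resets the timer

    def ready(self):
        """True when a new batch should begin."""
        return (bool(self.pending)
                and self.clock() - self.last_enqueue >= QUIET_PERIOD)

    def start_batch(self):
        batch, self.pending = self.pending, []
        return batch
```

Because each enqueue resets the timer, a side-tag merge that tags many builds within a minute of each other is captured as a single batch rather than several.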
The first thing that is done when processing a batch is to create a new side-tag derived from the ELN buildroot. Into the new target associated with this side-tag (which will be referred to as the “build tag” from now on), EBS will tag most[1] of the Rawhide builds. It will then wait until Koji has regenerated the buildroot for the batch tag before triggering the rebuild of the batched packages. This strategy avoids most of the ordering issues (particularly bootstrap loops) inherent in rebuilding a side-tag, because we can rely on the Rawhide builds having already succeeded.
Once the preparations are complete, we divide the batch up into one or more “batch slices”. The EBS configuration file contains information about certain packages that must be built and added to the buildroot before or after other packages. (Most packages will be part of the same primary slice, but some packages must be built early, such as llvm). EBS triggers all of the builds for a batch slice in the side-tag concurrently, sourcing the content from the git commit that was used to build the triggering Rawhide build. This is to ensure we are building the same content, in case dist-git has received subsequent changes. EBS monitors these builds for completion. Internally, we call these “rebuild attempts”.
Once all of the tasks in a rebuild attempt have completed (successfully or not), EBS will trigger another rebuild attempt of the failures. While a heavyweight solution, this helps us avoid failures due to infrastructure outages, flaky tests and bootstrapping issues not covered by the Rawhide build tagging. Rebuild attempts will continue to be initiated until the same number of failures occurs twice in a row. At that point, we assume they are legitimate build issues and we continue on.
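The retry rule can be sketched as a small loop. This is hypothetical code, not EBS's; `rebuild` stands in for triggering a full rebuild attempt in Koji, waiting for its tasks, and returning the set of still-failing packages:

```python
# Sketch of the rebuild-attempt loop: keep retrying the failures until the
# same number of failures occurs twice in a row, then treat the remainder
# as legitimate build problems.

def retry_until_stable(packages, rebuild):
    failures = rebuild(packages)        # first rebuild attempt (whole slice)
    prev_count = None
    while failures and len(failures) != prev_count:
        prev_count = len(failures)
        failures = rebuild(failures)    # retry only what failed last time
    return failures                     # assumed legitimate failures
```

Transient problems (infrastructure hiccups, flaky tests, missing bootstrap dependencies that a sibling build later satisfies) shrink the failure set between attempts; once the count stops shrinking, retrying further would not help.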
Once all of the rebuild attempts have concluded, EBS moves on to the next slice in order until they have all completed, at which point, the next phase of operation begins: errata creation.
In earlier versions of ELNBuildSync, EBS would now tag all successful builds into the eln-updates-candidate tag and then remove the build tag. The effect of this would be to trigger Bodhi to generate an erratum for each individual package in the batch.
In modern versions of ELNBuildSync, it will now create a second[2] side-tag (call it the “errata tag”). EBS then tags all of the successful package builds from the batch into this new errata tag. This ensures that the tag contains only the new ELN builds and none of the Rawhide packages that were tagged into the build tag. From there, EBS talks to Bodhi via its public API and requests that a single Bodhi update erratum be created for all of the packages in this errata tag. This is done to reduce the load on Fedora QA, as the infrastructure there is far better equipped to deal with a single update of a few hundred packages than it is with a few hundred separate updates.
At this point, the batch is complete and EBS moves on to preparing another batch, if there are packages waiting.
History
In its first incarnation, ELNBuildSync (at the time known as DistroBuildSync) was very simplistic. It listened for tag events on Rawhide, checked them against its list and then triggered a build in the ELN target. Very quickly, the ELN SIG realized that this had significant limitations, particularly in the case of packages building in side-tags (which was becoming more common as the era of on-demand side-tags began). One of the main benefits of side-tags is the ability to rebuild packages that depend on one another in the proper order; this was lost in the BuildSync process and many times builds were happening out of order, resulting in packages with the same NVR as Rawhide but incorrectly built against older versions of their dependencies.
Initially, the ELN SIG tried to design a way to exactly mirror the build process in the side-tags, but that resulted in its own new set of problems. First of all, it would be very slow; the only way to guarantee that side-tags are built against the same version of their dependencies as the Rawhide version would be to perform all of those builds serially. Secondly, even determining the order of operations in a side-tag after it already happened turned out to be prohibitively difficult.
Instead, the ELN SIG recognized that the Fedora Rawhide packagers had already done the hardest part. Rather than trying to replicate their work in an overly-complicated manner, the tool would just take advantage of the existing builds. Now, prior to triggering a build for ELN, the tool would first tag the current Rawhide builds into ELN and wait for them to be added to the Koji buildroot. This solved about 90% of the problems in a generic manner without engineering an excessively complicated side-tag approach. Naturally, it wasn’t a perfect solution, but it got a lot further. (See “Why are some packages not tagged into the batch side-tag?” below for more details.)
A more recent modification to this strategy came about as CentOS Stream 10 started to come into the picture. With the intent to bootstrap CS 10 initially from ELN, tagging Rawhide packages to the ELN tag suddenly became a problem, as CS 10 needs to use that tag event as its trigger. The solution here was not to tag Rawhide builds into Fedora ELN directly, but instead to create a new ELN side-tag target where we could tag them, build the ELN packages there and then tag the successful builds into ELN. As a result, CS 10 builds were only triggered on ELN successes.
In late 2025, Fedora QA came to the ELN SIG and requested that we find some way to reduce the number of individual errata we were generating, as when they attempted to turn on automated testing for ELN, the result was an overload and significant queuing around mass-rebuilds and other large batches. When it got to the point that the Standard Operating Procedure for mass-rebuilds included disabling all the tests for ELN, it became clear that changes were needed and EBS was modified to start directly requesting errata for all the builds in the batch instead.
Frequently Asked Questions
Why does it sometimes take a long time for my package to be rebuilt?
Not all batches are created equal. Sometimes, there will be an ongoing batch with one or more packages whose builds take a very long time to complete (e.g. gcc, firefox, LibreOffice). This can lead to up to a day’s lag in even getting enqueued. Even if your package was part of the same batch, it will still wait for all packages in the batch to complete before the tag occurs.
As of this writing, we are currently investigating having certain extremely large packages built and tagged directly and without batching in order to shorten the average batch time.
Why do batches not run in parallel?
Simply put, until the previous batch is complete, there’s no way to know if a further batch relies on one or more changes from the previous batch. This is a problem we’re hoping might have a solution down the line, if it becomes possible to create “nested” side-tags (side-tags derived from another side-tag instead of a base tag). Today however, serialization is the only safe approach.
Why are some packages not tagged into the batch side-tag?
Some packages have known incompatibilities, such as libllvm and OCaml. The libraries produced in the ELN build and Rawhide build are API- or ABI-incompatible and therefore cannot be tagged in safely. We have to rely on the previous ELN version of the build in the buildroot.
Why do you not tag successes back into ELN immediately?
Despite the fact that we do not block ELN builds going to the stable repository based on test results, we do want to know about and address any issues revealed. Many packages are interdependent and it’s far simpler to test the result of all the builds collectively, once we know they have all been rebuilt.
[1] There are certain packages that we exclude from this so that the Rawhide package is not used in the ELN buildroot; see the skip_tag section of the configuration file for the current set.
[2] In the case of very large batches (such as mass-rebuilds), the set of packages may be split into more than one Bodhi update, to avoid overtaxing things.
This is a report created by the CLE Team, which is a team containing community members working in various Fedora groups, for example, Infrastructure, Release Engineering, Quality, etc. This team is also moving forward with some initiatives inside the Fedora project.
Week: 27 April – 01 May 2026
Fedora Infrastructure
This team is taking care of day to day business regarding Fedora Infrastructure. It’s responsible for services running in Fedora infrastructure. Ticket tracker
Put risc-v Koji behind anubis and tightened its robots.txt to mitigate a scraper flood
Somehow managed to keep the lights on with Kevin on PTO, teamwork win!
Status.fpo webpage does not pick up status changes merged into the status GitHub repo
Signing problem with the latest rawhide kernels
vmhost-x86-copr04 rebooting
CentOS Infra including CentOS CI
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure. It’s responsible for services running in CentOS Infrastructure and CentOS Stream. CentOS ticket tracker CentOS Stream ticket tracker
ELNBuildSync is unable to connect to Fedora Messaging
Add GenericCloud images tests on testing.stream.centos.org
Please add ogajduse to ocp-cico-foreman
Fix openshift underlying issue (investigate) and upgrade
Replace sigs.centos.org infra with el10 host
Modernize centos mirror network CDN
Create an “ansible-init” tool or container
Release Engineering
This team is taking care of day to day business regarding Fedora releases. It’s responsible for releases, retirement process of packages and package builds. Ticket tracker
Release of Fedora 44 and torrents.
Automation works for Mass Branching Bits.
Unretirement workflow is now being processed from the fedpkg command; I found one bit to solve, which is being worked on.
Samyak is trying to solve the dependency checker, which didn’t account for alternative providers and rich dependencies.
RISC-V
This is the summary of the work done regarding the RISC-V architecture in Fedora.
F44 rebuild update (community work): 60% of rebuild is done so far. (This is expected as we’re with reduced Koji builders for a little while.)
LLVM related debugging (this is slow-grind work)
Started a local rebase of my Fedora LLVM patches in my fork. I still have to test it on P550 and submit for review. The builds are time-consuming; it’ll be a background task over the week.
Resumed debugging ‘libomp’ test suite failures. Set up an environment on DP1000.
Explored what it takes to create a shadow Koji instance for RISC-V that tracks Rawhide. DavidA already tried several years ago and had to drop it due to deep-sync issues; there are some workarounds to deal with that today. (Half-baked ideas: we could write an updated koji-shadow, but that's time-intensive; or experiment with a local copy, since we don’t want to mess with the active RISC-V Koji instance. Either way, this needs a capable “volunteer” with time to try it out.)
Investigated and ordered an eGPU (external GPU) dock; last time I looked, the one we needed was out of stock. Luckily it came back in stock. This dock helps us drive Tenstorrent’s AI accelerator card.
QE
This team is taking care of quality of Fedora. Maintaining CI, organizing test days and keeping an eye on overall quality of Fedora releases.
Executive summary: F44 signed off and released, LFNW, blockerbugs forgejo migration and test enablement work
awilliam attended LinuxFest Northwest (with brendan, carl and gordon), gave a well-attended talk on root-causing theory and practice – video, slides – and did booth duty
PR to add openQA test of installing with a kickstart containing a non-existent package is blocked because anaconda handling of this is currently broken, anaconda team working on a fix
Blockerbugs port to Forgejo is merged, deployed to staging and undergoing testing
Fedora 44 was announced last week: syslog-ng 4.11 is part of it. While checking the Fedora Copr build service for Fedora 44, I realized that CentOS 7 and Amazon Linux 2023 packages are also there. I have a few questions about those for you!
syslog-ng logo
Fedora 44
The availability of the Fedora 44 release was announced last week. Version 4.11 of syslog-ng, the current latest release, is part of it. As usual, I did a quick test: everything works as expected.
RHEL 6
The removal of RHEL 6 packages from Copr was announced many years ago. Then, the countdown was silently canceled. I have just checked: RHEL 6 packages are no longer available, so I deleted all my related repositories. Also, I deleted a couple of temporary test repos along the way.
CentOS 7
When talking to product manager friends around the world, I realized that syslog-ng is not an “enterprise” application. “Enterprise” developers are still actively maintaining packages for RHEL 6, even though RHEL 7 reached end of life a long time ago. Of course, this is just a satirical definition of “enterprise”, at least in my view…
Support for RHEL 7 / CentOS 7 was dropped in syslog-ng a month before the distro became end-of-life. Copr announced the deletion of CentOS 7 packages, but after a while, the countdown suddenly disappeared. Packages built 10+ years ago are still here, and CentOS 7 is still a valid build target.
Question: is there anyone still using my syslog-ng packages on RHEL 7 / CentOS 7? Otherwise, I would be happy to delete anything related from my Copr repositories. I could delete many repositories, save storage, and I would not have to deselect them as a build target during package builds.
Amazon Linux 2023
Another question mark in my mind is Amazon Linux 2023 support. If we can believe the download statistics provided by Copr, then this is one of the most popular syslog-ng repos on Copr. However, over the years, I only received a single piece of feedback about it, which was on Twitter years ago: “Thanks, I use it.” That is all. While there were regular requests to create these packages, nobody asked for features, updates, or anything else. The repo is still stuck at syslog-ng version 4.8.
Question: should I update syslog-ng to a more recent version, as time permits?
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.
RPMs of PHP version 8.5.6RC1 are available
as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
as SCL in remi-test repository
RPMs of PHP version 8.4.21RC1 are available
as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
as SCL in remi-test repository
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.
The intersection of open source development and artificial intelligence (AI) is currently a minefield of valid excitement and legitimate anxiety.
Recently, the Fedora Project introduced the Fedora AI-Assisted Contributions Policy.
The full policy text outlines the basic ground rules.
I was informally involved in the drafting process.
Since its rollout, I am actively adjusting my own workflow to comply with its requirements.
For those outside the Fedora ecosystem, the policy establishes basic ground rules for how AI can be used in the project.
It operates on three main pillars:
Accountability:
You can use AI, but you own the output.
The human contributor is always the author and fully accountable for the submission’s quality, license compliance, and utility.
Transparency:
If a significant part of a contribution is taken unchanged from an AI tool, you must disclose it (typically via an Assisted-by: commit trailer).
Evaluation Limits:
AI cannot act as the final judge on substantive contributions or evaluate a person’s standing within the community.
As someone who relies on these tools, I want to share my perspective on why this policy matters, the harsh realities of enforcing it, and why we must tactically embrace AI to protect the future of software freedom.
The "Accountability" Reality and the Threat of "Slop"
On paper, the Accountability clause looks like a strong deterrent against low-quality, automated spam.
In reality, it functions primarily as a legal liability shield.
If a contributor breaks a system by submitting AI-generated falsehoods, the policy simply puts the fault at their feet.
It does not, however, stop the spam.
We saw this clearly during a recent round of the Outreachy internship program.
We had a record number of contributions, but very few were actually merged.
The volume of noise and sloppy contributions ballooned compared to past rounds.
Many newcomers ignored the transparency mandate and flooded our git repositories with low-quality contributions.
What actually deterred this "AI slop" wasn’t the text of the policy; it was the tangible enforcement mechanism of internship eligibility.
The primary consequence of submitting AI spam isn’t legal prosecution.
It is reputational damage.
The Ethical Elephant in the Room
This brings us to the loudest objection from the open source community: the ethics of AI generation.
Many maintainers deeply feel that LLMs are fundamentally engaged in theft.
Maintainers recognize that the companies behind the LLMs extracted immense value from 40+ years of open source material without contributing value back, bypassing licenses and author attribution entirely.
I sympathize with the maintainers who hold this objection.
It is frustrating to watch value extracted without reciprocity.
However, I do not share the same sense of existential panic as other maintainers.
My view is that the old rules simply no longer apply.
Open source projects must evolve to survive.
Instead of playing defense and fighting a philosophical war we cannot win, we need to think about how to tactically use AI to advance the interests of software freedom in a time when it has never been more threatened.
The Fedora AI-Assisted Contributions Policy's Dual Mandate
When used responsibly, AI has the potential to be a massive equalizer.
It can lower the barrier to entry for non-native English speakers drafting documentation and help junior developers navigate legacy codebases with decades or more of context.
Democratizing access is how we attract the next generation of open source contributors and strengthen the social fabric of our community.
This includes Fedora!
But there is a dark side to this democratization.
Dumping a massive volume of low-effort, AI-facilitated contributions onto the laps of Fedora maintainers is a recipe for disaster.
Many Fedora contributors are already stretched thin, doing more than is required of them.
Forcing them to review a tidal wave of "AI slop" breeds deep resentment; to them, it feels like the active "enshittification" of our own community.
This is why we cannot focus solely on the people side of AI inclusion.
We must carefully balance it by providing maintainers with the resources, support, and AI-facilitated tooling they need to survive.
Practical improvements, such as using AI for advanced automated testing, scaling review workflows, and complex code refactoring, need deeper research and evaluation.
We should use AI to improve the quality and efficiency of our community’s work, ensuring that the same technology that lowers the barrier to entry also helps maintainers manage the gates.
If we handle this correctly, AI won’t replace the community.
It will empower a more inclusive, diverse, and heavily-fortified one.
The Fedora AI-Assisted Contributions Policy is our first step.
Review the official Fedora Project policy to see the framework in action.
Let’s keep the process open, keep the declarations honest, and tactically build the future of open source together.
During my work on the RISC-V 64-bit architecture port of Fedora, I created
several pull requests to Fedora packages. And some were stalled…
Non-responsive maintainer process
The Fedora project has a process called ‘non-responsive maintainer’.
You check whether the maintainer is on vacation, check their latest activity, and open
a bug asking for action.
The problem was that it linked to the fedora_active_user.py script,
which has not worked since Fedora 41. During that release cycle the
python-fedora package was retired and no one updated the script.
Let me look
As my actions brought some complaints (and some discussions), I decided to take a
look at the script and make it work with current Fedora releases. I created a pull
request, mailed the original author, etc.
There was no answer of any kind, so I decided to take over maintaining the
script. I rewrote it to be Python 3 only, moved from urllib to requests,
refactored some repeated code into functions, etc.
Then I started checking, service by service, how to get things working better.
It turned out that the script had several assumptions which do not always apply.
FAS has separate email for Bugzilla
The Fedora Accounts Service (FAS) has a separate field for the Bugzilla email. I did
not have to look for test accounts for this because that’s my own case — I use my
‘short’ Red Hat email in Bugzilla due to the Single Sign-On (SSO) service we use, and
my ‘long’ one for everything else. So the fedora-active-user script grabs user
information from FAS, checks for a separate Bugzilla email, and uses it if present.
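A minimal sketch of that selection logic (the field names `rhbzemail` and `emails` are my assumptions about the FAS user record, not verified API details):

```python
def bugzilla_address(fas_user: dict) -> str:
    """Pick the email address to use for Bugzilla queries.

    Prefers the dedicated Bugzilla field when FAS provides one,
    otherwise falls back to the primary address.
    Field names here are illustrative assumptions.
    """
    return fas_user.get("rhbzemail") or fas_user["emails"][0]
```

With an account like mine, this would return the ‘short’ Red Hat address for Bugzilla lookups and the ‘long’ one everywhere else.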
FAS query requires Kerberos
To query FAS you need a Kerberos ticket. Both the urllib and requests packages
have a way to use it for authentication — one extra package is needed to make
it work.
The lack of a valid ticket is caught and the user is informed.
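One simple way to fail early with a friendly message is to check for a ticket before issuing the query. This sketch shells out to `klist -s`, which exits non-zero when there is no valid ticket; the actual script may handle this differently:

```python
import shutil
import subprocess


def have_kerberos_ticket() -> bool:
    """Return True if `klist -s` reports a valid (non-expired) ticket."""
    if shutil.which("klist") is None:
        return False  # no Kerberos client tools installed at all
    return subprocess.run(["klist", "-s"], capture_output=True).returncode == 0


if not have_kerberos_ticket():
    print("No valid Kerberos ticket found — run `kinit` first.")
```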
Bugzilla is tricky
Querying the Bugzilla service is the trickiest part. You can request data, but there
is no guarantee that you get the latest entries. Sure, there is an ‘order’ field for
a query, but it feels like a mere suggestion. It is not unusual to get 2008
entries next to 2023 ones.
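Given that, a defensive client can simply re-sort whatever Bugzilla returns; ISO 8601 timestamps sort correctly as plain strings. The `last_change_time` field name matches the Bugzilla REST API, but treat the rest as a sketch rather than the script's actual code:

```python
def latest_first(bugs: list[dict]) -> list[dict]:
    """Order Bugzilla results newest-first ourselves, since the
    query's 'order' parameter cannot be fully trusted."""
    return sorted(bugs, key=lambda b: b["last_change_time"], reverse=True)
```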
Wanna help?
For now, I am hosting
fedora-active-user on GitHub. I will
move it to Fedora Forge later this year. Feel free to open issues or send pull
requests if you have suggestions or changes.
The current version is not the best one, but it is a bit better than it was two weeks ago.
At the moment the package is present in Fedora Rawhide. I am waiting for branches
for stable releases, and updates will follow.
Example output
$ fedora-active-user --user hrw
Last action on koji:
2026-05-04 built fedora-active-user-26.05.04-1.fc45
2024-09-12 built python-system-calls-6.11.0-1.fc42
2024-01-08 built python-system-calls-6.7.0-1.fc40
2023-09-18 built python-system-calls-6.6.0-1.fc40
2023-05-08 built python-system-calls-6.4.0-2.fc39
2022-08-06 built python-system-calls-5.19.0-2.fc37
2022-07-25 built python-system-calls-5.19.0-1.fc36
2022-07-25 built python-system-calls-5.19.0-1.fc37
2022-01-10 built python-system-calls-5.16.2-1.fc36
2021-11-15 built python-system-calls-5.16.0-1.fc35
Last package updates on bodhi:
2026-05-04 fedora-active-user-26.05.04-1.fc45
2024-09-12 python-system-calls-6.11.0-1.fc42
2024-01-08 python-system-calls-6.7.0-1.fc40
2023-09-18 python-system-calls-6.6.0-1.fc40
2023-05-08 python-system-calls-6.4.0-2.fc39
2022-08-06 python-system-calls-5.19.0-2.fc37
2022-07-25 python-system-calls-5.19.0-1.fc36
2022-07-25 python-system-calls-5.19.0-1.fc37
2022-01-10 python-system-calls-5.16.2-1.fc36
2021-11-15 python-system-calls-5.16.0-1.fc35
2021-11-15 python-system-calls-5.16.0-1.fc36
2021-09-21 python-system-calls-5.15.5-1.fc36
Last actions performed according to fedmsg:
2026-05-04 hrw commented on the pull-request rpms/prusa-slicer#67
2026-05-04 hrw's Badges rank changed from 272 to 260
2026-05-04 hrw was awarded the badge `Missed the Train`
2026-05-04 hrw commented on update fedora-active-user-26.05.04-1.fc45 (karma: 0)
2026-05-04 fedora-active-user-26.05.04-1.fc45 was tagged into f45 by bodhi
2026-05-04 fedora-active-user-26.05.04-1.fc45 was untagged from f45-updates-candid
2026-05-04 hrw's fedora-active-user-26.05.04-1.fc45 bodhi update has met stable te
2026-05-04 fedora-active-user-26.05.04-1.fc45 was untagged from f45-updates-testin
2026-05-04 fedora-active-user-26.05.04-1.fc45 was tagged into f45-updates-testing-
2026-05-04 fedora-active-user-26.05.04-1.fc45 was untagged from f45-signing-pendin
Last emails on Fedora mailing lists:
2026-04-29 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
Bugzilla activity (may not be the latest):
No activity found on Bugzilla
Looks like I still need to work on querying Bugzilla ;D
I'm back from my vacation, so time for another weekly recap...
Vacation
The week before last I had a lovely time away in Hawaii (the Big Island).
I saw volcanoes (we missed the lava fountaining by like 15 minutes), lava
tubes (really cool (literally) and dark), botanical gardens (unreal flowers),
had a dinner/sunset cruise with history, and finally a sunset/stargazing
trip to the top of Mauna Kea. Super fun! I wish I had another week there to
lounge on the beach. If you ever have a chance to go, take it!
I did look at my email and such the first day or so, but after that
I was too busy and never even took my laptop out until I got back.
Fedora 44 released!
Of course, the first thing Monday on getting back was that we were go for
the Fedora 44 release on Tuesday!
Release went pretty smoothly overall and I hope everyone enjoys the release.
Infra freeze ends
Of course, with the release on Tuesday, we ended our infrastructure freeze on
Wednesday. For some reason this time we had a pretty big pile of pending pull
requests, which I attempted to merge and deploy.
The bulk of them were moving our OpenShift applications from deploymentconfig
(which was an OpenShift-specific object) to deployment (which is a k8s native
object). OpenShift still supports deploymentconfig, but it will go away,
and it spews deprecation notices, so the sooner we get moved the better.
I ran into some problems with a few applications that had preexisting
issues in staging when I went to test there. There were also some problems
on some applications with selectors (where it chooses how to map a service
onto a deployment). In one case (fmn), the app had two builds for two
different things; one of them was a newer API version and updated
the database, but then the second one couldn't handle that. I had to update
it upstream to get the DB versions to match.
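For context, the selector wiring that can cause trouble looks roughly like this (names and ports here are illustrative, not our actual manifests): a Service routes traffic to pods by label, and the Deployment's selector and pod template must carry matching labels, with the selector being immutable once created.

```yaml
# Illustrative only: a Service and Deployment wired together by labels.
apiVersion: v1
kind: Service
metadata:
  name: fmn-web
spec:
  selector:
    app: fmn            # must match the pod labels below
  ports:
    - port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fmn-web
spec:
  selector:
    matchLabels:
      app: fmn          # immutable once created; must match template labels
  template:
    metadata:
      labels:
        app: fmn
    spec:
      containers:
        - name: web
          image: example/fmn:latest
```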
Anyhow, there are only a very few left now. I'm looking forward to being done
paying down this tech debt. :)
scrapers
What weekly recap would be complete without some scraper news? :)
This time they started hitting cgit links on fedorapeople.org (where
contributors can have git repos). I set up Anubis there, which mostly
quashed them. That did break some redirects though, so we will need
to fix that.
Scrapers have also been hitting the wiki pretty hard from time to time.
It's not easy to just put that behind Anubis because it's in the base
fedoraproject.org domain and we don't want some things there behind it.
For now we just increased resources for the backend, but we will probably
have to figure out how to set up Anubis there before long.
Loadouts for Genshin Impact v0.1.16 is OUT NOW, adding support for recently released characters like Linnea and recently released weapons like Golden Frostbound Oath from Genshin Impact Luna VI (v6.5 Phase 2). Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.
Automated dependency updates for GI Loadouts by @renovate[bot] in #516
Resolve the SEST assets alignment issue by @gridhead in #524
Change the default artifact to None by @gridhead in #526
Update dependency pillow to v12.2.0 [SECURITY] by @renovate[bot] in #527
Update dependency pytest to v9.0.3 [SECURITY] by @renovate[bot] in #528
Resolve the DGFT assets alignment issue by @gridhead in #525
Switch packaging from PyInstaller to Nuitka by @gridhead in #518
Update GitHub workflows with Fedora Linux 43 by @gridhead in #523
Update dependency python to 3.13 || 3.14 by @renovate[bot] in #529
Update dependency nuitka to v4 by @renovate[bot] in #530
Automated dependency updates for GI Loadouts by @renovate[bot] in #517
Introduce the recently added weapon Golden Frostbound Oath by @gridhead in #533
Introduce the recently added character Linnea to the roster by @gridhead in #532
Document the evolving Windows SmartScreen errors by @gridhead in #541
Stage the release v0.1.16 for Genshin Impact Luna VI (v6.5 Phase 2) by @gridhead in #542
Automated dependency updates for GI Loadouts by @renovate[bot] in #534
Characters
One character has debuted in this version release.
Linnea
Linnea is a bow-wielding Geo character of five-star quality.
Linnea - Workspace and Results
Weapons
One weapon has debuted in this version release.
Golden Frostbound Oath - Workspace
Appeal
While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.
Disclaimer
With an extensive suite of over 1558 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.
The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.
All rights to Genshin Impact assets used in this project are reserved by MiHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.
I’m taking a stab at more-frequent updates plus a structure around those. I personally hate to
“announce” things that are highly fluid / tentative because I try hard not to propagate noise, but I
think I can do better than once every 6 months.
If you’re short on time, jump to the status brief below for the 30-second version. If
you have any suggestions for how to improve this space, hit me up in Matrix or email.
And PSA: if you haven’t seen https://copy.fail/ yet, spit out your coffee now and run.
Evelyn Park is a Master of Library and Information Science
(MLIS) student by day and
FDWG contributor by night. I absolutely love that she has joined FDWG because “Information Science”
is exactly what we’re trying to do here, and it’s an often-invisible and under-appreciated
pre-requisite to the sort of number crunching we want to do. (It’s stunningly easy to accidentally
crunch the wrong numbers / with the wrong assumptions!) We are very lucky to be able to put
Evelyn’s academic training in taxonomy, information architecture, etc. to use here!
She’s also a perfect example of how “open source” work doesn’t just mean “code”. That said, she’s
already contributed her first PR to the Datanommer data
dictionary! This sort of “boring” and
“abstract” foundational work will underpin all of our data pipelines and eventually our analysis of
Datanommer data. We’re very grateful for her contributions. Welcome to open source, Evelyn!
Vít Smolík is a student from the Czech Republic and a trusted member of the Fedora Infrastructure team.
He’s been contributing his talents to Fedora’s core infrastructure for nearly a year now, and he’s
recently offered to help run the Hatlas infra too. FDWG caught his eye with our HTTP logs
POC, and he’s excited about the operational
value this could provide.
Ever since I launched Hatlas there have been too many Infra yaks to shave for me to get my hands
dirty with any analytics work. I’m very hopeful that with a little help from Vit we can change
that soon, and start producing some actionable insights.
The big focus this past week has been on moving Hatlas towards shared ownership of the
infrastructure. This was always the plan, but it became a priority a little over a week ago when
@smoliicek offered to help.
Hatlas has always been my “quick and dirty personal dev environment”, but now it’s quickly becoming
prod. (Or perhaps, “staging” in a long-term roadmap view.) Time to take a step up the maturity curve!
I’ve been slowly working towards “separation of concerns” in the infra code for some time and
applying opportunistic refactors as I go, but it’s time to finish the job. This will position the
“dev” portion for a (hopefully) clean lift into Fedora Infra at some point.
Status:
Basic security best practices:
[x] Stock k8s governance tools such as NetworkPolicies (via Cilium), etc
[x] Kyverno
[x] Kubescape
[ ] Alerting
Keycloak SSO
[x] Configure all things RBAC and SSO: Kubernetes apiserver, ArgoCD, Headlamp, etc etc
[ ] Observability tools
Refactor code for shared access
[WIP: 90%] Refactor Ansible code into “k8s” and “hatlas”
[WIP: 70%] Refactor ArgoCD code into “core” and “fdwg”
I intend for any trusted FDWG member to have at least read-only access to all of our infra tooling,
so let me know if you’re interested.
I’ve been reviewing both the Hatlas and Fedora GDPR stance with the help of Claude Opus because my
infosec / compliance / governance background has been setting off alarms in my head ever since I
started working on this data. This has caused me to hold off on being as public as I’d like with
many aspects of Hatlas, partially because I don’t want to stir up community distrust.
On a side note: I’d much prefer to work with a real human lawyer here, but the friendly ones at
Red Hat seem to be occupied in their queue to join the Borg (IBM legal), and let’s just say I’m not
in any rush to get a response from them. (Ask me about my scar tissue!)
Suffice to say I’m working on both some policy items and technical items.
Now that Hatlas has SSO capabilities, I’ve decided to change the Datanommer canonical
parquet downloads to require
a login. The general push here is that all data analysis activity must be directly tied to Fedora,
and we can’t assert that this is true if we’re making data available to the general public.
Working through the OIDC specifics was interesting, but I think the final result is almost as easy
to use as the unauthenticated version, so I plan to poke at getting similar implemented upstream.
Flock is coming! And I think FDWG will have a slot for a presentation + workshop!
FDWG still has sooo many things on the TODO list to get done before I’ll feel ready to announce our
work to the world, but the flywheel is starting to accelerate here with the help of our new
contributors. I’m hopeful I’ll be able to prepare something interesting and worthy of other people’s
time and attention in the next 6 weeks.
The data dictionary is coming along nicely. After the current round of Infra enablement work I’d
like to finish the POC of generating our Datanommer SQLMesh pipelines from the dictionary via Argo
Workflows.
The super-secret Fedoran docs and this news feed have been kept up to date, but the public side of
Hatlas is still way overdue for some updates. We also have a goal to create official FDWG docs.
I’m not sure what order this will happen in.
Quite a few articles about AI costs getting out of control.
For something different I recommend the podcast about the origins of Rose Bikes.
Quote of the Week
What they found instead was that “who is on a team matters less than how the team members interact, structure their work, and view their contributions” (Google 2015). In other words, it all comes down to team dynamics.
GNOME is once again participating in GSoC. This year, we have 6 contributors working on adding Debug Adapter Protocol support to GJS, incorporating vocab-style puzzles into GNOME Crosswords, creating a native GTK4/Rust rewrite of the Pitivi timeline ruler, porting gitg to GTK4, implementing app uninstallation in the GNOME Shell app grid, and enabling recovery from GPU resets.
As we onboard the contributors, we will be adding them to Planet GNOME, where you can get to know them better and follow their project updates.
GSoC is a great opportunity to welcome new people into our project. Please help them get started and make them feel at home in our community!
Special thanks to our community mentors, who are donating their time and energy to help welcome and guide our new contributors: Philip Chimento, Jonathan Blandford, Yatin, Alex Băluț, Alberto Fanjul, Adrian Vovk, Jonas Ådahl, and Robert Mader.
A new upstream version of Libblockdev was released on Monday (April 27th) – 3.5.0. This release brings both new functions and a large number of bug fixes.
Btrfs plugin: recursive deletion of subvolumes and device stats
Two new functions have been added to the btrfs plugin.
bd_btrfs_delete_subvolume_recursive can be used to remove a btrfs subvolume and all its children in one call. This was prompted by an issue reported for blivet-gui requesting this feature. While we could do this manually, using the new btrfs --recursive option can noticeably speed up this operation.
bd_btrfs_device_stats can be used to get device statistics for a btrfs volume. This is equivalent to the btrfs device stats command. In the future we want to bring this functionality to UDisks and ultimately to Cockpit, which requested this feature.
Crypto plugin: open with flags
All “open” functions in the crypto plugin were extended to allow passing the activation flags to libcryptsetup. This for example allows enabling discard when opening encrypted devices. These new functions will be used in the next version of UDisks to correctly support options specified in /etc/crypttab.
NVMe plugin: listing namespaces for a controller
bd_nvme_find_namespaces_for_ctrl is a new utility function to find all namespaces associated with a given controller. This mirrors the existing bd_nvme_find_ctrls_for_ns function but performs the reverse operation.
Bug fixes
The biggest chunk of the changes in this release are various bug fixes. As described in my previous post, we started experimenting with using LLMs for code review, and for libblockdev the result is over a hundred issues fixed: mostly memory and resource leaks, logical issues in error paths, and various issues in tests and documentation.
At the time of writing, libblockdev 3.5.0 is already available in Fedora Rawhide and Debian Unstable and on its way to Fedora 44. The latest builds for testing and bleeding-edge enthusiasts can be always found in our Copr repository.
I read more in this stretch than made it into this post. As I sorted it, I realized the leftovers would turn into snark about Kash Patel, a pointer to the Sam Altman bio/promo piece, or an essay about Sugey Amaya as a case study in systems gone bad. You don’t need me for that.
Instead, here are some pieces on the Czech Republic, technology, being American, and more.
Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.
The way Europe considers poverty is refreshing. There is a concept called ‘social deprivation’ that measures whether individuals and families can participate in society. The score is based on the ability to do things like afford a week’s vacation away from home. The NGO in this article built a “‘minimum decent wage’ for full-time work in the Czech Republic, [which is] enough to cover the needs of an adult with a child, leisure time and some savings.” This calculation came out to CZK 48,336 per month and compares very favorably to the average gross wage of 49,215 per month. On the face of it this is great. However, the median wage was only 45,523 and the statutory minimum is 20,000. There is still work to do.
When I first moved to the Czech Republic, I noticed that business names were often ‘odd.’ It is common to see the legal name of a business printed in small print on a receipt or even on the window or front door of the shop. Many were just names and some seemed completely unrelated to the store I was in. As I understand it, the names are a vestige of the way non-corporations are organized here. Briefly, you have to “wear your own skin” into the marketplace. So if you are forming a single-person entity (self-employment and the like) your name is the name of the entity. Getting a DBA (Doing Business As) doesn’t remove your name from the business, instead it just tacks on more words.
But the unrelated names had a much more interesting backstory. It was, as I understood it, common to start a business by buying an existing one. This was done because the established business had a track record with all the banks and government ministries. It was already registered and on the books. You didn’t have to fight the red tape, you just assumed the work of someone who had already done it. It also meant that your restaurant might be called, “Alpha Shoes” (a fictitious example). Thankfully the red tape has lessened and I understand this practice has died out.
All of this is to say that I don’t plan to buy my AI inferencing from a shoe company.
all three of the remaining Brno K2 cars (including the retro car and Party Šalina) in operation on the Červinkova–Česká–Maloměřický most route. The program will culminate with a convoy of all three cars through the city centre, with a photoshoot on namesti Svobody at 5pm.
Sadly the Party Tram is no more, but we still have the Cafe Tram!
I understand why the word ‘lazy’ is used here. But we are not doing ourselves any favors by redefining words away from their connotative meanings to prove points. The primary officially stated reason we had to invent ‘open source’ was because ‘free software’ is a language conflict that gives us the even more tortured ‘FLOSS.’ Let’s do ourselves favors and be direct. The criticism of unchecked LLM software bloat is valid. Don’t hide that great point behind something most people don’t want to be called.
The harness is what you add to the agent. The AGENTS.md files. The skills. The custom MCP tools. The hand-crafted linters. The system prompts. The recipes and subrecipes. The extension configurations. The provider choices. The permission policies.
I loved this definition of harness. It fits so well what most people are trying to do in their workflows, just like we do with our operating systems and desktops. I find it highly ironic that it comes out via a write up on a project dedicated to building … agents and harness frames.
Claude 4.6 had a section specifically clarifying that “Donald Trump is the current president of the United States and was inaugurated on January 20, 2025”, because without that the model’s knowledge cut-off date combined with its previous knowledge that Trump falsely claimed to win the 2020 election meant it would deny he was the president. That language is gone for 4.7, reflecting the model’s new reliable knowledge cut-off date of January 2026.
Claude avoids saying “genuinely”, “honestly”, or “straightforward”.
In the context of our current world, somehow these things feel right and wrong at the same time.
I recently had to use Gnome after a long time away and I couldn’t put my finger on why I didn’t like it. I remembered happily using it for years and now it was a mess. This essay did a really good job of putting my thoughts into words. As a bonus, his screenshots include files named ‘farts’ which is my go to placeholder and the bemusing “[m]aybe someday our children will live in a post-files world, but that is not our reality.”
The title of the book is Everything But The Burden, as in, non-black people want to take everything from us except the weight of our history.
I occasionally watch an African American content creator on Instagram (@bmotheprince - I don’t know how to link to Instagram and I use someone else’s account so … get off my lawn! :P). His humor is mostly about the generational divide in the US, but it is an occasional window into African American culture. I also periodically listen to F.D Signifier for US cultural deep dives and … again … insights into African American culture. So I was drawn to this article about ‘unc’ which I had derived a definition for from the way these two men use it. I am glad to say I was mostly right. That said, damn that book title hits hard.
This article set me off so badly I debated penning an angry letter to the editor, but had a coffee instead. They profiled three people: a woman who relocated to Tbilisi, which on the face of it is an interesting choice until you realize she is actually from there and had immigrated to America; a digital nomad trying to skip around the world and go through spend/save cycles in different economies; and a man who moved to Mexico with literally no plan and wound up teaching English online and writing for a content mill. He has since returned home broke with the intention of pivoting into insurance.
Only one of these people is even trying to save money. Other than the guy who never found a career path, none seem particularly focused on integrating with where they live and are instead focused on having a luxury lifestyle. The article jumps straight into the idea that they are avoiding US income tax, which not every article needs to explain in detail, but at least don’t short-hand it, and it doesn’t acknowledge any local costs. They’ve chosen not to avail themselves of real retirement savings options or in most cases local healthcare options by refusing to be more than “just visiting.” Additionally, many digital nomads seem to believe they never have local tax obligations (they do) and while the article doesn’t confirm our digital nomad is doing that, the coda on taxation implies it in this case too.
I didn’t leave the US to achieve cheaper living costs. I also didn’t leave over politics. I have worked hard at integrating into Czech society, as much as my laziness about learning the language allows. I participate in the local tax system and the local healthcare system. My quality of life is objectively better than I enjoyed in the US, and I am currently, as I was previously, in tech and earn well. Not every American abroad wants to go back to the US, and the US systems that make that hard are failing Americans who didn’t leave, not solely Americans who did. I am trying hard not to write an essay here and instead focus on my coffee.
Thankfully our household doesn’t get sick too often, but I can promise you I always have to Google, ‘what is Paracetamol’ and ‘what is normal human body temperature in Celsius.’
We could do slow news again. I found this article via a game of Bracket City and it reminded me why I try not to read headlines too frequently. News needs more time to develop and marinate than the 24-second cycle gives us. I used to subscribe to The Week, which publishes a roundup of major events after they have had time to gel. Having a child has made time more precious so I let my subscription lapse, but I still recommend the format if you want your life back from the news cycle.
They don’t produce many articles, which is actually great because what they write is long and goes deeper into a topic than most people would desire, but I strongly recommend putting Low Tech Magazine into your RSS reader. This article was one I thought would be a dud, but it turns out that hand carts are way more interesting than I ever imagined. Did you know they had sails …
My score: 2 out of 8. I mostly got the wrong definition for these British insults. According to the test, I am a plonker. That’ll be “Unc Plonker” to you.
We are happy to announce the general availability of Fedora Asahi Remix 44. This release brings Fedora Linux 44 to Apple Silicon Macs.
Fedora Asahi Remix is developed in close collaboration with the Fedora Asahi SIG and the Asahi Linux project. This release incorporates all of the exciting improvements brought by Fedora Linux 44. Fedora Asahi Remix 44 also retires our vendored Mesa and virglrenderer packages. Users who have not already manually done so will be automatically transitioned to the upstream Mesa and virglrenderer packages provided by the upstream Fedora repositories.
Fedora Asahi Remix offers KDE Plasma 6.6 as our flagship desktop experience, with all of the new and exciting features brought by Fedora KDE Plasma Desktop 44. Plasma Setup replaces the previous Calamares-based setup wizard, providing a Plasma-native experience for user account creation and system setup. Additionally, Plasma Login Manager is now the default greeter and session manager, replacing SDDM. This applies to new installs only; users upgrading from previous versions of Fedora Asahi Remix will not have their configuration changed.
A GNOME variant is also available, featuring GNOME 50, with both desktop variants matching what Fedora Linux offers. Fedora Asahi Remix also provides a Fedora Server variant for server workloads and other types of headless deployments. Finally, we offer a Minimal image for users that wish to build their own experience from the ground up.
You can install Fedora Asahi Remix today by following our installation guide. Existing systems running Fedora Asahi Remix 42 or 43 can be updated following the usual Fedora upgrade process. Upgrades via GNOME’s Software application are unfortunately not supported; either KDE’s Plasma Discover or DNF’s System Upgrade command must be used.
Fedora has released Fedora KDE Plasma Desktop Edition 44 to the public.
The Fedora KDE Plasma Desktop Edition is suitable for many needs. It combines the reliable and trusted Fedora Linux base with the KDE Plasma Desktop environment. It provides a selection of KDE applications that are simple by default, but powerful when needed.
KDE Plasma 6.6
The KDE community makes your life easier with the latest release of KDE Plasma. It builds upon the foundations of Plasma 6 to provide a seamless, friendly, and familiar experience.
Fedora KDE 44 ships with Plasma 6.6.4 featuring:
Custom global theme creation by saving the current theme setup
More options for using the accent color in windows, including tint intensity for window frames
Support for connecting to Wi-Fi networks by scanning QR codes
Per-application volume adjustment from the task manager
New grayscale filter for colorblindness correction
New screen magnifier feature that tracks the mouse pointer
New “Slow keys” and “reduced motion” settings
Spectacle can do OCR scanning of images to capture text
Per-window filtering from screencasts via the menu in the title bar
Beyond just the updates included in KDE Plasma 6.6, there are some major new features with Fedora KDE on Fedora Linux 44.
Fresh installations now use the brand-new Plasma Setup and Plasma Login Manager. These provide a more cohesive and integrated experience from the moment the computer is powered on for the first time. The installation process has been simplified. It now enables you to easily set up a computer with Fedora KDE Plasma Desktop for a friend or a loved one.
The on-screen keyboard uses the new Plasma Keyboard, providing a fresh and future-forward implementation for keyboard input.
Fedora Linux 44 general updates
Some broader changes in Fedora Linux also directly impact Fedora KDE Plasma Desktop Edition, notably:
PackageKit now uses version 5 of the DNF package manager as the backend.
Support for select Qualcomm-based laptops.
The /etc/pki/tls/cert.pem file no longer exists by default. This may impact programs that expect this file to provide the system CA certificates, rather than relying on the lookup behavior built into the cryptographic security libraries.
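For programs that break because of this, one possible workaround (a sketch, assuming Fedora's standard ca-certificates bundle path) is to point OpenSSL at the system bundle explicitly:

```shell
# Hedged sketch: if a program hard-codes /etc/pki/tls/cert.pem, OpenSSL's
# standard SSL_CERT_FILE environment variable can usually point it at the
# system bundle instead (the path below is Fedora's ca-certificates bundle).
if [ ! -e /etc/pki/tls/cert.pem ]; then
    export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt
fi
```

Whether a given program honors SSL_CERT_FILE depends on how it initializes its TLS library, so treat this as a first thing to try, not a guaranteed fix.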
Fedora Ready is ready for Fedora KDE
The Fedora KDE Plasma Desktop 44 edition is fully supported within the Fedora Ready program. Fedora KDE is actively engaging with hardware vendors to support Fedora KDE Plasma Desktop on their devices.
We are pleased to announce that Star Labs offers preinstalled Fedora KDE Plasma Desktop as an option for their portfolio of devices. As makers of computers with an open source ethos at the core of their products, including open source firmware powered by Coreboot, they share many of the principles the Fedora community values. This is a very exciting moment for Fedora KDE and we look forward to deepening our collaboration with Fedora Ready participants and extending to other vendors. If you are a vendor potentially interested in Fedora Ready, please reach out!
I’m happy to announce that we have sealed bootable container images ready for testing for the Fedora Atomic Desktops!
What are sealed bootable container images?
Sealed bootable container images include all the components needed to create a fully verified boot chain, from the firmware to the operating system composefs image. This relies on Secure Boot and thus only supports systems booting with UEFI on x86_64 and aarch64.
The components are:
systemd-boot as bootloader
a Unified Kernel Image (UKI) which includes the Linux kernel, an initrd and the kernel command line
a composefs repository with fs-verity enabled. This is managed by bootc.
Both systemd-boot and the UKI are signed for Secure Boot. The images are test images so the components are not signed with the official keys from Fedora.
The main direct benefit that we will get from this support is that we will be able to enable passwordless disk unlocking using the TPM in a way that will be reasonably secure by default.
We welcome testing and feedback! Please see the list of known issues and report new issues at github.com/travier/fedora-atomic-desktops-sealed. We’ll redirect them as needed to the right upstream projects.
Beware, those are testing images. The root account does not have a password set and sshd is enabled, by default, to make debugging easier. The UKI and systemd-boot are signed for Secure Boot but, since those are test images, they are not signed with the official keys from Fedora. Don’t use those images in production.
Where can I get more details about how this works?
If you want to know more about how sealed images work (i.e. how we make bootable containers, UKI and composefs work together to create a verified boot chain), see the following presentations and documentation:
Fedora Linux 44 has been released! So, let’s see what is included in this new release for the Fedora Atomic Desktop variants (Silverblue, Kinoite, Sway Atomic, Budgie Atomic and COSMIC Atomic).
Changes for all Atomic Desktops
Issue tracker moved to the new Fedora forge
We have moved the cross-variants issue tracker to the new Fedora forge. This is the best place to file issues that impact all variants or to coordinate work across all of them. If you have issues specific to a given desktop environment, then we usually prefer to track them in each SIG’s respective tracker. These are available on the README for the atomic-desktops organization.
Unified documentation, hosted on the new forge
The unified documentation for all Atomic Desktops is finally live! Unfortunately the translations have not been migrated, so we will need help re-translating everything once the translation setup is ready on the new forge. It should be mostly copy/paste from the previous docs, and this time we will only have to translate the docs once and not for every (new) variant.
Some AppImages are still using an old AppImage runtime that relies on FUSE 2 libraries being available on the host. See the Discussion thread for examples on how to check the runtime of an AppImage.
If some of your AppImages do not work on Fedora Atomic Desktops 44, we recommend:
Looking for a Flatpak for the application and giving it another try. Consider helping upstream package their application as a Flatpak.
Reporting the issue upstream so that they are aware that they should use a newer runtime. Consider helping upstream with this as well.
EncFS or CryFS backends for Plasma Vaults are removed
KDE upstream no longer recommends using the EncFS nor CryFS backends for Plasma Vaults, notably because they rely on the FUSE 2 libraries. If you are using one of those backends, you should migrate your data to a new vault using the only maintained backend (gocryptfs). Ideally this should occur before the update to Fedora Linux 44. If you have already updated to Fedora Linux 44 and need access to your data, you can layer the needed packages (cryfs or fuse-encfs) using rpm-ostree install <package>, then migrate your data and finally reset the layers with rpm-ostree reset.
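The recovery path described above can be sketched as the following command sequence (run on an already-updated Fedora Linux 44 system; which backend package you need depends on your old vault):

```shell
# Temporarily layer the old backend so the vault can be opened again
rpm-ostree install fuse-encfs   # or: rpm-ostree install cryfs
systemctl reboot

# ...open the old vault, create a new gocryptfs-backed vault in Plasma,
# and copy your data across...

# Then drop the layered packages again
rpm-ostree reset
systemctl reboot
```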
Dropping compatibility for pkla Polkit rules
Support for the legacy pkla Polkit rules format has been removed. It is unlikely that you were relying on support for those rules as most of the ecosystem has moved on to the new JavaScript based format.
Unified out of the box experience with KDE Plasma Setup (OEM installation)
Thanks to the new Plasma Setup, it is now possible to install the system with Anaconda with minimal configuration and then complete the installation on the first boot by creating a new user and selecting the timezone. This is great when you want to install Fedora Kinoite on a computer and don’t want to set up a user in advance.
As always, I heavily recommend checking them out, especially if you feel like some things are missing from the Fedora Atomic Desktops and you depend on them (NVIDIA drivers, extra media codecs, out of tree kernel drivers, etc.).
What’s Next
Helping us with a few nasty bugs
If you have an interest in contributing to Fedora Atomic Desktops, here are some bugs that we will have to fix in the short term. We would greatly appreciate help with:
Fixing root mount options (atomic-desktops#72): This is a long standing and mostly invisible bug that impacts performance.
Moving away from nss-altfiles (atomic-desktops#108): This is another long standing source of issues that new users regularly face.
A lot of work is happening to make the transition to Bootable Containers as smooth as possible for our existing users. You can look at the road map for this transition at atomic-desktops#26.
Fedora Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to rebase to Fedora Linux 44 on your Fedora Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.
Update your existing system
Prior to actually doing the rebase to Fedora Linux 44, you should apply any pending updates. Enter the following in the terminal:
$ rpm-ostree update
or install updates through GNOME Software and reboot.
Note
rpm-ostree is the underlying atomic technology that all the Fedora Atomic Desktops use. The techniques described here for Silverblue will apply to all of them with proper modifications for the appropriate desktop.
Rebasing using GNOME Software
GNOME Software shows you that there is a new version of Fedora Linux available on the Updates screen.
The first thing to do is to download the new image, so select the Download button. This will take some time. When it is done, you will see that the update is ready to install.
Select the Restart & Upgrade button. This step will take only a few moments and the computer will restart when the update has completed. After the restart you will end up in a new and shiny release of Fedora Linux 44. Easy, isn’t it?
Rebasing using terminal
If you prefer to do everything in a terminal, then this part of the guide is for you.
Rebasing to Fedora Linux 44 using the terminal is easy. First, check if the 44 branch is available:
$ ostree remote refs fedora
You should see the following in the output:
fedora:fedora/44/x86_64/silverblue
If you want to pin the current deployment (meaning that this deployment will stay as an option in GRUB until you remove it), you can do this by running this command:
# 0 is entry position in rpm-ostree status
$ sudo ostree admin pin 0
To remove the pinned deployment use the following command:
# 2 is entry position in rpm-ostree status
$ sudo ostree admin pin --unpin 2
Next, rebase your system to the Fedora Linux 44 branch.
$ rpm-ostree rebase fedora:fedora/44/x86_64/silverblue
Finally, the last thing to do is restart your computer and boot to Fedora Linux 44.
How to roll back
If anything bad happens (for instance, if you can’t boot to Fedora Linux 44 at all) it’s easy to go back. At boot time, pick the entry in the GRUB menu for the version prior to Fedora Linux 44 and your system will start in that previous version rather than Fedora Linux 44. If you don’t see the GRUB menu, try to press ESC during boot. To make the change to the previous version permanent, use the following command:
$ rpm-ostree rollback
That’s it. Now you know how to rebase Fedora Silverblue to Fedora Linux 44 and roll back. So why not do it today?
FAQ
Because similar questions appear in the comments on each blog post about rebasing to a newer version of Silverblue, I will try to answer them in this section.
Question: Can I skip versions during a rebase of Fedora Linux? For example from Fedora Silverblue 41 to Fedora Silverblue 44?
Answer: Although it may sometimes be possible to skip versions during a rebase, it is not recommended. You should always upgrade one version at a time (41->42->43->44, for example) to avoid unnecessary errors.
Question: I have rpm-fusion layered and I get errors during rebase. How should I do the rebase?
Answer: If you have rpm-fusion layered on your Silverblue installation, you should do the following before rebase:
After doing this you can follow the guide in this blog post.
Question: Could this guide be used for other ostree editions (Fedora Atomic Desktops) as well like Kinoite, Sericea (Sway Atomic), Onyx (Budgie Atomic),…?
Answer: Yes, you can follow the Rebasing using the terminal part of this guide for every Fedora Atomic Desktop. Just use the corresponding branch. For example, for Kinoite use fedora:fedora/44/x86_64/kinoite.
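For instance, the terminal rebase step from above with the Kinoite branch substituted would be:

```shell
$ rpm-ostree rebase fedora:fedora/44/x86_64/kinoite
```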
This article highlights a few noteworthy changes in the latest release of Fedora Workstation that we think you will love. Upgrade today from the official website, or upgrade your existing install using GNOME Software or through the terminal with dnf system-upgrade.
GNOME 50
Fedora Linux 44 Workstation ships with the latest GNOME release, GNOME 50. This comes with a long list of refinements to your desktop, including everything from accessibility, to color management and remote desktop.
As part of the Digital Wellbeing initiative, new native Parental Controls let you set screen time limits and bedtimes directly from Settings.
Many of the applications that are installed by default on the Fedora Workstation have also seen improvements, from the Document Viewer to the File Manager and the Calendar.
To learn more about these and other changes, you can read the GNOME 50 release notes.
I’m excited to announce that Fedora Linux 44 is here! Keep reading to discover highlights of Fedora Linux 44, or if you are ready, just jump right in and give Fedora Linux 44 a try!
Thanks to everyone who helped!
Thank you and congrats to everyone who has contributed to this release. And thanks to everyone who showed up for the virtual release party last Friday. We celebrated a little early this year, just after the go/no-go meeting made the release official. If you weren’t able to join us live, you can watch the recording and hear about some of the great work from the contributors involved.
Looking to upgrade?
If you have an existing system, Upgrading Fedora Linux to a New Release is easy. In most cases, it’s not very different from just rebooting for regular updates, except you’ll have a little more time to grab a coffee.
As usual with Fedora Linux, there are just too many individual changes and improvements to go over in detail. You’ll want to take a look at the release notes for that.
Notable User Visible Changes
Anaconda
For those of you installing fresh Fedora Linux 44 Spins, you may notice a change in how Anaconda handles network devices. Anaconda now only creates network profiles for devices configured during installation (by boot options, kickstart, or interactively in UI) instead of providing default profiles for all devices. This change will simplify post-installation network configuration for users who need to customize after installation.
Workstation
Fedora Linux 44 Workstation ships with the latest GNOME release, GNOME 50. This comes with a long list of refinements to your desktop, including everything from accessibility to color management and remote desktop. Many of the applications that are installed by default on Fedora Workstation have also seen improvements, from Document Viewer to File Manager and Calendar. To learn more about these and other changes, you can read the GNOME 50 release notes.
KDE Plasma Desktop
If you are a KDE user, you should also notice a couple of very obvious changes. Fedora KDE Plasma Desktop 44 is based on the latest Plasma 6.6, which includes the new Plasma Login Manager and Plasma Setup to provide a more cohesive and integrated experience from the moment the computer is powered on for the first time. The installation process has been simplified, enabling you to easily set up a computer with Fedora KDE Plasma Desktop for a friend or a loved one.
Plumbing Upgrades
Beyond the user-visible changes, there are some important plumbing changes users should be aware of.
OpenSSL Cert File Handling Improvements
The loading time of OpenSSL has been improved by making use of directory-hash support for ca-certificates. This improvement required changes to where some certificate bundles are stored on the filesystem. You can read the specific Change details for more information.
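To illustrate the directory-hash scheme this change relies on: OpenSSL can look certificates up in a CApath directory via a symlink named after a hash of the subject name (<hash>.0), opening just the file it needs instead of parsing one large bundle. A quick sketch with a throwaway certificate (scratch paths, not Fedora’s actual layout):

```shell
# Create a throwaway self-signed certificate...
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# ...and print the subject hash OpenSSL would use as the symlink name
# in a hashed certificate directory.
openssl x509 -in /tmp/demo-cert.pem -noout -subject_hash
```

The `openssl rehash` subcommand maintains those symlinks for a directory of certificates; on Fedora the ca-certificates tooling takes care of the system store.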
The MariaDB default version is now 11.8
MariaDB packages use a versioned package layout, which allows Fedora to deliver both mariadb-10.11 and mariadb-11.8 to users. The “distribution default” unversioned MariaDB packages now install the 11.8 versions in Fedora Linux 44. Users upgrading to Fedora Linux 44 won’t notice the change in the default. For new users installing MariaDB for the first time, unless you specify the version, you’ll now get 11.8 by default.
Wine NTSYNC
The NTSYNC kernel module is now enabled by select packages (notably Wine and Steam) via a package recommendation, which can improve compatibility and performance when running Windows applications, especially games. Installing a package that recommends wine-ntsync automatically configures NTSYNC on subsequent boots, so users don’t have to enable it manually.
Fedora Cloud boot partition using Btrfs
The /boot partition has been replaced with a Btrfs subvolume for Fedora Cloud images that support it. This results in better space utilization and smaller images.
If you hit a snag
If you run into a problem, visit our Ask Fedora user support forum. This forum includes a category where we collect common issues and solutions or work-arounds.
Just drop by and say “hello”
Drop by our “virtual watercooler” on Fedora Discussion and join a conversation, share something interesting, and introduce yourself. We’re always glad to see new people!
Although OpenSSL 4.0 was released just two weeks ago, the syslog-ng project has already received a GitHub issue complaining that we do not support it. So, before we allocate too much effort to it: what should we expect?
This raises the question: if it is not an LTS release, can we stay on version 3.x and skip 4.x altogether? When will Linux distributions start using it?
Looking at Repology, there are already a few places where OpenSSL 4.0 is available. This includes Gentoo, the community where the GitHub issue originated, and also various FreeBSD ports. The current list is available at: https://repology.org/project/openssl/versions
Fedora is planning to use OpenSSL 4.0 as default starting from the next release: https://fedoraproject.org/wiki/Changes/OpenSSL40 However, OpenSSL 3.x will most likely stay supported for backwards compatibility.
I am also curious if there are any other projects which have added support for OpenSSL 4.0. If so, then what are your experiences? Was porting your code to use OpenSSL 4.0 difficult?
I am all in for supporting the latest technologies, but currently, even if we have an open request for OpenSSL 4.0 support, I do not feel that I have enough information to prioritize its development.
Smoke testing – You want to know if your system commands actually work, not just when you run them the way the docs say, but when users (or their scripts) feed them garbage.
AI is excellent at generating potential edge cases, and tracking systems are already all too eager to collect new tickets. I’m being careful not to dump every AI finding into Bugzilla; I don’t want to clutter the backlog and waste developer time on theoretical bugs. Or should I?
Plus, segfaults don’t lie – either the system crashed or it didn’t, and those are the issues that actually deserve the ticket.
Throwing Random Arguments at System Binaries Until They Crash
Script to do the work:
A pretty straightforward bash script, vibing with AI-generated chaos.
Grab all binaries from /usr/bin and /usr/sbin
Parse --help for flags (--whatever, -x, you know the drill)
Pick random combos of those flags (1-4 per run)
Feed them garbage: broken JSON/XML, binary junk, path traversal attempts, format strings, absurdly long lines
Only logs actual crashes – SIGSEGV, SIGABRT, SIGILL, SIGBUS. Exit code 1 from bad args gets ignored.
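The crash-only filtering can be sketched like this (the `classify` helper is my own name, not from the script; the numbers are Linux signal numbers, where a child killed by signal N exits with status 128+N):

```shell
# Hedged sketch of crash-only logging: exit statuses above 128 mean the
# process died from a signal, while ordinary bad-args failures (exit 1,
# exit 2, ...) are ignored.
classify() {
    "$@" >/dev/null 2>&1
    rc=$?
    case "$rc" in
        139) echo "SIGSEGV" ;;            # 128 + 11
        134) echo "SIGABRT" ;;            # 128 + 6
        132) echo "SIGILL"  ;;            # 128 + 4
        135) echo "SIGBUS"  ;;            # 128 + 7
        0)   echo "ok" ;;
        *)   echo "ignored (exit $rc)" ;; # e.g. "invalid option"
    esac
}

classify sh -c 'kill -11 $$'    # a child that dies from SIGSEGV
classify sh -c 'exit 1'         # ordinary failure from bad arguments
```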
Core logic looks like this:
# Extract flags from --help
flags=$(timeout 3s "$bin" --help 2>&1 |
grep -aoE -e '--[a-zA-Z0-9_-]+' -e '-[a-zA-Z]' |
grep -avE 'help|version|usage')
# Pick random flags (1-4 of them)
chosen=$(echo "$flags" | shuf -n $((1 + RANDOM % 4)))
# Add a random test file (bad.json, random.bin, longline.txt, ...)
fuzz_file="$WORKSPACE/$(shuf -n1 -e bad.json random.bin longline.txt)"
# Run it
timeout 5s "$bin" $chosen $fuzz_file
The script skips the obvious no-go zones – package managers, rm, network tools, editors. I’m glad to see the script finish with the machine still answering.
Look, these are edge cases. Nobody’s actually running edgepaint --wtf malformed.json in prod. But segfaults are segfaults – the binary should bail with “invalid option” or “bad input”, not dump core.
Now What?
So I’ve got a pile of crashes. Some in critical components. All reproducible.
File bugs for all of them? That’s a lot of BZ tickets for “yes hm this crashes if you feed it random garbage with weird flags”. Developers have better things to do.
Ignore them? They’re real bugs. And some of these are in grub2 and perl – not exactly throwaway packages.
Some time ago I used a KDE feature called “Run a command”, which ran a command when an event triggered. For me, it triggered when a calendar event fired and used Piper TTS to read the event to me out loud. A small popup and a pling don’t work for me.
I tried to get the feature back into KDE, but since the merge request isn’t going anywhere and people don’t give details on how to implement it correctly, I wrote Sigrun. It is named after a Norse Valkyrie and is short for Signal Run.
It is a systemd service running as a user and listening for DBus signals. Once it sees a configured one, it runs its command. The desktop doesn’t matter.
Here is the rule that reads my calendar reminders aloud via kde-tts.py:
I’ve not updated this space in quite some time as I am still discovering new opportunities and
obstacles within the problem space. Plus, my priorities tend to shift based on whoever is around
and offering to help, which is noise I try to avoid propagating.
However, I do generally try to produce weekly(-ish) status updates in the Matrix
channel, and I’ve decided to start providing something
similar here since I do see some hits against the RSS feed.
Sorry for the interruption, but if you’re aware of any Sr. / Staff SRE or SWE roles, especially
those with a focus on open data and/or open source, please reach out.
Alternatively, if you might be able to contribute to any of the hosting costs for Hatlas (roughly
$100/mo), I have several options available:
Lakehouse architecture and specifically Iceberg is what drew me to
launch this project in the first place. This data model is simultaneously far more flexible than
the alternatives while also being at least one order of magnitude cheaper – likely several. The
cost savings aspect is especially what drew me in this direction, because this puts medium- to
large-scale analytics within the budget of a hobbyist or cost-conscious non-profit.
I still believe in this direction, but I’ve had to pause these efforts temporarily because
Apache Polaris (which is not the only Iceberg REST solution, but
certainly the most prominent and the most promising) only fully supported AWS S3, not any other “S3
compatible” storage. AWS S3 is terribly expensive compared to basically all other options, so
non-AWS support was and continues to be critical to our story.
I launched the Hatlas lakehouse shortly after support for non-AWS S3 buckets landed on the Polaris
main branch. This actually worked just fine with pyiceberg, which
is what our POC Jupyter notebook used. However, I had perhaps gotten ahead of myself because I
quickly realized that no other client / query engine knew how to handle this configuration. This
meant that I had exactly one tool with which to manage the Lakehouse, including tasks such as data
maintenance, which pyiceberg wasn’t really built for.
We now had the inverse of all of our design goals: we had a single working client, and our costs
would quickly rise without a process to clean out stale data. Meanwhile, we had experienced
analysts who were waiting in the wings and eager to get moving, and they couldn’t connect using
their tools of choice.
I made the difficult (for me) decision to temporarily pull the plug on the Iceberg approach and
adopt an interim solution which is far more traditional and will deliver more immediate value.
We had two experienced analysts who were basically sitting idle, both of whom knew Postgres well,
and our source data is in Postgres anyway, albeit very poorly-optimized. I decided to accept the
idea of doing traditional “everything in Postgres” analytics for now.
I granted trusted members of FDWG direct access to the Hatlas Datanommer replica, but first we
needed to perform some optimizations in order to make this work feasible. Upstream has json blobs
stored as text, which meant that every query had to json-parse every single row, which took
basically forever. Additionally we have a “topic” column which fits very naturally with Postgres'
ltree type, and almost every query includes a
topic WHERE clause.
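As a sketch of what that looks like (table and column names here are assumptions, not the real Datanommer schema), the topic column can be typed as ltree and indexed so subtree lookups in a WHERE clause can use the index:

```sql
-- Hedged sketch; real table/column names may differ.
CREATE EXTENSION IF NOT EXISTS ltree;
CREATE INDEX messages_topic_gist ON messages USING gist (topic);

-- "<@" matches a topic and everything beneath it in the hierarchy.
SELECT count(*) FROM messages
 WHERE topic <@ 'org.fedoraproject.prod.bodhi';
```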
It took me a few weeks to figure out the best approach for this because A) I’m not a Postgres
expert, and B) nothing with this dataset is easy. For example, some of our JSON blobs are larger
than the hard-coded maximum size that Postgres can parse before it gives up. We also have blobs
that contain JSON nulls, which Postgres blows up on. You have to handle these errors and others in
several different ways in order to have a process that can work through the whole dataset, plus I
wanted to ensure that our approach would continue to work as we ingest new data.
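A minimal sketch of that kind of defensive conversion (names are hypothetical; pg_input_is_valid() requires PostgreSQL 16+), which skips the rows whose blobs Postgres cannot parse rather than aborting the whole run:

```sql
-- Hedged sketch; real table/column names may differ.
ALTER TABLE messages ADD COLUMN msg_jsonb jsonb;

UPDATE messages
   SET msg_jsonb = msg::jsonb
 WHERE pg_input_is_valid(msg, 'jsonb');   -- skip unparseable blobs
```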
Once I had a process, I had to run it across the entire dataset. And this is where you become a
Postgres expert, because getting this to run well required knowledge of most of the Postgres
tunables plus btrfs optimizations. I was figuring this out as I went, so overall the full
conversion for both data types took an entire calendar month! This is because the database is large
enough that it doesn’t fit on any nvme or ssd storage available to me, and you need to really
understand Postgres internals in order to optimize for HDD head seek times. I think that if I were
to run this again it would take significantly less time with the tuning applied, and it would
probably only take hours if I had a large nvme volume. I guess I can say I’m a DBA now though!
The optimizations turned out to be worth it: queries against json fields are now 36x faster, and
queries against topic are now 2x faster! We finally have a database that is feasible for analytics!
(Even though it’s far from optimal.)
This puts us into position to start considering how we want to transform this data, and we can start
building these transformation pipelines with tools that will translate directly to Iceberg later
e.g. SQLMesh.
Unfortunately, this work was completed just in time for both of our “rockstar” analysts to pull back
from their Fedora contributions :(. One had a career change and the other had a major life change,
all of which is totally understandable. Hopefully we’ll see their return as they settle into their
new normal, and hopefully we’ll continue to see new curious onlookers joining our Matrix channel.
I’m working upstream with Fedora Infra on several critical aspects of this work. For example:
Upstream Datanommer (our main dataset) is now producing nightly exports so that our replica can stay in sync.
I’m also working to get ssh git access enabled on Fedora Forge, which is Fedora’s new ForgeJo-based home, as that is critical to my workflow and it is the only thing preventing me from moving the Hatlas code repos out of my personal Codeberg.
We finally have a simple and reliable path to produce a canonical dataset in Parquet (via
pg_parquet). This is available for FDWG members.
Some test queries against a local dataset with DuckDB are astoundingly fast – sometimes 1000x
faster than Postgres. To me, this further validates the Lakehouse path.
I’ve designed and implemented a PII-protection scheme (currently only implemented in Postgres but
applicable elsewhere), whereby our transformations from “barely-queryable data” into “ideal
analytical data” also separate out identifiers (PII) into privileged tables.
This means that as our transformation pipelines improve, we will be able to implement a very low
bar for access to analytics work while retaining a high minimum level of trust for access to
PII.
More details in a future update.
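The core idea of the separation can be sketched in a few lines of Python; the field names and key scheme here are hypothetical illustrations, not the actual Hatlas implementation:

```python
import uuid

# Hypothetical sketch of PII separation: each record is split into an
# analytics row (no identifiers) and a privileged PII row, joined by an
# opaque key. Field names here are invented for illustration.
PII_FIELDS = {"username", "email"}

def split_record(record):
    key = str(uuid.uuid4())
    analytics = {k: v for k, v in record.items() if k not in PII_FIELDS}
    pii = {k: v for k, v in record.items() if k in PII_FIELDS}
    analytics["actor_key"] = key
    pii["actor_key"] = key
    return analytics, pii

analytics, pii = split_record(
    {"topic": "org.fedoraproject.prod.git.receive",
     "username": "alice", "email": "alice@example.org", "count": 3}
)
assert "username" not in analytics
assert pii["username"] == "alice"
assert analytics["actor_key"] == pii["actor_key"]
```

Analysts query only the analytics tables; resolving an actor_key back to a person requires access to the privileged tables.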
I’ve started creating a data dictionary to capture all of my / our accumulated knowledge about Datanommer’s data.
These facts exist elsewhere but are spread across many different repos and technologies, and
even if we could somehow ingest them all we would still have an incomplete understanding of our
data.
Building this dictionary is laborious work, and slightly fragile since updates elsewhere will
need to be manually propagated to the dictionary. The upshot is that we can build tools that
fully understand our dataset, which is needed for:
I’ve started building some prototype data transformation pipelines in SQLMesh.
These accomplish the transformations I think we need, although the current pipelines were built
by hand as a POC.
I’m working on a data-driven SQLMesh pipeline generator, which will (re)build all
pipelines from the data dictionary. It should then be trivial to orchestrate these pipelines
based on the same dictionary.
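As a rough sketch of what "data-driven" means here (the dictionary shape and SQL below are invented for illustration, not the real Hatlas formats): each dictionary entry expands mechanically into a model definition.

```python
# Invented-for-illustration dictionary shape: topic -> columns to extract.
DICTIONARY = {
    "git.receive": {"columns": ["repo", "agent", "timestamp"]},
    "bodhi.update.request": {"columns": ["update_id", "agent", "timestamp"]},
}

def generate_model(topic, spec):
    """Expand one dictionary entry into SQL model text for that topic."""
    cols = ",\n    ".join(f"msg ->> '{c}' AS {c}" for c in spec["columns"])
    return (
        f"-- model for topic {topic}\n"
        f"SELECT\n    {cols}\n"
        f"FROM datanommer.messages\n"
        f"WHERE topic = '{topic}';\n"
    )

models = {t: generate_model(t, s) for t, s in DICTIONARY.items()}
assert "WHERE topic = 'git.receive'" in models["git.receive"]
```

The appeal is that adding or correcting a dictionary entry regenerates the corresponding pipeline, so the dictionary stays the single source of truth.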
The data dictionary includes a rudimentary taxonomy of user actions.
This is nothing short of an attempt to classify all human activity on Fedora, in part to ensure
that all such activities are actually being captured.
I’m hoping to eventually run my version past the academic-types in this space such as those
within CHAOSS, as any taxonomy will have a tendency towards being
highly-subjective and localized to one’s personal view on the project.
I’m foolishly hopeful that the collaboration necessary to get this formalized will actually
act as a form of knowledge capture to help people see the Fedora project more holistically.
(Including myself!)
We have many different use cases for analyzing HTTP traffic: the docs team wants to know which docs
are most- / least-popular, infra wants to know which ASNs are driving our DDoS traffic, etc.
I made a POC of ingesting apache logs
into Clickhouse and enriching them with geospatial data while also pseudonymizing them, and then
analyzing with Grafana.
The results were promising, but the data sets here are large enough that hosting hot data on
Hatlas would potentially conflict with our other pursuits. I’ve dropped this for now, but I plan
to return if/when we get our Lakehouse back on track.
I’ve continued to flesh out our bare-metal Kubernetes deployment with an eye towards multi-tenant
access.
Much of the tech debt from “this is my throwaway dev environment” has been cleaned up, and we
now have basic security measures in place such as NetworkPolicies and SSO.
The cleaned-up infra code helped me migrate from OVHCloud to Hetzner to reduce costs by about 40%.
The Docker work to upgrade and run our Datanommer replica is now shared and documented.
I’m working on refactoring my infra-as-code (e.g. ArgoCD repos) further so that anyone in FDWG
will be able to make a PR against our analytics tooling.
The goal is to fully open up all Hatlas infra code and throw open the gates to FDWG as a
project bigger than myself.
Polaris is due for a revisit.
When I paused the Polaris POC, v1.3 was still unreleased. That version finally came out while
I was still in the depths of the Postgres optimizations, and v1.4 just came out a few days
ago. Time to try again!
Since I’ve not received any traction from Red Hat Legal, I’ve asked my favorite non-lawyer, Claude
Opus, to review our GDPR stance.
In short, the broad strokes of what we’re doing are well within GDPR but we have some policy
updates and technical controls which need to be freshened up since Fedora last made its big
GDPR push in 2018.
These are WIP both upstream in coordination with Fedora leadership and within Hatlas.
The biggest user-visible portion of this is that we need to enact an access gate for FDWG
analysts.
This is what I’m currently focused on, both at the policy and infra level, and I have not
updated any of the data on Hatlas since receiving this advice.
This was … long. Sorry about that. I’m impressed that you actually read this! I will try
to keep future updates much shorter. This means that they will be less concrete / more tentative in
tone. I need to become ok with that :)
While working on the new git signing feature for
tumpa-cli I noticed that some of
the commits could not be verified. For a moment I freaked out and then thought it
must be a problem in my code. But I could not dig deep enough on my own. Opus 4.7 helped me
find the exact commit in git's history and a reproducer. I reported the issue to the
maintainers, and they are working on a fix.
The byte sequence \xc2\xa7, aka §, was the cause in my case.
                        msg.txt body    sign stdin (tee'd)   stored commit body   verify
git 2.43 (host)         ... 20 a7 0a    ... 20 c2 a7 0a      ... 20 c2 a7 0a      OK
git 2.53 (CI, docker)   ... 20 a7 0a    ... 20 a7 0a         ... 20 c2 a7 0a      BAD
git 2.43 transcoded the message to UTF-8 BEFORE calling the signer;
signer and storage saw the same bytes (c2 a7). git 2.53 hands the
signer the RAW bytes (a7) and transcodes only on the way to the
commit object (c2 a7). The invariant "bytes fed to gpg.program at
sign time equal the bytes a verifier sees when it reads the commit
back" is broken.
Setting git config i18n.commitEncoding iso-8859-1 is supposed to be the configuration to use
when commit messages contain non-UTF-8 characters. But I never knew about this setting
before I found the bug.
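The one-byte-vs-two-byte difference is easy to see in Python; this only illustrates the byte sequences involved, not git's actual code path:

```python
# "§" is U+00A7: one byte (a7) in ISO-8859-1/Latin-1, two bytes (c2 a7) in UTF-8.
msg = "msg body with \u00a7\n"

raw = msg.encode("latin-1")    # what git 2.53 hands the signer: ... 20 a7 0a
stored = msg.encode("utf-8")   # what lands in the commit object: ... 20 c2 a7 0a

assert raw.endswith(b"\x20\xa7\x0a")
assert stored.endswith(b"\x20\xc2\xa7\x0a")

# The signer signed `raw`, but a verifier reads `stored` back from the
# commit, so the signed bytes no longer match and verification fails.
assert raw != stored
```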
I want to thank my friends at Anthropic for letting me use the tools and
technology to keep building.
If you work with patches and git am, then you’re probably used to seeing patches fail to apply. For example:
$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
error: patch failed: gio/gfileattribute.c:166
error: gio/gfileattribute.c: patch does not apply
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
This is sad and frustrating because the entire patch has failed, and now you have to apply the entire thing manually. That is no good.
Here is the solution, which I wish I had learned long ago:
$ git config --global am.threeWay true
This enables three-way merge conflict resolution, same as if you were using git cherry-pick or git merge. For example:
$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
Using index info to reconstruct a base tree...
M gio/gfileattribute.c
Falling back to patching base and 3-way merge...
Auto-merging gio/gfileattribute.c
CONFLICT (content): Merge conflict in gio/gfileattribute.c
error: Failed to merge in the changes.
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
Now you have merge conflicts, which you can handle as usual. This seems like a better default for pretty much everybody, so if you use git am, you should probably enable it.
I’ve no doubt that many readers will have known about this already, but it’s new to me, and it makes me happy, so I wanted to share. You’re welcome, Internet!
Motivation: dealing with multiple Toolbox containers¶
Lately, I've been getting annoyed by my current Bash prompt offering me a poor
UX when dealing with multiple Toolbox containers.
The prompt lacked crucial information: to which of the running containers does a
given shell belong?
I did a quick search to see if there was an easy fix I was missing, but it turned
out there is a long-standing desire to improve Toolbox's UX in this respect, and
multiple approaches have been discussed/tried. Here are some relevant tickets:
Discovering the old and new version of Bash Color Prompt¶
After looking around on how to update my Bash prompt to become
"container name"-aware, I came across Fedora's shell-color-prompt package
which was conveniently just a dnf install bash-color-prompt away (strangely,
the source package is named shell-color-prompt while the binary package is
named bash-color-prompt, see also RHBZ #2291024).
My attempts at configuring the Bash prompt to be "container name"-aware with the
help of shell-color-prompt didn't look very promising.
I had a little epiphany when discovering that shell-color-prompt's maintainer,
Jens Petersen, recently wrote a replacement for it: namely Bash Color Prompt
(bcp). Jens describes it as having a cleaner declarative approach for creating
one's custom Bash prompt.
It worked and its declarative approach at creating a custom Bash prompt was
really easy to follow and tailor to my needs.
Until the new version of Bash Color Prompt (bcp) is packaged in Fedora (and other
distributions), a simple way to install it is to grab the bash-color-prompt.sh
file directly from its GitHub repository and put it somewhere in your home
directory.
Afterwards, just source and configure it in your .bashrc file. Here is how
I've done it:
# Use the new Bash Color Prompt (bcp) by Jens Petersen (Red Hat) to handle PS1.
# NOTE: Temporarily, I've just copied the script from:
# https://github.com/juhp/bash-color-prompt/blob/main/bash-color-prompt.sh
if [ -f $HOME/bash-color-prompt.sh ]; then
    source $HOME/bash-color-prompt.sh
fi

# Configure bcp.
bcp_layout() {
    local exit_code=$1

    # hexagon
    bcp_container

    # opening [
    bcp_append "["

    # user@host or user@container(host)
    local user_color="green"
    if [[ $EUID -eq 0 ]]; then user_color="red"; fi
    local machine="\h"
    if [ -f /run/.containerenv ]; then
        container_name=$(grep -oP '(?<=name=")[^"]+' /run/.containerenv)
        machine="$container_name(\h)"
    fi
    bcp_append "\u@$machine " "$user_color;bold"
    bcp_title "\u@$machine:\w"

    # directory
    bcp_append "\w" "blue"

    # git status
    bcp_git_branch " " "magenta" "yellow"

    # status indicator
    if [[ $exit_code -ne 0 ]]; then
        bcp_append " ✘$exit_code" "red;bold"
    fi

    # actual prompt char
    bcp_append "]\$ " "default"
}

# Initialize bcp.
bcp_init
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora project.
Week: 20 – 24 April 2026
Fedora Infrastructure
This team is taking care of day to day business regarding Fedora Infrastructure. It’s responsible for services running in Fedora infrastructure. Ticket tracker
[Badges/Outreachy] refactor: Extract duplicated search dropdowns and error handling into reusable components [Suggested]
[Badges/Outreachy] Refactor: Replace hardcoded email domain with config variable [Approved][Resolved]
[Badges/Outreachy] cleanup: remove dead template infrastructure from app.py [Suggested]
[Badges/Outreachy] fix: add missing session.commit() in opt_out [Approved]
[Badges/Outreachy] Add get_persons_by_nickname with pagination to TahrirDatabase [Rejected]
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure. It’s responsible for services running in CentOS Infrastructure and CentOS Stream. CentOS ticket tracker CentOS Stream ticket tracker
This team is taking care of day to day business regarding Fedora releases. It’s responsible for releases, retirement process of packages and package builds. Ticket tracker
Fedora 44 Final release preparation
RISC-V
This is the summary of the work done regarding the RISC-V architecture in Fedora.
F44 rebuild: we’re halfway through, getting through about 300 builds/day. (NB: This is still pretty decent, given the reduced number of builders we have. Fedora also has a non-trivial number of ‘noarch’ packages; they can get imported into RISC-V Koji without a rebuild.)
Tried out upcoming server hardware called the SpacemiT K3. I got 24h remote access (with some limitations). Uploaded some basic data. Ran an initial benchmark of building ‘binutils’:
The K3 is ~6.5x faster at compiling ‘binutils’ compared to our current workhorse, the P550. (NB: this is not a 100% apples-to-apples comparison, as the P550 is on Fedora, while the K3 is on some FrankenLinux, so the default compiler flags from the host differ.)
Started a thread with a few folks on a backup plan to improve the reliability of builder hardware. Roughly: until server-grade hardware is widely accessible, see if we can get a few more of the current “workhorse” machines (SiFive P550) and make them available somewhere so that they can be easily hooked up to RISC-V Koji.
Conferences: RISC-V EU Summit schedule preparation is done.
QE
This team is taking care of quality of Fedora. Maintaining CI, organizing test days and keeping an eye on overall quality of Fedora releases.
Rmdepcheck (replacement for rpmdeplint repoclosure) improvements: swapped out the core implementation from XML parsing to clever dnf commands to make it simpler, more robust, faster, and work properly on EPEL packages/updates
Forgejo
This team is working on introduction of https://forge.fedoraproject.org to Fedora and migration of repositories from pagure.io.
[Forgejo] Participated in the Fedora Forge Sprint Planning meeting call
[Forgejo] Strategized the collaboration on the private issues feature inclusion
A bit of a mixed bag today. Good reads are the Agents and the Era of Overproduction and AI reshaping values at Enode blog posts.
Podcast-wise, give Driverless World and the Pedro Sánchez episode a listen.
Quote of the Week
“It’s like the free puppy,” I continue. “It’s not the upfront capital that kills you, it’s the operations and maintenance on the back end.”
Before I knew it, I found myself already at the second day of the FOSSAsia 2026 conference on 10th March 2026 after a rather eventful previous day. While I did plan to wake up a little later, I realized that I had to prepare for my presentation on the Fedora Badges Revamp Project that was scheduled later that day. This was the only proposal that got selected, and the other one on the Fedora Forgejo Migration Project did not make it, so I had to ensure that I was well prepared for this. After some rounds of talk rehearsals and some quick bites of breakfast, Samyak Jain and I exited Lumen Bangkok Udomsuk Hotel to be greeted by comparatively cooler weather on that day. Since it was more of the same choices on the breakfast menu, we were able to leave the hotel as early as 0945am Indochina Time. Climbing the long escalator to finally make it to the FOSSAsia 2026 event venue, we noticed just how much the community booth layouts had changed that day. For instance, the GNOME Foundation community booth was now positioned beside those of the Debian Project and the TeaLinuxOS Project.
Manifest #1
I had mixed feelings about this: while it did allow the corridor to be widened, the space for volunteer staffing at the community booths of the GNOME Foundation, Debian Project, and TeaLinuxOS Project was severely restricted. This meant that it was quite a struggle for folks who had to move away from or into the booth locations, as that required the folks around them to be moved as well. Just like the day before, we decided to assist Aaditya Singh with booth operations, all while trying to finish off the last hundred Fedora Project stickers that we had saved up from our participation at DevConf.IN 2026. While we had previously signed up for the ExpressVPN-sponsored FOSSAsia hackathon on Internet Security Development Using Artificial Intelligence, we ultimately decided to give away our designated slot to other younger participants who could not register in time. We discussed just how important this hackathon participation was for budding folks who were getting started with free and open source software, but not so much for us, since we have been around the scene for a long while now.
Manifest #2
This opened us up further for many impromptu conversations, ideation discussions, booth visits, and of course, scheduled presentations. After leaving Samyak at the hallway track, I headed into a newer arrangement of community booths in the second corridor. Since my work laptop had malfunctioning cooling, which made screeching loud noises, I had to use his laptop to deliver my presentation later that day. Not only did I ensure that I downloaded my slide deck and speaker notes onto his computing device well in advance, but I also took care to avoid display inactivity suspensions and ensure an ample laptop battery charge. After a quick round through the community booths, I headed into the hall where the ExpressVPN-sponsored FOSSAsia hackathon was taking place, purely out of curiosity. Informing one of the co-located volunteers about my involvement as a visitor and not as a participant, they allowed me into the room to chat with the fellow participants. Since the hackathon was duration gated, I took extra care to curb my nosiness and allow the teams to work their magic.
Manifest #3
Out of all the participating groups that I interacted with in the competition hall, my conversations with the likes of Saksham Sirohi and Arnav Angarkar about their ideas stayed with me. They knew how they had to limit their actual implementation and were focused on delivering an MVP (Minimum Viable Product) that could be expanded upon. I also took some time to have a quick conversation with ExpressVPN employees who were serving as competition moderators there to understand what they look for in a certain implementation. Getting myself a cup of boba tea (and skipping it because I did not like it), I met up with the likes of Ajinkya R. and Dakshita Thakkar at the hallway track. On returning to the collective, I decided to lend my assistance to Pongsakorn S. at the KDE e.V. community booth as well, besides helping Aaditya at the GNOME Foundation community booth. I learned that he was using OpenSUSE Leap on his personal laptop and was interested in RPM packaging, while I was placing some of the last fifty Fedora Project stickers on the booth table.
Manifest #4
In contrast to the unoccupied booths from yesterday, it was endearing to see how the likes of Aaditya and Pongsakorn stood their ground at their respective booths, even when the footfall on the second day was noticeably smaller than the day before. Amidst our conversations, I connected them with each other, and Aaditya even took the chance to showcase the GNOME Foundation community quiz application that he had been working on. Following the community quiz activity idea from the Fedora Project Community Presence at DevConf.IN 2026, he wanted to use the remaining GNOME-styled tee-shirts as prizes for folks who attempted the quiz and got all the answers right at the hardest difficulty. Needless to state explicitly, I gave the quiz a try, not because I wanted another tee, but because I wanted to appreciate what he had been working on. A part of me also wanted to take on this hardest-difficulty question challenge to understand how much I knew about the GNOME Foundation and to see if I could learn things I did not know about the community and its activities in the process.
Collection #1
While I got most questions correct on the first attempt, it took me three attempts to get the satisfaction of having all the answers right. Departing from the booth and weaving through a thin collection of mostly booth attendants, event volunteers, and talk presenters, I made it to Mitchell Yue's talk on the Lynx Framework and how it could be used to move from web development to native applications at around 1000am Indochina Time. I was intrigued to learn about this development library, which made use of native bindings for impressive performance. I was even more taken aback (but positively) when I was gifted a ByteDance tee-shirt for asking how it differed from the usual QtWebEngine bindings, as I was well-versed in Qt. This act of brand advocacy definitely seemed to have encouraged the audience to share more feedback or ask more questions during the talk. Heading back to the Debian Project community booth, I shared my experiences from my splurge at Animate Store from the previous day with Ananthu CV before helping Abhijit PA get some cold coffee from the reception desk.
Collection #2
We were also joined by Shreenivas at the GNOME Foundation community booth, which allowed us to pace ourselves while tending to questions and feedback from booth visitors. During a brief visit from Daniel J Blueman, he connected with Aaditya to understand where he could report bugs and improvements for the GNOME desktop. He seemed to have been plagued by problems with the use of external monitors on his ARM-based (Advanced RISC Machines) SoC-powered (System on Chips) laptop on his fresh Debian Linux installation. Meanwhile, I wasted no time unofficially promoting my presentation on the Fedora Badges Revamp Project to the enthusiastic folks from the day before who visited our booth. Meanwhile, I also gave a quick demonstration on how RPM packaging works to a relentlessly curious Pongsakorn, who wanted to package their Rust application for Fedora Linux. Using one of my own hobby projects, Loadouts for Genshin Impact, I showed him how to write an RPM specfile and how programming-language-specific macros could help make things easier.
Manifest #5
As I was not sure if something like the PyProject RPM Macros for Rust existed, I wanted our conversation to be an entry point for them into the RPM packaging tooling ecosystem. While answering his question about Linux kernel package versioning on Fedora Rawhide, I also fielded a question about conflicting packages from Abhijit. Since he was experienced with how the Debian Linux packaging process would handle this, he was curious to know how RPM package management tools would address the situation. Using various examples of how software packages are related to one another, I explained how this linking not only mapped dependencies but conflicts as well. At around 1230pm Indochina Time, I wrapped up my lunch and connected with Simon Strohmenger, who was visiting the community booth then. He shared how he worked on funding free and open source software events across Europe and supporting critical engineering ecosystem resources and was also curious to know more about what I had to share regarding the AI-assisted Contribution Policy from the Fedora Project.
Manifest #6
After sharing contact details with each other, he also checked in with me on whether I would be willing to present a proposal on best practices in contributor onboarding and retention in free and open source software communities. In our conversations, we realized just how crucial it had become to discuss governance policies among grassroots collaborators to ensure that their implementation does not come off as a negative surprise. I also shared my approaches to using LLM (Large Language Model) tooling to assist project maintainers and budding contributors by automatically addressing various low-hanging fruits. Empathizing with how stressful and overwhelming it ends up being for the maintainers and newcomers, respectively, AI tooling could be utilized for these positive purposes instead of worrying about its popularized contemporary taboo perspective. While I could not get him to participate in my Fedora Badges Revamp Project talk later that day, he said that he would send over some of his engineering friends, as he found the general idea of awarding contributions fascinating.
Manifest #7
Since there were not a lot of folks present at the event apart from those who had some activity to participate in, Aaditya struggled to distribute the remaining GNOME-styled tee-shirts. On the other hand, we had been extremely successful in distributing the stickers, so we were able to advocate for our projects from the community booth. At around 0130pm Indochina Time, I checked in with the event volunteers about the allegedly malfunctioning livestreaming functionality in one of the presentation halls. Amidst my attempts to have that fixed before my talk began in about three hours, I briefly met up with Saksham again in the hallway, who mentioned his plans to expand the hackathon project, and I shared the idea of "releasing (software) fast and releasing often." It was an absorbing sight to see how the participants at the ExpressVPN-sponsored FOSSAsia Hackathon were helping each other with a gargantuan variety of problem statements. Such a sight (and evidence of friendly competition) is something one would rarely get to see outside of free and open-source software communities.
Manifest #8
I was thankfully able to get myself caffeinated at around 0330pm Indochina Time, as with the activities moving slowly throughout that day, I found myself zoning out every now and then. After addressing the livestreaming issue, I met up with some folks from AWS (Amazon Web Services) who were visiting our trinity community booth lineup and had previously worked with the likes of David Duncan and Rich Bowen. Since they had experience with Amazon Linux, they reflected on how Fedora Linux provided them with an innovation-driven upstream distribution to build upon. After that one last conversation with them, Aaditya decided to start packing up the booth at around 0430pm Indochina Time, while I kept myself busy helping him with the tooling. With the time being barely fifteen minutes away from my presentation's commencement, Samyak returned to the GNOME Foundation community booth. Pongsakorn also found his companion back when Tomas returned to the KDE e.V. community booth, and I was ready for my talk to be delivered at the tail end of the event.
Collection #3
As the presentation designated training room had my slide deck already fetched, I did not have to bother with sharing my screen. There were some technical issues with the clicker device, though, as it failed to switch slides when the window was in fullscreen mode. I decided not to settle for windowed mode to save time, while Norbert Preining introduced me in the speaker area at around 0445pm Indochina Time. For a presentation scheduled at the tail end of the conference, it was reassuring to see that I still had around twenty attendees in the hall who were curious to know what I had to offer. Being the deciding moment that I had been practicing regularly for, I wanted to ensure that I was doing justice to their time (and attention) and that of the remote attendees. Thanks to Samyak's laptop and the regular touchups, the fifteen-minute-long presentation went largely well, and I also addressed some feedback and questions from both the in-person attendees and the hall host, Norbert. After finishing my talk, we headed into the competition hall where the four judges had assembled, by then.
Collection #4
After briefly waiting for the participating teams to propose their project ideas and for the judges to complete their evaluations, our collective made our way into the main hall to witness the winner announcements at around 0530pm Indochina Time. As the winners were announced and the event concluded, Samyak and I deliberated on our evening plans, as we did not intend to continue our stay with the FOSSAsia 2026 attendees at the Night Market. After his proposal was turned down by the folks he was planning to invite, we formed a group of four, including him, Soundarya Rangarajan, Aaditya, and myself, to visit Bangkok Chinatown. This idea felt strategically sound as it had become a little too late to visit the riverside for a calming evening boat ride dinner, and coincidentally, Soundarya was staying at the same hotel as Samyak and me. After making a quick drop of Aaditya's belongings at my hotel room and taking some time to re-energize ourselves after the long second day, we started looking for cabs using the Grab application at around 0700pm Indochina Time.
Collection #5
Amidst the heavy Bangkok evening traffic, we struggled to get a ride until we finally secured one after about thirty minutes of waiting through the Bolt service. I was not a fan of the service, as the toll expenses were not accounted for in the final billing, thus resulting in us having to pay for them separately. What I was a fan of, though, was the taxi driver assigned to us, as the cheerful person did not let the piling traffic and the linguistic barrier prevent him from having a friendly chat with us. With the use of Google Translate, he graciously helped us plan our course and described what we could expect in Bangkok Chinatown. And he could not have been more right - because when we got off the Bolt ride, the view of the neon filled, slightly humid hustle and bustle of Chinatown was a scene that nothing could even compare to. Weaving through the visitor crowd at the periphery, our first stop was, of course, a stall selling the world-famous Mango Sticky Rice for just 100 Thai Baht per plate. We finally got to experience firsthand just how true the people were who sang nothing but praises of this exclusive snack!
Collection #6
We dove deeper into our little roadside dining adventure with some Coconut Egg Sweet Crepes and Japanese Fried Octopus Balls before deciding to split into groups of two, as the mostly non-vegetarian cuisines there did not sit right with Aaditya and me. As Samyak and Soundarya headed their way at around 0830pm Indochina Time, he and I decided to get even more creative by digging into some Shrimp Meat Dimsum Dumplings and Fried Crab Meat Rolls. After finishing off with some Fried Sweet Maple Fish and Ripe Alphonso Mango Slices, all for amazing bargains, we navigated the confusing pathways to reach Jam Jam Eatery Chinatown. As Aaditya's belongings were in my room, we left for the hotel at around 0915pm Indochina Time without waiting much longer. It was not that I had my fill of exploration, but I felt responsible for ensuring he made it back safely at BTS Punnawithi to his hotel room. After hanging out in my hotel room and discussing industrial mentorship ideas, I decided to see him off and call it a day at around 1200am Indochina Time.
In this article, I detail how I set up mTLS authentication with Cloudflare to secure access to my Prometheus metrics: a concrete case involving an Apache reverse proxy and integration with Grafana.
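For context, the client-certificate check such a setup relies on typically lives in the Apache reverse proxy in front of Prometheus. The sketch below is a minimal, hypothetical example of that pattern; the hostname, certificate paths, and backend port are illustrative placeholders, not taken from the article:

```apache
# Hypothetical sketch: Apache reverse proxy requiring a client
# certificate (mTLS) before forwarding to a local Prometheus.
# All paths, the hostname, and the port are placeholders.
<VirtualHost *:443>
    ServerName metrics.example.org

    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/server.crt
    SSLCertificateKeyFile /etc/pki/tls/private/server.key

    # Only clients presenting a certificate signed by this CA are
    # allowed in (Cloudflare publishes an origin-pull CA usable here).
    SSLCACertificateFile  /etc/pki/tls/certs/client-ca.crt
    SSLVerifyClient require
    SSLVerifyDepth  2

    # Forward verified requests to the Prometheus instance.
    ProxyPass        /metrics http://127.0.0.1:9090/metrics
    ProxyPassReverse /metrics http://127.0.0.1:9090/metrics
</VirtualHost>
```

With a directive set like this, requests without a valid client certificate are rejected at the TLS handshake, before ever reaching Prometheus.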
The question was simple enough: How good of an image editor can you build with $20 worth of
Claude Code Pro subscription?
The answer, after one month and roughly that budget, is: surprisingly good, occasionally wrong
about performance, and frustratingly confident about things it hadn’t measured.
RasterLab is a non-destructive RAW image editor written in Rust, built almost entirely by Claude
Code. Not prototyped by it, not scaffolded by it — actually built by it, with me driving
direction and reviewing the output. One month, four weekly usage blocks, one image editor.
On April Fools' Day, I shared that syslog-ng can reach 7 million EPS. This test lab result was made possible in part by a few performance enhancements coming in syslog-ng version 4.12.
How is 7 million EPS possible? Before diving deeper, let me repeat it: 7 million EPS is just a lab testing result, not (yet) achievable in the real world. However, the technologies enabling it are already available on the development branch of syslog-ng, or have been available for ages, just not tested or promoted enough.
Scheduled maintenance work on Forge Runners. Runners will be offline. Scheduled jobs or event-triggered actions should be picked up when the service is back up; however, it cannot be completely ruled out that some actions will need to be retriggered.
forge.fedoraproject.org will not be affected.
Fedora Linux 44 is almost officially here! While our release engineering team and packagers focus on the final touches for F44, it is nearly time for the usual tradition of a Global Virtual Release Party! It is almost time to celebrate! For this release, we will celebrate Fedora Linux 44 slightly ahead of its actual final release.
Regardless of the final calendar date of any Fedora Linux release, every release represents months of hard work, testing, and collaboration from our global community. Whether you are a long-time package maintainer, a dedicated documentation writer, a creative graphic artist, or a brand-new user firing up a Fedora Atomic Desktop for the very first time, this release belongs to you.
To mark the occasion, we are hosting the Fedora Linux 44 Virtual Release Party this Friday, April 24, 2026.
Join us for a half-day of live sessions, recorded deep-dives, and community socialization. We have packed the schedule with updates from the Fedora Project Leader, behind-the-scenes looks at new features like Nix integration and DNF5, and a sneak peek at our upcoming Flock conference!
How to Attend
The event is 100% free and open to everyone, but registration is required to access the virtual venue. We are also happy to continue using our chat communication provider, Element Creations, as the virtual venue for the Global Virtual Release Parties. Thanks to Element & Matrix.org for providing the great tools that bring our global community together!
All times are listed in US Eastern (UTC-4) and UTC.
| Time (EDT) | Time (UTC) | Session | Speaker(s) | Description |
|---|---|---|---|---|
| 09:00 AM | 13:00 | Opening Remarks | Jef Spaleta, Justin Wheeler | Join the Fedora Project Leader and Community Architect as we kick off the celebration, look back on the last release cycle, and share news from around the project. |
| 09:15 AM | 13:15 | FPL Update | Jef Spaleta | Jef Spaleta shares his reflections on Fedora Linux 44, what this release means for the project, and his vision for what lies ahead. |
| 09:30 AM | 13:30 | Packit as Fedora dist-git CI | František Lachman, Laura Barcziova, Maja Massarini, Matej Focko, Nikola Forro | The Packit team walks through how Packit is taking over Fedora dist-git CI, what this change means for contributors, and what’s next. |
| 09:45 AM | 13:45 | Adding Nix to Fedora: we did a thing | Jens Petersen | A behind-the-scenes look at bringing the Nix package tool to Fedora 44 — what it took, what it unlocks, and what it means for reproducible environments. |
| 10:00 AM | 14:00 | PackageKit with DNF5 and KDE Integration | Neal Gompa | Dive into the integration of PackageKit with DNF5 and KDE in F44, what changed under the hood, and what it means for the desktop experience. |
| 10:15 AM | 14:15 | Server WG | Peter Boy | An overview of the Server Working Group’s initiative to create a dedicated home server spin, driven by community home lab feedback. |
| 10:30 AM | 14:30 | Break | None | Take a screen break, grab some coffee, or merge that Pull Request. We will be back with more programming soon! |
| 11:00 AM | 15:00 | Fedora Docs | Petr Bokoc, Peter Boy | An update on the state of Fedora Docs and the ongoing Docs Initiative — where things stand today, and how you can get involved. |
| 11:15 AM | 15:15 | What’s new and what’s next for the Fedora Atomic Desktops | Timothée Ravier | Discover what is new across the Fedora Atomic Desktops family (Silverblue, Kinoite, Sway, Budgie, COSMIC) and the roadmap toward Bootable Containers. |
| 11:30 AM | 15:30 | Flock Preview | Justin Wheeler | With Flock just weeks away, get an early look at what to expect — sessions, highlights, and reasons to get excited about this June’s event. |
| 11:45 AM | 15:45 | TBA | TBA | Stay tuned! |
See you there!
Don’t miss out on the chance to connect with the people who build Fedora. Grab your ticket, share the link with your friends, and get ready to celebrate Fedora Linux 44.
After kicking off the FOSSAsia 2026 proceedings with the "choose-your-adventure-flavoured" Community Day, the first day on 09th March 2026 began for me at around 0730am Indochina Time. It took me quite a lot of strength to get myself moving out of bed, but I knew that I had to make it to the conference venue by 1000am Indochina Time to appear in the group photo. I rang up Samyak Jain and we planned to rendezvous downstairs at the hotel reception in about ninety minutes from then. While I had my talk on the Fedora Badges Revamp Project scheduled for the next day, I wanted to ensure that I was able to find some time to rehearse it at least twice per day leading up to the scheduled time. Fifteen minutes were not enough for the number of topics I wanted to cover as a part of my presentation, and as such I wanted to use it as an appetizer or an entry point for those interested to explore the project by themselves. I headed downstairs to get some breakfast bites after having connected with my friends and family from back home. The choices were more or less the same as yesterday, but this time around, I elected to stick with the greens as much as possible, just as Samyak did.
Manifest #01
This change in my dietary preference gave me a first-person perspective on just how difficult things end up being for someone who is vegetarian. While Samyak had his collection of "just-pour-warm-water" packaged food, I had to stick mostly with noodles and salad. When it came to the origin of the meat products, you would not want your guesses to go wrong, and the language barrier could pose a real problem for the specifics. After an okayish refueling, we stepped out of the hotel premises into a humid Thailand at around 0945am Indochina Time and were able to reach True Digital Park West in about ten minutes. Thankfully, the venue was a whole lot more crowded than the previous day, with a variety of community booths either being set up or attended by visitors. While this day (and the next) were exclusive to ticketed personnel, the crowd was definitely a whole lot larger than on the free-of-charge Community Day before. I split from Samyak as he left with one of the FOSSAsia 2026 community volunteers to obtain his attendee badge. As I already had the speaker badge, I could dive headfirst into all the booths and friends that we had on the ground there.
Manifest #02
One of the first folks I met was, of course, Aaditya Singh from the GNOME Foundation, who was setting up the community booth by himself. With one more pair of hands, we made quick work of the booth setup, placing the Fedora Project stickers beside the GNOME Foundation stickers on display there for picking. Aaditya mentioned that he was still rocking the Fedora Workstation installation that he had set up during the GNOME Asia 2024 event, where I gave away flash drives with distributions featuring the premier GNOME desktop environment. As he set up his exhibition laptop, he wanted to promote the vanilla GNOME desktop experience, which, in his opinion, could not be done better than on Fedora Workstation. Funnily enough, we had the KDE e.V. community booth a corridor apart from the GNOME Foundation community booth. Trying not to read too much into the poetically inclined booth arrangements, I decided to visit them next once I was done setting up operations with Aaditya. Amidst all the upcoming developments there, Tomas was pleased to share his work on various advancements to the Konsole KDE terminal emulator.
Manifest #03
From live previews of image thumbnails and semantic colors to proactive sharing of directory layouts and drag-and-drop operations, Tomas' improvements felt really impactful in the quality-of-life areas for any general terminal emulator user. This put the Konsole KDE terminal emulator miles ahead of the container-first terminal emulator, Ptyxis, in my personal opinion. I did have questions about the security of this approach, as some files could be malicious in nature (especially SVG files when it came to image assets), but since Konsole relied on pre-generated thumbnails, this was not a problem. No files were executed in the attempt to make the proactive previews available to the users, and that gave me peace of mind to top off my already excited overall feeling about the feature improvements. After meeting up with Pongsakorn S., another KDE e.V. community member, and picking up some stylish badges, I left a portion of the Fedora Project stickers to be shared from their community booth as well. It was important to commemorate the fact that since the release of Fedora Linux 42, the KDE Plasma variant of Fedora Linux had stepped up to become an actual edition instead of just being a spin.
Manifest #04
With Samyak returning to the GNOME Foundation community booth, Aaditya was no longer staffing it by himself, so I went ahead to have a conversation with the folks at the Debian Project and TeaLinuxOS Project community booths. It was interesting to note that they had their ground operations set up as early as 0830am Indochina Time. Chatting with the likes of Ananthu CV and Harry LBI from the two community booths respectively gave me distinctive perspectives on their projects and their involvement. While the stalwart Debian Project community has been around for a while, my interest was piqued to see just how TeaLinuxOS made the Arch Linux distribution usable by everyday users through the Calamares Installer and curated OOBE tooling. On being asked for advice from our steadfastly evolving Fedora Linux operating system's perspective, I emphasized just how important it is for documentation to be clear and accessible. A lot of contributors to a free and open source software project come from its pool of users, so it cannot be overstated just how important it is to ensure that users are treated as first-class citizens and that their issue reports are taken seriously.
Manifest #05
I headed further into the wider collection of community booths after taking some pictures with these folks at their booths. I was halted at the entrance by Wendy Ha, who was just arriving at the conference venue and wanted me to meet a CNCF ambassador from Japan. Our (rather short-lived) conversation was cut short by Rajan Shah, who was seeking out folks from Red Hat and IBM Corp for a group photo. I pulled in Samyak, and with the likes of Shivraj Patil, Veerkumar Patil, Deepesh Nair, and Gaurav Kamathe from Red Hat and a bunch of others from IBM Corp, we had photographs both in front of the FOSSAsia conference venue entrance and in front of the long escalator. This is where I met Soumyadip Choudhury, another Red Hat colleague who hails from my hometown, and we spent some time chatting before heading into the large hall for the FOSSAsia community group photo. I was glad to note that, to avoid a large commotion, the participants could stay right where they were in their seats while the photo was taken from the event stage, with only those at the extreme periphery requested to move to the middle of the hall.
Manifest #06
As most of our Red Hat collective was already together at the center, all we had to do was enjoy the event hosts' excellent oration while we put out our best poses for the cameraman. We also did not miss the opportunity to take some selfies while we were at it, because the audience was only going to get thinner from here on out for the day. Once we were through with those pictures, Samyak and I ran into the likes of Pritesh Kiri and Dakshita Thakkar, who happened to be attending the event for the first time, just like us. Heading out to visit the community booths together, we began by interacting with folks working on ESP32x-powered 2D robotic drawing computers and 3D-printer-powered accessibility-focused appliance designs. Not only did we get to see the demonstrations, but we also got to experience how a differently abled person could use a Sony DualShock 4 controller with just one hand and how a pencil-triggered nail cutter could prevent folks from getting hurt. This was followed by a visit to the VideoLAN Project community booth, staffed by what looked to me like mostly uninterested members tending to booth visitors as and when they saw fit.
Manifest #07
After helping ourselves to the postcard prints featuring the VideoLAN Project's parodies of famous movies, we moved over to the Matrix Project and database-related community booths. With their booth placements also made beside one another, it was interesting to learn what they had to offer that their alternatives did not. Skipping past the dishearteningly unoccupied FLOSS Fund community booth, I met Samyak again at the Google Summer of Code community booth. It was great to catch up with Stephanie Taylor after having met her during FOSDEM 2025, and she graciously provided us with some swag for having participated in the program as a mentee (Samyak) and as a mentor (myself). We returned to the GNOME Foundation community booth, where we were surrounded by an enthusiastic group of folks who were contributors to both the GNOME Foundation (through donations) and the Fedora Project (through contributions). Not only were they thrilled to find us there, but they also went a step further by proudly showing off their Fedora Linux installations (most prominently Fedora Silverblue and Fedora Kinoite) on their laptops.
Manifest #08
Stellar moments like these always end up re-energizing my resolve to support free and open source software in whatever ways I can. Getting to interact with folks who were just as resolute about the Fedora Project as I was made this trip worthwhile already, even though we had a couple of days still ahead of us. They mentioned how they regularly organized local Fedora Linux installfest events to help with the adoption of the Fedora Project's primary offerings and also participated in the regularly organized testing events in our engineering community. While I could not provide them with the means to make donations as they wanted to contribute further, I requested them to propose their events through the officially ratified Fedora Mindshare event process. Being the model open source citizens that they were, I wanted to ensure that they did not have to spend out of their own pockets to organize Fedora Project events and that they were well equipped to host one in the already underrepresented APAC region. Increasing the Fedora Project's APAC representation was already at the top of my list, and this interaction only ended up cementing my commitment to this mission.
Manifest #09
They magically had the intrinsic understanding of just how badly we need contributors to be onboarded and retained within the community to ensure the longevity of a project. A lot of times, it eventually ends up falling on the shoulders of a flywheel person when someone from the community has to depart for some reason. We did not even have to share what was upcoming in Fedora Linux, as they even had an alternate Fedora Rawhide installation handy for development purposes. As the event owner for the Fedora Project's community presence at FOSSAsia 2026, a part of me still felt bad to have missed out on having a dedicated separate community booth at the conference. Sure—we felt right at home with the folks from the GNOME Foundation and KDE e.V., but we could very well have taken advantage of being the prominent RPM-based distribution there on site. A great deal of the community conversations would have then made their way directly to our community booth instead of us having to seek them out as event prospectors in all of our interactions. Instead of dwelling on this situation, I wanted to interact with the booth visitors from a Fedora Project governance member's perspective.
Manifest #10
As I was a part of both the Fedora Mindshare and the Fedora Council governance bodies at the time, I used my event presence to hear more about potential opportunities as well as possible approaches to community outreach from grassroots contributors. Depending on the matter at hand, I could either choose to address it myself or pass it over to the responsible teams as a conduit. Finishing off this heartfelt interaction, I headed over to the OpenKylin Project community booth to connect with the folks working on this downstream distribution of Debian Linux. With its striking resemblance to the Windows 11 user interface, it offered folks a gentle slope of technical learning as they made their departure from Microsoft's popular operating system. On my technical suggestion, they made it a point to reach out to the downstream packagers because, from the Fedora Project packaging perspective, their packages were either wildly outdated or simply broken, thus hurting their adoption in the RPM-based universe. I also proposed using Fedora Linux for the development of the project's codebase, as we provide a fast-moving development toolchain with updates that they could utilize.
Manifest #11
While there were presentations and workshops scheduled throughout the first day, I still found myself spending most of my time in the hallway track. It was so easy to find someone I knew (or wanted to connect with) or something I knew (or wanted to know about) in all of the conversations. I also appreciated just how my fellow colleagues from Red Hat participated as volunteers for the event, all while sharing goodwill for potential long-term collaboration with the FOSSAsia organizers. This was around the time that the queue for availing lunch started forming at around 1200pm Indochina Time, so Samyak and I ended up joining in. Unfortunately, we had to push our lunch plans for later to make it to Praveen Kumar's interactive talk on Creating Custom Linux Images Using Bootc Technology and Podman Desktop. Delaying our lunch meals by about fifteen minutes or so allowed us to skip waiting in a long queue, but we did end up getting disappointed to notice that the menu had not changed for that day too. Do not get me wrong - the meal was okay for an obligatory refueling, but it left a lot to be desired when it came to ensuring that whatever ended up on our palate tasted good.
Manifest #12
Finishing our meals gave us some more energy to visit the community booths that had popped up later or had absent attendants earlier. We started off with the MapConductor Project community booth, staffed by Masashi Katsumata, who briefed us on how they worked towards unifying mapping APIs (Application Programming Interfaces) for mobile application map SDKs (Software Development Kits). It was rather intriguing to learn how these APIs could be integrated with MCPs (Model Context Protocols) to allow mobile applications to have a semantic understanding of the navigation utility. Using the puzzle question about the number of post offices in Japan was a fascinating way to illustrate the important purpose of this community project. We sifted through the ARM Project and RISC-V Project community booths with discussions around the necessity of an alternative architecture for desktop computing amidst the exorbitant prices of the year 2026. The demos were made using a custom AlmaLinux distribution image flashed on a Raspberry Pi 500 Plus, which had the SBC (Single Board Computer) inside a mechanical keyboard.
Manifest #13
This reminded me of the SyncStar Project that I worked on a couple of years back, when the Raspberry Pi computers were still generally accessible and not prohibitively expensive, as they had become by then. Returning to Aaditya allowed us to meet Rich Bowen, who was just heading out from the workshop at around 0100pm Indochina Time. It was delightful to catch up with him after having met him the last time during CentOS Connect 2025, and he was just as surprised to meet me here, since I was attending FOSSAsia for the first time. Wrapping up our conversations here, Samyak and I headed over to attend Shivani Bhardwaj's demonstration of Hands-on Network Security With Suricata. As someone who primarily works on infrastructure architecture, it was compelling to see just how the project worked both as an intrusion detection system as well as an intrusion prevention system. While her presentation was plagued with a bunch of technical issues related to the presenter's laptop inadvertently suspending, Shivani kept the audience engaged with her admirable showpersonship and funny jokes, giving me something to learn about too when it came to speaking.
Manifest #14
We stayed back in the same hall to attend Peter Membrey's talk on Open Sourcing Secure GPU Workloads in Enclaves as it shared a similar cybersecurity-inclined theme. As he was the Chief Research Officer of ExpressVPN, an organization that was also one of the primary sponsors of the conference, I wanted to understand just how bought in they were with the free and open source software mission. Having gotten a satisfactory answer, I left to get a FOSSAsia tee-shirt and necessary caffeine, since I did not have to join the queue for the identification badge that morning. I was pleasantly corrected when they declined my payment for the FOSSAsia tee-shirt and informed me that this was available at zero extra cost for the event participants. The lack of a proper lunch did manage to keep me awake, but a proper shot of cold coffee was able to finally get me back into the game once I was done exchanging the participant coupons. I hung out with my fellow Red Hat colleagues at the OpenEuler Project community booth before going back to Aaditya's place. Since he had kept a medium-sized GNOME tee-shirt aside for me, it only made sense for me to pledge a certain amount to the community.
Manifest #15
And might I add just how attractive a bargain 100 Thai Baht was in exchange for a white GNOME round-necked tee-shirt, all while supporting the great work that they do! It cannot be overstated just how often free and open source software communities end up giving you friendships that you cherish for a lifetime. At around 0215pm Indochina Time, we attended Dakshita's presentation on Observability for Backend Developers before reaching out to the FOSSAsia volunteers with my concerns about the absence of speaker desks in various smaller halls. While my concerns were noted by the staffing volunteers there, it was only when Rajan entered the scene that the issue was resolved, not only for the training room where I had my presentation planned for the day after, but also for the other associated halls. He mentioned that my preemptive concern about my upcoming presentation allowed others to benefit too, since it had become unwieldy to use a lower-height general table. After attending Joe Blubaugh's talk on SQL Expressions in Grafana Dashboards at around 0240pm Indochina Time, I returned to the hallway track for some more conversations.
Manifest #16
One discussion on internet technologies with the likes of Ananthu and Deepesh at the Debian Project community booth later, we were joined by an extremely enthusiastic Daniel J Blueman at the GNOME Foundation community booth. Since he had presented his talk about Linux on ARM (Advanced RISC Machines) Laptops earlier that day, he wanted a comparative study of how different the two architectures were on a quantitative level. While our collective had a qualitative understanding of just how efficient ARM-based SoCs (Systems on Chips) were compared to the widely available x86-based CPUs (Central Processing Units), stress-ng testing (as requested by him) could help paint an accurate picture. While the tests were running on both Aaditya's power-deprived laptop and Daniel's cool-running laptop, he was appreciative of the attempts made to make Fedora Linux work on ARM-powered Apple MacBooks and general-purpose ARM-powered laptops. Although the initial tests gave us an idea about the performance per watt across both architectures, we had to throw out the results, as the x86-based laptop was not getting power from the wall, unlike the ARM-based one.
Manifest #17
Of course, we had to have a rematch - not specifically for getting different-looking results but for getting genuinely computed ones, and this time around it was the turn of Samyak's Lenovo ThinkPad P16v Gen 1. After making sure that both devices were getting enough juice from the power outlet and shutting down unnecessary applications, we obtained an anecdotal result that put the AMD (Advanced Micro Devices) Ryzen 7 Pro 7840HS CPU miles ahead of the Qualcomm Snapdragon X Elite SoC. This kind of unstructured research drew quite a crowd in the second half of the day, and we deduced that while Samyak's laptop was drawing at most about 100 watts from the wall, Daniel's laptop was barely pulling around 25 watts. With improved compatibility across various applications and additional efforts in hardware enablement, ARM-based SoCs could very well be the future of sustainable domestic general computing. After sharing contact details, Ananthu and I had a quick discussion at around 0400pm Indochina Time on exploring places that exhibited (or franchised) anime-related swag, especially from the popular Genshin Impact brand.
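The back-of-the-envelope performance-per-watt reasoning from that impromptu benchmark session can be sketched as below. Only the wall-power figures (roughly 100 W for the x86 laptop and 25 W for the ARM one) come from the day's observations; the throughput numbers are made-up placeholders, since we did not record the actual stress-ng bogo-ops results:

```python
# Hypothetical perf-per-watt comparison in the spirit of the stress-ng
# session described above. The bogo-ops figures are placeholders, NOT
# real measurements; only the ~100 W / ~25 W power draws are from the post.

def perf_per_watt(bogo_ops_per_sec: float, watts: float) -> float:
    """Throughput normalized by wall-power draw (ops/s per watt)."""
    return bogo_ops_per_sec / watts

# Placeholder throughput numbers for illustration only.
x86_ops, x86_watts = 40000.0, 100.0   # AMD Ryzen 7 Pro 7840HS laptop
arm_ops, arm_watts = 15000.0, 25.0    # Qualcomm Snapdragon X Elite laptop

x86_eff = perf_per_watt(x86_ops, x86_watts)   # 400.0 ops/s per watt
arm_eff = perf_per_watt(arm_ops, arm_watts)   # 600.0 ops/s per watt

# Raw throughput can favor one chip while efficiency favors the other.
print(f"x86: {x86_eff:.0f} ops/s/W, ARM: {arm_eff:.0f} ops/s/W")
```

The point of normalizing this way is that a chip can lose on raw throughput yet still win on efficiency, which is exactly the trade-off the crowd around the two laptops was debating.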
Manifest #18
With Samyak on his way back to the hotel room, Aaditya, Ananthu, Shreenivas (who began hanging out with us at around that time), and I decided to stay back to attend the FOSSAsia Evening Social Event. I seemed to have chosen wisely, as I ended up running into Harish Pillay shortly after in the hallway track, whom I had last met during CHAOSScon EU 2024. In our friendly exchange, he shared how he used Anthropic Claude to generate slide decks based on the speaker notes that he had manually prepared, a process that felt flipped to me, but I was captivated by the results he had on display. I noted it as an effective approach for times when a short proposal-submission timeline leaves no room to compromise on overall quality. After a couple of photographs with him, I rejoined our little gang for a quick round of instant photoshoots with a hilarious set of props. Aaditya and I split from Shreenivas and Ananthu for the day to have our snack bites at around 0700pm Indochina Time. We enjoyed the Thai cultural performance presented by the event volunteers before I departed swiftly for the newly discovered Animate Store at the MBK Center.
Manifest #19
Coordinating with Samyak again, I dropped off my stuff in my hotel room before leaving for the BTS SkyTrain at around 0715pm Indochina Time. Since MBK Center was around a ten-minute walk from BTS Siam, and so was the Jain temple that he wanted to visit, we decided to travel together. In one of those rare moments where Samyak was ready to leave before I was, we headed to an ATM to withdraw some money. Just like on my arrival day, we had to cough up around 250 Thai Baht on top of the actual amount, but that had nearly become something we were used to. Cash was indeed king in Thailand, and we had to ensure that we had enough to not require another round of ATM withdrawals. After a forty-five-minute journey from BTS Udomsuk, we split at BTS Siam after deciding to meet up there again at around 0930pm Indochina Time. Getting local Thailand SIM cards proved extremely beneficial, as they allowed us to stay connected with our friends and families while navigating the hustle and bustle of Bangkok. We did not worry about the possibility of getting lost in transit, as we could always find our way back to each other.
Manifest #20
I made it to Animate Store on the seventh floor of the sprawling MBK Center mall by around 0830pm Indochina Time, giving me roughly half an hour before the shop shuttered. Connecting remotely with Shounak Dey from back home, I made the best use of the time to collect some Genshin Impact collectibles for us. After making quite a lot of purchases from the official miHoYo swag catalog, I headed back to BTS Siam right as Animate Store closed for the day at around 0900pm Indochina Time. An uneventful BTS SkyTrain journey later, we found ourselves at BTS Udomsuk at around 1015pm Indochina Time with few choices for Indian cuisine, as most adjacent shops were closed and the farther restaurants would be closed by the time we got there. After a quick trip to an adjacent 7-Eleven outlet and helping Samyak with the packaged mineral water bottles from the day before, we ordered some Indian cuisine takeaways through the Grab application. We called it a day at around 1130pm Indochina Time after sharing a light homestyle Indian dinner together in my hotel room and going through a round of presentation prep.
comments? additions? reactions?
As always, comment on the fediverse: https://fosstodon.org/@nirik/116545941754988966