Restart of Copr servers. There is a possibility of a short outage.
Affected Services:
copr-frontend - https://copr.fedorainfracloud.org copr-backend - https://copr-be.cloud.fedoraproject.org/
The intersection of open source development and artificial intelligence (AI) is currently a minefield of valid excitement and legitimate anxiety. Recently, the Fedora Project introduced the Fedora AI-Assisted Contributions Policy. The full policy text outlines the basic ground rules. I was informally involved in the drafting process. Since its rollout, I am actively adjusting my own workflow to comply with its requirements.
For those outside the Fedora ecosystem, the policy establishes basic ground rules for how AI can be used in the project. It operates on three main pillars:
Accountability: You can use AI, but you own the output. The human contributor is always the author and fully accountable for the submission’s quality, license compliance, and utility.
Transparency:
If a significant part of a contribution is taken unchanged from an AI tool, you must disclose it (typically via an Assisted-by: commit trailer).
Evaluation Limits: AI cannot act as the final judge on substantive contributions or evaluate a person’s standing within the community.
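In practice the transparency rule is lightweight. A disclosure might look like this (the `Assisted-by:` trailer name comes from the policy itself; the commit subject, tool name, and author are illustrative):

```text
Fix off-by-one error in archive pagination

The boundary check skipped the final page when the item count
was an exact multiple of the page size.

Assisted-by: <AI tool name>
Signed-off-by: Jane Contributor <jane@example.org>
```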
As someone who relies on these tools, I want to share my perspective on why this policy matters, the harsh realities of enforcing it, and why we must tactically embrace AI to protect the future of software freedom.
On paper, the Accountability clause looks like a strong deterrent against low-quality, automated spam. In reality, it functions primarily as a legal liability shield. If a contributor breaks a system by submitting AI-generated falsehoods, the policy simply lays the fault at their feet.
It does not, however, stop the spam.
We saw this clearly during a recent round of the Outreachy internship program. We had a record number of contributions recorded, but very few were actually merged. The volume of noise and sloppy contributions ballooned compared to past rounds. Many newcomers ignored the transparency mandate and flooded our git repositories with low-quality contributions. What actually deterred this "AI slop" wasn’t the text of the policy; it was the tangible enforcement mechanism of internship eligibility.
The primary consequence of submitting AI spam isn’t legal prosecution. It is reputational damage.
This brings us to the loudest objection from the open source community: the ethics of AI generation. Many maintainers deeply feel that LLMs are fundamentally engaged in theft. Maintainers recognize that the companies behind the LLMs extracted immense value from 40+ years of open source material without contributing value back, bypassing licenses and author attribution entirely.
I sympathize with the maintainers who hold this objection. It is frustrating to watch value extracted without reciprocity. However, I do not share the same sense of existential panic as other maintainers.
My view is that the old rules simply no longer apply. Open source projects must evolve to survive. Instead of playing defense and fighting a philosophical war we cannot win, we need to think about how to tactically use AI to advance the interests of software freedom in a time when it has never been more threatened.
When used responsibly, AI has the potential to be a massive equalizer. It can lower the barrier to entry for non-native English speakers drafting documentation and help junior developers navigate legacy codebases with decades or more of context. Democratizing access is how we attract the next generation of open source contributors and strengthen the social fabric of our community. This includes Fedora!
But there is a dark side to this democratization. Dumping a massive volume of low-effort, AI-facilitated contributions onto the laps of Fedora maintainers is a recipe for disaster. Many Fedora contributors are already stretched thin, doing more than is required of them. Forcing them to review a tidal wave of "AI slop" breeds deep resentment; to them, it feels like the active "enshittification" of our own community.
This is why we cannot focus solely on the people side of AI inclusion. We must carefully balance it by providing maintainers with the resources, support, and AI-facilitated tooling they need to survive. Practical improvements need deeper research and evaluation, such as using AI for advanced automated testing, scaling review workflows, and complex code refactoring. We should use AI to improve the quality and efficiency of our community’s work, ensuring that the same technology that lowers the barrier to entry also helps maintainers manage the gates.
If we handle this correctly, AI won’t replace the community. It will empower a more inclusive, diverse, and heavily-fortified one. The Fedora AI-Assisted Contributions Policy is our first step. Review the official Fedora Project policy to see the framework in action. Let’s keep the process open, keep the declarations honest, and tactically build the future of open source together.
During my work on the RISC-V 64-bit architecture port of Fedora, I created several pull requests to Fedora packages. And some were stalled…
The Fedora project has a process called ‘non-responsive maintainer’. You check whether the maintainer is on vacation, check their latest activity, and open a bug asking for action.
The problem was that it linked to the fedora_active_user.py script,
which has not worked since Fedora 41. During that release cycle the
python-fedora package was retired and no one updated the script.
As my actions brought some complaints (and some discussions), I decided to take a look at the script and make it work with current Fedora releases. Created a pull request, mailed the original author, etc.
There was no answer of any kind, so I decided to take over maintaining the
script. Rewrote it to be Python 3 only, moved from urllib to requests,
refactored some repeated code into functions, etc.
Then I started checking, service by service, how to get things working better. It turned out that the script made several assumptions which do not always apply.
Fedora Accounts Service (FAS) has a separate field for the Bugzilla email. I did
not have to look for testing accounts for this because that’s my case: I use my
‘short’ Red Hat email in Bugzilla due to the Single Sign-On (SSO) service we use, and
my ‘long’ one for the rest. So the fedora-active-user script grabs user
information from FAS, checks for a separate Bugzilla email, and uses it if present.
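The selection logic itself is a one-liner; a minimal sketch (the key names here are my illustration, not the real FAS schema):

```python
def bugzilla_email(fas_user: dict) -> str:
    """Pick the address to query Bugzilla with: the dedicated
    Bugzilla field when set, otherwise the primary FAS email.
    Key names ("rhbzemail", "email") are illustrative."""
    return fas_user.get("rhbzemail") or fas_user["email"]

# A user with a separate Bugzilla address vs. one without:
print(bugzilla_email({"email": "long@example.com", "rhbzemail": "short@example.com"}))
print(bugzilla_email({"email": "long@example.com", "rhbzemail": None}))
```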
To query FAS you need a Kerberos ticket. Both the urllib and requests packages
have a way to use it for authentication; one extra package is needed to make
it work.
Lack of a valid ticket is caught and the information is provided to the user.
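The post does not name the extra package, but for requests the usual add-on is requests-gssapi; a hedged sketch of the guard, assuming that package:

```python
def get_kerberos_auth():
    """Return an SPNEGO auth handler for requests, or None when the
    add-on package is missing. requests alone cannot do Kerberos;
    requests-gssapi (my guess at the "extra package") provides it."""
    try:
        from requests_gssapi import HTTPSPNEGOAuth
        return HTTPSPNEGOAuth()
    except ImportError:
        return None

auth = get_kerberos_auth()
if auth is None:
    print("Kerberos support unavailable: install requests-gssapi and run kinit")
# otherwise requests.get(fas_url, auth=auth) would send the ticket
```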
Querying the Bugzilla service is the trickiest part. You can request data but there is no guarantee that you get the latest entries. Sure, there is the ‘order’ field for a query, but it feels like a mere suggestion. It is nothing strange to get 2008 entries next to 2023 ones.
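Given that, one defensive approach is to ignore the server-side ordering entirely and sort client-side; a sketch with illustrative field names (not the real Bugzilla REST schema):

```python
from datetime import datetime

def latest_activity(entries: list[dict]) -> dict:
    """Return the newest entry regardless of how Bugzilla ordered them.
    "when" as an ISO date string is an illustrative field name."""
    return max(entries, key=lambda e: datetime.fromisoformat(e["when"]))

history = [
    {"when": "2008-03-01", "what": "commented"},
    {"when": "2023-11-05", "what": "changed status"},
    {"when": "2016-07-19", "what": "commented"},
]
print(latest_activity(history)["when"])  # prints 2023-11-05
```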
For now, I am hosting fedora-active-user on GitHub. Will move it to Fedora Forge later this year. Feel free to open issues or send pull requests if you have suggestions or changes.
Current version is not the best one. It is a bit better than it was two weeks ago.
At the moment the package is present in Fedora Rawhide. I am waiting for branches for stable releases, and updates will follow.
$ fedora-active-user --user hrw
Last action on koji:
2026-05-04 built fedora-active-user-26.05.04-1.fc45
2024-09-12 built python-system-calls-6.11.0-1.fc42
2024-01-08 built python-system-calls-6.7.0-1.fc40
2023-09-18 built python-system-calls-6.6.0-1.fc40
2023-05-08 built python-system-calls-6.4.0-2.fc39
2022-08-06 built python-system-calls-5.19.0-2.fc37
2022-07-25 built python-system-calls-5.19.0-1.fc36
2022-07-25 built python-system-calls-5.19.0-1.fc37
2022-01-10 built python-system-calls-5.16.2-1.fc36
2021-11-15 built python-system-calls-5.16.0-1.fc35
Last package updates on bodhi:
2026-05-04 fedora-active-user-26.05.04-1.fc45
2024-09-12 python-system-calls-6.11.0-1.fc42
2024-01-08 python-system-calls-6.7.0-1.fc40
2023-09-18 python-system-calls-6.6.0-1.fc40
2023-05-08 python-system-calls-6.4.0-2.fc39
2022-08-06 python-system-calls-5.19.0-2.fc37
2022-07-25 python-system-calls-5.19.0-1.fc36
2022-07-25 python-system-calls-5.19.0-1.fc37
2022-01-10 python-system-calls-5.16.2-1.fc36
2021-11-15 python-system-calls-5.16.0-1.fc35
2021-11-15 python-system-calls-5.16.0-1.fc36
2021-09-21 python-system-calls-5.15.5-1.fc36
Last actions performed according to fedmsg:
2026-05-04 hrw commented on the pull-request rpms/prusa-slicer#67
2026-05-04 hrw's Badges rank changed from 272 to 260
2026-05-04 hrw was awarded the badge `Missed the Train`
2026-05-04 hrw commented on update fedora-active-user-26.05.04-1.fc45 (karma: 0)
2026-05-04 fedora-active-user-26.05.04-1.fc45 was tagged into f45 by bodhi
2026-05-04 fedora-active-user-26.05.04-1.fc45 was untagged from f45-updates-candid
2026-05-04 hrw's fedora-active-user-26.05.04-1.fc45 bodhi update has met stable te
2026-05-04 fedora-active-user-26.05.04-1.fc45 was untagged from f45-updates-testin
2026-05-04 fedora-active-user-26.05.04-1.fc45 was tagged into f45-updates-testing-
2026-05-04 fedora-active-user-26.05.04-1.fc45 was untagged from f45-signing-pendin
Last emails on Fedora mailing lists:
2026-04-29 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
2026-04-17 mjuszkiewicz@redhat.com as Marcin Juszkiewicz mailed devel@lists.fedora
Bugzilla activity (may not be the latest):
No activity found on Bugzilla
Looks like I still need to work on querying Bugzilla ;D
I'm back from my vacation, so time for another weekly recap...
Week before last I had a lovely time away in Hawaii (the Big Island). I saw volcanoes (we missed lava fountaining by like 15 minutes), lava tubes (really cool (literally) and dark), botanical gardens (unreal flowers), had a dinner/sunset cruise with history and finally a sunset/stargazing trip to the top of Mauna Kea. Super fun! Wish I had another week there to lounge on the beach. If you ever have a chance to go, take it!
I did look at my email and such the first day or so, but after that I was too busy and never took my laptop out even until I got back.
Of course the first thing Monday on getting back was that we were go for the Fedora 44 release Tuesday!
Release went pretty smoothly overall and I hope everyone enjoys the release.
Of course with the release on Tuesday, we ended our infrastructure freeze on Wednesday. For some reason this time we had a pretty big pile of pending pull requests, which I attempted to merge and deploy.
The bulk of them were moving our OpenShift applications from deploymentconfig (which is an OpenShift-specific object) to deployment (which is a k8s native object). OpenShift still supports deploymentconfig, but it will go away and it spews deprecation notices, so the sooner we get moved the better.
I ran into some problems with a few applications that had preexisting issues in staging when I went to test there. There were also some problems on some applications with selectors (where it chooses how to map a service onto a deployment). In one case (fmn) the app had two builds for two different things; one of them was on a newer api version and updated the database, but then the second one couldn't handle that. Had to update it upstream to get the db versions to match.
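For reference, the k8s-native Deployment requires an explicit selector that must agree with the pod template labels (and with any Service pointing at it), which is exactly where those mapping problems creep in; a minimal sketch with illustrative names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fmn
spec:
  replicas: 1
  selector:
    matchLabels:        # must match the pod template labels below,
      app: fmn          # and the Service's selector must agree too
  template:
    metadata:
      labels:
        app: fmn
    spec:
      containers:
        - name: fmn
          image: quay.io/example/fmn:latest
```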
Anyhow, there are only a very few left now. Looking forward to being done paying down this tech debt. :)
What weekly recap would be complete without some scraper news? :)
This time they started hitting cgit links on fedorapeople.org (where contributors can have git repos). I set up anubis there, which mostly quashed them. That did break some redirects tho, so we will need to fix that.
Scrapers have also been hitting the wiki pretty hard from time to time. It's not easy to just put that behind anubis because it's in the base fedoraproject.org domain and we don't want some things there behind it. For now we just increased resources for the backend, but we will probably have to figure out how to set up anubis there before long.

Hello travelers!
Loadouts for Genshin Impact v0.1.16 is OUT NOW with the addition of support for recently released characters like Linnea and for recently released weapons like Golden Frostbound Oath from Genshin Impact Luna VI or v6.5 Phase 2. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.
Besides its availability as a repository package on PyPI and as archived binaries built with PyInstaller, Loadouts for Genshin Impact is now available as an installable package on Fedora Linux. Travelers using Fedora Linux 42 and above can install the package on their operating system by executing the following command.
$ sudo dnf install gi-loadouts --assumeyes --setopt=install_weak_deps=False
SEST assets alignment issue by @gridhead in #524
None by @gridhead in #526
DGFT assets alignment issue by @gridhead in #525
Golden Frostbound Oath by @gridhead in #533
Linnea to the roster by @gridhead in #532
One character has debuted in this version release.
Linnea is a bow-wielding Geo character of five-star quality.


Linnea - Workspace and Results
One weapon has debuted in this version release.

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.
With an extensive suite of over 1558 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.
The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.
All rights to Genshin Impact assets used in this project are reserved by MiHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.
Quite a few articles about AI costs getting out of control. For something different I recommend the podcast about the origins of Rose Bikes.
Do I belong in tech anymore? - I can relate to some of this.
GNOME is once again participating in GSoC. This year, we have 6 contributors working on adding Debug Adapter Protocol support to GJS, incorporating vocab-style puzzles into GNOME Crosswords, creating a native GTK4/Rust rewrite of the Pitivi timeline ruler, porting gitg to GTK4, implementing app uninstallation in the GNOME Shell app grid, and enabling recovery from GPU resets.
As we onboard the contributors, we will be adding them to Planet GNOME, where you can get to know them better and follow their project updates.
GSoC is a great opportunity to welcome new people into our project. Please help them get started and make them feel at home in our community!
Special thanks to our community mentors, who are donating their time and energy to help welcome and guide our new contributors: Philip Chimento, Jonathan Blandford, Yatin, Alex Băluț, Alberto Fanjul, Adrian Vovk, Jonas Ådahl, and Robert Mader.
For more information, visit https://summerofcode.withgoogle.com/programs/2026/organizations/gnome-foundation
A new upstream version of Libblockdev was released on Monday (April 27th) – 3.5.0. This release brings both new functions and a large number of bug fixes.
Two new functions have been added to the btrfs plugin.
bd_btrfs_delete_subvolume_recursive can be used to remove a btrfs subvolume and all its children in one call. This was prompted by an issue reported for blivet-gui requesting this feature. While we could do this manually, using the new btrfs --recursive option can noticeably speed up this operation.
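The equivalent CLI operation, assuming a current btrfs-progs with the recursive option the post refers to (the path is a placeholder):

```shell
# delete a subvolume and all nested subvolumes in one call
btrfs subvolume delete --recursive /mnt/pool/home-snapshot
```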
bd_btrfs_device_stats can be used to get device statistics for a btrfs volume. This is equivalent to the btrfs device stats command. In the future we want to bring this functionality to UDisks and ultimately to Cockpit, which requested this feature.
All “open” functions in the crypto plugin were extended to allow passing the activation flags to libcryptsetup. This for example allows enabling discard when opening encrypted devices. These new functions will be used in the next version of UDisks to correctly support options specified in /etc/crypttab.
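As an illustration, the crypttab entry this is meant to support might look like the following (the device name and UUID are placeholders):

```
# /etc/crypttab: the fourth field carries options such as "discard",
# which maps to an activation flag when the device is opened
luks-data  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  discard
```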
bd_nvme_find_namespaces_for_ctrl is a new utility function to find all namespaces associated with a given controller. This mirrors the existing bd_nvme_find_ctrls_for_ns function but performs the reverse operation.
The biggest chunk of the changes in this release are various bug fixes. As described in my previous post, we started experimenting with using LLMs for code review, and for libblockdev the result is over a hundred issues fixed: mostly memory and resource leaks, logical issues in error paths, and various issues in tests and documentation.
At the time of writing, libblockdev 3.5.0 is already available in Fedora Rawhide and Debian Unstable and on its way to Fedora 44. The latest builds for testing and bleeding-edge enthusiasts can be always found in our Copr repository.
I read more in this stretch than made it into this post. As I sorted it, I realized the leftovers would turn into snark about Kash Patel, a pointer to the Sam Altman bio/promo piece, or an essay about Sugey Amaya as a case study in systems gone bad. You don’t need me for that.
Instead, here are some pieces on the Czech Republic, technology, being American, and more.
Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.
Minimum Decent Wage in The Czech Republic Was CZK 48,336 in 2025, Says Study
The way Europe considers poverty is refreshing. There is a concept called ‘social deprivation’ that measures whether individuals and families can participate in society. The score is based on the ability to do things like afford a week’s vacation away from home. The NGO in this article built a “‘minimum decent wage’ for full-time work in the Czech Republic, [which is] enough to cover the needs of an adult with a child, leisure time and some savings.” This calculation came out to CZK 48,336 per month, which compares very favorably to the average gross wage of CZK 49,215 per month. On the face of it this is great. However, the median wage was only CZK 45,523 and the statutory minimum is CZK 20,000. There is still work to do.
Sneaker Company Allbirds Plans to Pivot to A.I. Yes, A.I.
When I first moved to the Czech Republic, I noticed that business names were often ‘odd.’ It is common to see the legal name of a business printed in small print on a receipt or even on the window or front door of the shop. Many were just names and some seemed completely unrelated to the store I was in. As I understand it, the names are a vestige of the way non-corporations are organized here. Briefly, you have to “wear your own skin” into the market place. So if you are forming a single-person entity (self-employment and the like) your name is the name of the entity. Getting a DBA (Doing Business As) doesn’t remove your name from the business, instead it just tacks on more words.
But the unrelated names had a much more interesting backstory. It was, as I understood it, common to start a business by buying an existing one. This was done because the established business had a track record with all the banks and government ministries. It was already registered and on the books. You didn’t have to fight the red tape, you just assumed the work of someone who had already done it. It also meant that your restaurant might be called, “Alpha Shoes” (a fictitious example). Thankfully the red tape has lessened and I understand this practice has died out.
All of this is to say that I don’t plan to buy my AI inferencing from a shoe company.
Brno Will Say Goodbye To Long-Serving Tatra K2 Trams This Weekend
all three of the remaining Brno K2 cars (including the retro car and Party Šalina) in operation on the Červinkova–Česká–Maloměřický most route. The program will culminate with a convoy of all three cars through the city centre, with a photoshoot on náměstí Svobody at 5pm.
Sadly the Party Tram is no more, but we still have the Cafe Tram!
An article about why Rust may be a great choice from the non-oxidation crowd.
Was it not bad enough that we reinvented the wheel in the first place? We had to reimplement that reinvented wheel?
The Conversion Process: Stealing all your furniture and replacing it with exact duplicates
The peril of laziness lost | The Observation Deck
I understand why the word ‘lazy’ is used here. But we are not doing ourselves any favors by redefining words away from their connotative meanings to prove points. The primary officially stated reason we had to invent ‘open source’ was because ‘free software’ is a language conflict that gives us the even more tortured ‘FLOSS.’ Let’s do ourselves favors and be direct. The criticism of unchecked LLM software bloat is valid. Don’t hide that great point behind something most people don’t want to be called.
Stop building agents, start harnessing Goose
The harness is what you add to the agent. The AGENTS.md files. The skills. The custom MCP tools. The hand-crafted linters. The system prompts. The recipes and subrecipes. The extension configurations. The provider choices. The permission policies.
I loved this definition of harness. It fits so well what most people are trying to do in their workflows, just like we do with our operating systems and desktops. I find it highly ironic that it comes out via a write up on a project dedicated to building … agents and harness frames.
Changes in the system prompt between Claude Opus 4.6 and 4.7
Claude 4.6 had a section specifically clarifying that “Donald Trump is the current president of the United States and was inaugurated on January 20, 2025”, because without that the model’s knowledge cut-off date combined with its previous knowledge that Trump falsely claimed to win the 2020 election meant it would deny he was the president. That language is gone for 4.7, reflecting the model’s new reliable knowledge cut-off date of January 2026.
Claude avoids saying “genuinely”, “honestly”, or “straightforward”.
In the context of our current world, somehow these things feel right and wrong at the same time.
I Don’t Care for Gnome - woltman.com
I recently had to use Gnome after a long time away and I couldn’t put my finger on why I didn’t like it. I remembered happily using it for years and now it was a mess. This essay did a really good job of putting my thoughts into words. As a bonus, his screenshots include files named ‘farts’ which is my go to placeholder and the bemusing “[m]aybe someday our children will live in a post-files world, but that is not our reality.”
The title of the book is Everything But The Burden, as in, non-black people want to take everything from us except the weight of our history.
I occasionally watch an African American content creator on Instagram (@bmotheprince - I don’t know how to link to Instagram and I use someone else’s account so … get off my lawn! :P). His humor is mostly about the generational divide in the US, but it is an occasional window into African American culture. I also periodically listen to F.D Signifier for US cultural deep dives and … again … insights into African American culture. So I was drawn to this article about ‘unc’ which I had derived a definition for from the way these two men use it. I am glad to say I was mostly right. That said, damn that book title hits hard.
They Went Abroad to Save Money. Moving Back Seems Unaffordable.
This article set me off so badly I debated penning an angry letter to the editor, but had a coffee instead. They profiled three people: a woman who relocated to Tbilisi, which on the face of it is an interesting choice until you realize she is actually from there and had immigrated to America; a digital nomad trying to skip around the world and go through spend/save cycles in different economies; and a man who moved to Mexico with literally no plan and wound up teaching English online and writing for a content mill. He has since returned home broke with the intention of pivoting into insurance.
Only one of these people is even trying to save money. Other than the guy who never found a career path, none seem particularly focused on integrating with where they live; they are instead focused on having a luxury lifestyle. The article jumps straight to the idea that they are avoiding US income tax (not every article needs to explain that in detail, but at least don’t shorthand it) and never acknowledges any local costs. They’ve chosen not to avail themselves of real retirement savings options or, in most cases, local healthcare options by refusing to be more than “just visiting.” Additionally, many digital nomads seem to believe they never have local tax obligations (they do), and while the article doesn’t confirm our digital nomad is doing that, the coda on taxation implies it in this case too.
I didn’t leave the US to achieve cheaper living costs. I also didn’t leave over politics. I have worked hard at integrating into Czech society, as much as my laziness about learning the language allows. I participate in the local tax system and the local healthcare system. My quality of life is objectively better than I enjoyed in the US, and I am currently, as I was previously, in tech and earn well. Not every American abroad wants to go back to the US, and the US systems that make that hard are failing Americans who didn’t leave, not solely Americans who did. I am trying hard not to write an essay here and instead focus on my coffee.
The Mystery in the Medicine Cabinet
Thankfully our household doesn’t get sick too often, but I can promise you I always have to Google, ‘what is Paracetamol’ and ‘what is normal human body temperature in Celsius.’
The Only 3 Knives You Actually Need in Your Kitchen
Let’s try three pennies of an angle away from the honing rod
Anything but an internationally recognized measurement system.
‘There is no news’: What a change from 1930 to today
We could do slow news again. I found this article via a game of Bracket City and it reminded me why I try not to read headlines too frequently. News needs more time to develop and marinate than the 24-second cycle gives us. I used to subscribe to The Week, which publishes a roundup of major events after they have had time to gel. Having a child has made time more precious so I let my subscription lapse, but I still recommend the format if you want your life back from the news cycle.
They don’t produce many articles, which is actually great because what they write is long and goes deeper into a topic than most people would desire, but I strongly recommend putting Low Tech Magazine into your RSS reader. This article was one I thought would be a dud, but it turns out that hand carts are way more interesting than I ever imagined. Did you know they had sails …
Quiz: Can You Tell Real British Insults From Fakes?
My score: 2 out of 8. I mostly got the wrong definition for these British insults. According to the test, I am a plonker. That’ll be “Unc Plonker” to you.
We are happy to announce the general availability of Fedora Asahi Remix 44. This release brings Fedora Linux 44 to Apple Silicon Macs.
Fedora Asahi Remix is developed in close collaboration with the Fedora Asahi SIG and the Asahi Linux project. This release incorporates all of the exciting improvements brought by Fedora Linux 44. Fedora Asahi Remix 44 also retires our vendored Mesa and virglrenderer packages. Users who have not already manually done so will be automatically transitioned to the upstream Mesa and virglrenderer packages provided by the upstream Fedora repositories.
Fedora Asahi Remix offers KDE Plasma 6.6 as our flagship desktop experience, with all of the new and exciting features brought by Fedora KDE Plasma Desktop 44. Plasma Setup replaces the previous Calamares-based setup wizard, providing a Plasma-native experience for user account creation and system setup. Additionally, Plasma Login Manager is now the default greeter and session manager, replacing SDDM. This applies to new installs only; users upgrading from previous versions of Fedora Asahi Remix will not have their configuration changed.
A GNOME variant is also available, featuring GNOME 50, with both desktop variants matching what Fedora Linux offers. Fedora Asahi Remix also provides a Fedora Server variant for server workloads and other types of headless deployments. Finally, we offer a Minimal image for users that wish to build their own experience from the ground up.
You can install Fedora Asahi Remix today by following our installation guide. Existing systems running Fedora Asahi Remix 42 or 43 can be updated following the usual Fedora upgrade process. Upgrades via GNOME’s Software application are unfortunately not supported; either KDE’s Plasma Discover or DNF’s System Upgrade command must be used.
Please report any Remix-specific issues in our tracker, or reach out in our Discourse forum or our Matrix room for user support.
Fedora has released Fedora KDE Plasma Desktop Edition 44 to the public.
The Fedora KDE Plasma Desktop Edition is suitable for many needs. It combines the reliable and trusted Fedora Linux base with the KDE Plasma Desktop environment. It provides a selection of KDE applications that are simple by default, but powerful when needed.
The KDE community makes your life easier with the latest release of KDE Plasma. It builds upon the foundations of Plasma 6 to provide a seamless, friendly, and familiar experience.
Fedora KDE 44 ships with Plasma 6.6.4 featuring:
There’s so much more detail available in the Plasma 6.6 release announcement.
Beyond just the updates included in KDE Plasma 6.6, there are some major new features with Fedora KDE on Fedora Linux 44.
Some broader changes in Fedora Linux also directly impact Fedora KDE Plasma Desktop Edition, notably:
The Fedora KDE Plasma Desktop 44 edition is fully supported within the Fedora Ready program. Fedora KDE is actively engaging with hardware vendors to support Fedora KDE Plasma Desktop on their devices.
We are pleased to announce that Star Labs offers preinstalled Fedora KDE Plasma Desktop as an option for their portfolio of devices. As makers of computers with an open source ethos embedded into the core of their products with even open source firmware powered by Coreboot, they share many of the same principles the Fedora community values. This is a very exciting moment for Fedora KDE and we look forward to deepening our collaboration with Fedora Ready participants and extending to other vendors. If you are a vendor potentially interested in Fedora Ready, please reach out!
The Fedora KDE SIG hopes that you’ll find the Fedora KDE Plasma Desktop 44 to be a wonderful experience. When you’re ready to try it, click here for download links and verification instructions. If you’d like to learn more, check out the Fedora KDE Plasma Desktop website.
I’m happy to announce that we have sealed bootable container images ready for testing for the Fedora Atomic Desktops!
Sealed bootable container images include all the components needed to create a fully verified boot chain, from the firmware to the operating system composefs image. This relies on Secure Boot and thus only supports system booting with UEFI on x86_64 & aarch64.
The components are:
Both systemd-boot and the UKI are signed for Secure Boot. The images are test images so the components are not signed with the official keys from Fedora.
The main direct benefit of this support is that we will be able to enable passwordless disk unlocking using the TPM in a way that is reasonably secure by default.
See the instructions at github.com/travier/fedora-atomic-desktops-sealed on how to give the pre-built container and disk images a try and how to build your own.
We welcome testing and feedback! Please see the list of known issues and report new issues at github.com/travier/fedora-atomic-desktops-sealed. We’ll redirect them as needed to the right upstream projects.
Beware, those are testing images. The root account does not have a password set and sshd is enabled, by default, to make debugging easier. The UKI and systemd-boot are signed for Secure Boot but, since those are test images, they are not signed with the official keys from Fedora. Don’t use those images in production.
If you want to know more about how sealed images work (i.e. how we make bootable containers, UKI and composefs work together to create a verified boot chain), see the following presentations and documentation:
Thanks to all the contributors that made this possible, notably (but not exhaustively) from the following projects: bootc & bcvk, composefs & composefs-rs, chunkah, podman & buildah, and systemd.
Fedora Linux 44 has been released!
So, let’s see what is included in this new release for the Fedora Atomic Desktop variants (Silverblue, Kinoite, Sway Atomic, Budgie Atomic and COSMIC Atomic).
We have moved the cross-variants issue tracker to the new Fedora forge. This is the best place to file issues that impact all variants or to coordinate work across all of them. If you have issues specific to a given desktop environment, we usually prefer to track them in the respective SIG's tracker. These are available on the README for the atomic-desktops organization.
The unified documentation for all Atomic Desktops is finally live! Unfortunately the translations have not been migrated, so we will need help re-translating everything once the translation setup is ready with the new forge. It should be mostly copy/paste from the previous docs, and this time we will only have to translate the docs once, not for every (new) variant.
See the tracking issue atomic-desktops#10.
FUSE version 2 has been deprecated and unmaintained for a while so we have removed it from the images. In practice, this means two things:
See the Fedora Change and the tracking issue atomic-desktops#50. The implications are detailed below.
Some AppImages are still using an old AppImage runtime that relies on FUSE 2 libraries being available on the host. See the Discussion thread for examples on how to check the runtime of an AppImage.
If some of your AppImages do not work on Fedora Atomic Desktops 44, we recommend:
KDE upstream no longer recommends using the EncFS or CryFS backends for Plasma Vaults, notably because they rely on the FUSE 2 libraries. If you are using one of those backends, you should migrate your data to a new vault using the only maintained backend (gocryptfs). Ideally this should happen before the update to Fedora Linux 44. If you have already updated to Fedora Linux 44 and need access to your data, you can layer the needed packages (cryfs or fuse-encfs) using rpm-ostree install <package>, then migrate your data and finally reset the layers with rpm-ostree reset.
Support for the legacy pkla Polkit rules format has been removed. It is unlikely that you were relying on support for those rules as most of the ecosystem has moved on to the new JavaScript based format.
See the Fedora Change and the tracking issue atomic-desktops#102.
Fedora Silverblue comes with the latest GNOME 50 release. For more details about the changes that occur alongside GNOME 50, see What’s New in Fedora Workstation 44 on the Fedora Magazine.
Fedora Kinoite ships with Plasma 6.6, Frameworks 6.24 and Gear 25.12.
See also What’s New in Fedora KDE Plasma Desktop 44 in the Fedora Magazine.
The brand new Plasma Login Manager replaces SDDM to provide a more integrated experience with systemd and the KDE Plasma session.
Thanks to the new Plasma Setup, it is now possible to install the system with Anaconda with minimal configuration and then complete the installation on the first boot by creating a new user and selecting the timezone. This is great when you want to install Fedora Kinoite on a computer and don’t want to setup a user in advance.
See the Fedora Change.
Nothing specific for this release.
Fedora Budgie Atomic comes with the latest 10.10.2 Budgie release. This release brings Wayland support to Budgie Atomic. See the 10.10 release announcement for more details.
Fedora COSMIC Atomic comes with the latest 1.0.8 release of the COSMIC desktop. This is now considered stable.
Our friends in the Universal Blue project (Bazzite, Bluefin, Aurora) have prepared the update to Fedora Linux 44. Look for upcoming announcements in their Discourse.
As always, I heavily recommend checking them out, especially if you feel like some things are missing from the Fedora Atomic Desktops and you depend on them (NVIDIA drivers, extra media codecs, out-of-tree kernel drivers, etc.).
If you have an interest in contributing to Fedora Atomic Desktops, here are some bugs that we will have to fix in the short term. We would greatly appreciate help with:
Fixing root mount options (atomic-desktops#72): This is a long standing and mostly invisible bug that impacts performance.
Moving away from nss-altfiles (atomic-desktops#108): This is another long standing source of issues that new users regularly face.
Sealed images are now ready for testing! See the other article for all the details.
A lot of work is happening to make the transition to Bootable Containers as smooth as possible for our existing users. You can look at the road map for this transition at atomic-desktops#26.
One of the tasks is to move away from our unmaintained installation ISO building scripts to the new image-builder tooling. This is planned for Fedora Linux 45 for the ostree variants and support for Bootable Container will follow right after.
Another task is to start building the Fedora Atomic Desktops Bootable Container images using the Fedora Konflux instance.
We are looking for contributors to help us make the Fedora Atomic Desktops the best experience for Fedora users.
Fedora Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to rebase to Fedora Linux 44 on your Fedora Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.
Prior to actually doing the rebase to Fedora Linux 44, you should apply any pending updates. Enter the following in the terminal:
$ rpm-ostree update
or install updates through GNOME Software and reboot.
rpm-ostree is the underlying atomic technology that all the Fedora Atomic Desktops use. The techniques described here for Silverblue will apply to all of them with proper modifications for the appropriate desktop.
GNOME Software shows you that there is a new version of Fedora Linux available on the Updates screen.
The first thing to do is to download the new image, so select the Download button. This will take some time. When it is done, you will see that the update is ready to install.
Select the Restart & Upgrade button. This step will take only a few moments and the computer will restart when the update has completed. After the restart you will end up in a new and shiny release of Fedora Linux 44. Easy, isn’t it?
If you prefer to do everything in a terminal, then this part of the guide is for you.
Rebasing to Fedora Linux 44 using the terminal is easy. First, check if the 44 branch is available:
$ ostree remote refs fedora
You should see the following in the output:
fedora:fedora/44/x86_64/silverblue
If you want to pin the current deployment (meaning that this deployment will stay as an option in GRUB until you remove it), you can do this by running this command:
# 0 is entry position in rpm-ostree status
$ sudo ostree admin pin 0
To remove the pinned deployment use the following command:
# 2 is entry position in rpm-ostree status
$ sudo ostree admin pin --unpin 2
Next, rebase your system to the Fedora Linux 44 branch.
$ rpm-ostree rebase fedora:fedora/44/x86_64/silverblue
Finally, the last thing to do is restart your computer and boot to Fedora Linux 44.
If anything bad happens (for instance, if you can’t boot to Fedora Linux 44 at all) it’s easy to go back. At boot time, pick the entry in the GRUB menu for the version prior to Fedora Linux 44 and your system will start in that previous version rather than Fedora Linux 44. If you don’t see the GRUB menu, try to press ESC during boot. To make the change to the previous version permanent, use the following command:
$ rpm-ostree rollback
That’s it. Now you know how to rebase Fedora Silverblue to Fedora Linux 44 and roll back. So why not do it today?
Because there are similar questions in the comments of each post about rebasing to a newer version of Silverblue, I will try to answer them in this section.
Question: Can I skip versions during a rebase of Fedora Linux? For example from Fedora Silverblue 41 to Fedora Silverblue 44?
Answer: Although it could sometimes be possible to skip versions during rebase, it is not recommended. You should always update to one version prior (41->42->43->44 for example) to avoid unnecessary errors.
Question: I have rpm-fusion layered and I get errors during rebase. How should I do the rebase?
Answer: If you have rpm-fusion layered on your Silverblue installation, you should do the following before rebase:
$ rpm-ostree update --uninstall rpmfusion-free-release --uninstall rpmfusion-nonfree-release --install rpmfusion-free-release --install rpmfusion-nonfree-release
After doing this you can follow the guide in this blog post.
Question: Could this guide be used for other ostree editions (Fedora Atomic Desktops) as well like Kinoite, Sericea (Sway Atomic), Onyx (Budgie Atomic),…?
Answer: Yes, you can follow the Rebasing using the terminal part of this guide for every Fedora Atomic Desktop. Just use the corresponding branch. For example, for Kinoite use fedora:fedora/44/x86_64/kinoite.
This article highlights a few noteworthy changes in the latest release of Fedora Workstation that we think you will love. Upgrade today from the official website, or upgrade your existing install using GNOME Software or through the terminal with dnf system-upgrade.
Fedora Linux 44 Workstation ships with the latest GNOME release, GNOME 50. This comes with a long list of refinements to your desktop, including everything from accessibility, to color management and remote desktop.
As part of the Digital Wellbeing initiative, new native Parental Controls let you set screen time limits and bedtimes directly from Settings.
Many of the applications that are installed by default on the Fedora Workstation have also seen improvements, from the Document Viewer to the File Manager and the Calendar.
To learn more about these and other changes, you can read the GNOME 50 release notes.
Be sure to check out the Fedora Linux 44 Change Set wiki for even more details about all the features and changes that went into Fedora Linux 44. Use the Fedora Discussion forum or Fedora’s Matrix chat server if you want to converse with the Fedora community about this new release!
I’m excited to announce that Fedora Linux 44 is here! Keep reading to discover highlights of Fedora Linux 44, or if you are ready, just jump right in and give Fedora Linux 44 a try!
Thank you and congrats to everyone who has contributed to this release. And thanks to everyone who showed up for the virtual release party last Friday. We celebrated a little early this year, just after the go/no-go meeting made the release official. If you weren’t able to join us live, you can watch the recording and hear about some of the great work from the contributors involved.
If you have an existing system, Upgrading Fedora Linux to a New Release is easy. In most cases, it’s not very different from just rebooting for regular updates, except you’ll have a little more time to grab a coffee.
If this is your first time running Fedora Linux, or if you just want to start fresh with Fedora, download the install media for our flagship Editions (Workstation, KDE Plasma Desktop, Cloud, Server, CoreOS, IoT), or one of our Atomic Desktops (Silverblue, Kinoite, Cosmic, Budgie, Sway), or alternate desktop options (like Cinnamon, Xfce, Sway, or others).
As usual with Fedora Linux, there are just too many individual changes and improvements to go over in detail. You’ll want to take a look at the release notes for that.
For those of you installing fresh Fedora Linux 44 Spins, you may notice a change in how Anaconda handles network devices. Anaconda now only creates network profiles for devices configured during installation (by boot options, kickstart, or interactively in UI) instead of providing default profiles for all devices. This change will simplify post-installation network configuration for users who need to customize after installation.
Fedora Linux 44 Workstation ships with the latest GNOME release, GNOME 50. This comes with a long list of refinements to your desktop, including everything from accessibility to color management and remote desktop. Many of the applications that are installed by default on Fedora Workstation have also seen improvements, from Document Viewer to File Manager and Calendar. To learn more about these and other changes, you can read the GNOME 50 release notes.
KDE Plasma Desktop: If you are a KDE user, you should also notice a couple of very obvious changes. Fedora KDE Plasma Desktop 44 is based on the latest Plasma 6.6, which includes the new Plasma Login Manager and Plasma Setup to provide a more cohesive and integrated experience from the moment the computer is powered on for the first time. The installation process has been simplified, enabling you to easily set up Fedora KDE Plasma Desktop for a computer for a friend or a loved one.
Beyond the user-visible changes, there are some important plumbing changes users should be aware of.
The loading time of OpenSSL has been improved by making use of directory-hash support for ca-certificates. This improvement required changes to where some certificate bundles are stored on the filesystem. You can read the specific Change details for more information.
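For a feel of the underlying mechanism (this is a generic illustration of OpenSSL directory-hash lookup, not Fedora’s packaging change itself): `openssl rehash` builds `<hash>.0` symlinks in a directory of PEM files, so certificates are resolved by hashed filename instead of scanning one monolithic bundle.

```shell
# Create a throwaway directory with one self-signed demo certificate.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$certdir/demo.key" -out "$certdir/demo.pem" -days 1 2>/dev/null
# Build the <hash>.0 symlinks that directory-hash lookup relies on.
openssl rehash "$certdir"
ls "$certdir"   # now also contains a <hash>.0 symlink pointing at demo.pem
```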
MariaDB packages now use a versioned package layout, which allows Fedora to deliver both mariadb-10.11 and mariadb-11.8 to users. The “distribution default” unversioned MariaDB packages now install the 11.8 version in Fedora Linux 44. Users upgrading to Fedora Linux 44 won’t notice the change in the default. New users installing MariaDB for the first time will now get 11.8 by default, unless they specify a version.
The NTSYNC kernel module is now enabled for select packages via package recommendation (notably Wine and Steam), which can improve compatibility and performance when running Windows applications, especially games. When a package that recommends wine-ntsync is installed, NTSYNC is configured automatically on subsequent boots, so users don’t have to enable it manually.
The /boot partition has been replaced with a Btrfs subvolume for Fedora Cloud images that support it. This results in better space utilization and smaller images.
If you run into a problem, visit our Ask Fedora user support forum. This forum includes a category where we collect common issues and solutions or work-arounds.
Drop by our “virtual watercooler” on Fedora Discussion and join a conversation, share something interesting, and introduce yourself. We’re always glad to see new people!
Although OpenSSL 4.0 was released just two weeks ago, the syslog-ng project has already received a GitHub issue complaining that we do not support it. So, before we allocate too much effort to it: what should we expect?
OpenSSL 4.0 was announced on April 14: https://openssl-library.org/post/2026-04-14-openssl-40-final-release/ However, this announcement mentions that it is NOT a long-term support (LTS) release.
This raises the question: if it is not an LTS release, can we stay on version 3.x and skip 4.x altogether? And when will Linux distributions start using it? Looking at Repology, there are already a few places where OpenSSL 4.0 is available. This includes Gentoo, the community where the GitHub issue originated, and also various FreeBSD ports. The current list is available at: https://repology.org/project/openssl/versions
Fedora is planning to use OpenSSL 4.0 as default starting from the next release: https://fedoraproject.org/wiki/Changes/OpenSSL40 However, OpenSSL 3.x will most likely stay supported for backwards compatibility.
I am also curious if there are any other projects which have added support for OpenSSL 4.0. If so, then what are your experiences? Was porting your code to use OpenSSL 4.0 difficult?
I am all in for supporting the latest technologies, but currently, even if we have an open request for OpenSSL 4.0 support, I do not feel that I have enough information to prioritize its development.
Share your thoughts with us in this syslog-ng GitHub discussion: https://github.com/syslog-ng/syslog-ng/discussions/5685 or reach out to me on Twitter / Mastodon / LinkedIn.

Originally published at https://www.syslog-ng.com/community/b/blog/posts/support-for-openssl-4-0
Two and a half reasons:
Plus, segfaults don’t lie – either the system crashed or it didn’t, and those are the issues that actually deserve the ticket.
A pretty straightforward bash script, vibing with AI-generated chaos.
It scans /usr/bin and /usr/sbin, parses each binary’s --help output for flags (--whatever, -x, you know the drill), and only logs actual crashes – SIGSEGV, SIGABRT, SIGILL, SIGBUS. Exit code 1 from bad args gets ignored.
Core logic looks like this:
# Extract candidate flags from --help output
flags=$(timeout 3s "$bin" --help 2>&1 |
grep -aoE -e '--[a-zA-Z0-9_-]+' -e '-[a-zA-Z]' |
grep -avE 'help|version|usage')
# Pick random flags (1-4 of them)
chosen=$(echo "$flags" | shuf -n $((1 + RANDOM % 4)))
# Add a random test file (the full file list is truncated in the post)
fuzz_file="$WORKSPACE/$(shuf -n1 -e bad.json random.bin longline.txt)"
# Run it
timeout 5s "$bin" $chosen $fuzz_file
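The --help parsing step above can be exercised on canned output instead of a real binary; this is a hedged demo of the same grep pipeline, not part of the actual test source:

```shell
# Canned --help text standing in for a real binary's output.
help_text='Usage: demo [options]
  -x          do x
  --verbose   talk more
  --help      show this help'
# Same extraction pipeline as the script: pull out flags, drop the noise.
flags=$(printf '%s\n' "$help_text" |
  grep -aoE -e '--[a-zA-Z0-9_-]+' -e '-[a-zA-Z]' |
  grep -avE 'help|version|usage')
printf '%s\n' "$flags"   # -x and --verbose survive; --help is filtered out
```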
The script skips the obvious no-go zones – package managers, rm, network tools, editors. I’m glad to see the script finish with the machine still answering.
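The crash-only filter described earlier can be sketched as an exit-status check; this is my own illustration (shells report a fatal signal as 128 plus the signal number), not the script’s literal code:

```shell
# Only real crash signals count; exit code 1 from bad arguments is ignored.
is_crash() {
  case "$1" in
    132|134|135|139) return 0 ;;  # 128 + SIGILL(4), SIGABRT(6), SIGBUS(7), SIGSEGV(11)
    *)               return 1 ;;
  esac
}

is_crash 139 && echo "log it"    # a segfault gets logged
is_crash 1   || echo "skip it"   # an ordinary bad-args failure is ignored
```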
Source: run-them-all
Run via Testing Farm (how-to):
testing-farm-public request --test-type fmf \
--git-url https://forge.fedoraproject.org/quality/fmf-tests.git \
--git-ref main \
--compose Fedora-Cloud-Base-AmazonEC2.x86_64-44-1.3 \
--arch x86_64 \
--test run-them-all \
--context "force=yes" \
--plan /plans/all
Run locally: just use the try-all.sh script from the test, no extra requirements needed; run:
curl -sSLO https://forge.fedoraproject.org/quality/fmf-tests/raw/branch/main/system-in-use/run-them-all/try-all-binaries-help-options.sh && bash try-all-binaries-help-options.sh
My results: Testing Farm Artifacts (Fedora 44 RC compose 1.3, ~950 binaries tested)
Among the crashers: grub2-mkrescue (bootloader utilities), perl (half the system depends on it), and eqn from groff (man pages break without it).
What coredumpctl caught:
TIME PID UID GID SIG COREFILE EXE SIZE
Mon 2026-04-20 12:08:09 UTC 52378 0 0 SIGABRT present /usr/bin/edgepaint 80.7K
Mon 2026-04-20 12:08:22 UTC 58926 0 0 SIGSEGV present /usr/bin/eqn 57.8K
Mon 2026-04-20 12:08:41 UTC 77687 0 0 SIGSEGV present /usr/bin/gdbm_dump 23.9K
Mon 2026-04-20 12:09:01 UTC 97901 0 0 SIGSEGV present /usr/bin/grub2-mkrescue 64.7K
Mon 2026-04-20 12:09:08 UTC 106904 0 0 SIGSEGV present /usr/bin/gtshapprox 48.3K
Mon 2026-04-20 12:10:15 UTC 161697 0 0 SIGABRT present /usr/bin/perl5.42.1 94K
Run it on your side to see the reproducers in action. I’m curious to hear your thoughts on these major findings.
MY CRASHES:
COMMAND: /usr/bin/edgepaint -s -o --random_seed --angle --lightness -v --accuracy --share_endpoint
COMMAND: /usr/bin/edgepaint -v -s --accuracy --color_scheme -o /root/fuzz_lab/empty.dat
COMMAND: /usr/bin/edgepaint --share_endpoint -o --angle -v -s --random_seed --accuracy --color_scheme -
COMMAND: /usr/bin/edgepaint -s --lightness --share_endpoint -o -v --angle --random_seed
COMMAND: /usr/bin/efibootdump --guid -g -f /root/fuzz_lab/large.dat
COMMAND: /usr/bin/eqn -C -f -M -d -v -m -T -s /root/fuzz_lab/fake.png
COMMAND: /usr/bin/gdbm_dump --format -
COMMAND: /usr/bin/grub2-file --is-x86-knetbsd /root/fuzz_lab/longline.txt
COMMAND: /usr/bin/grub2-file --is-x86-knetbsd /root/fuzz_lab/bad.json
COMMAND: /usr/bin/gtshapprox --flat -c /root/fuzz_lab/fake.jpg
COMMAND: /usr/bin/gtshapprox -v -f --verbose -c /root/fuzz_lab/cmd.txt
COMMAND: /usr/bin/gtshapprox -n -h --flat -l --closed -v /root/fuzz_lab/gzip.dat
COMMAND: /usr/bin/gtshapprox --flat --number --log --cost --closed -
COMMAND: /usr/bin/gtshapprox --log -l -f --keep -
COMMAND: /usr/bin/mkfs.xfs -q /root/fuzz_lab/bad_utf8.txt
COMMAND: /usr/bin/mkfs.xfs -K -L -m /root/fuzz_lab/paths.txt
COMMAND: /usr/bin/perl -V -F -T -S -u -p -f -E -W -I /root/fuzz_lab/bad.json
COMMAND: /usr/bin/perl -F -e -n -a -l -u -c -
COMMAND: /usr/bin/perl -S -u -s -E -I -l
COMMAND: /usr/bin/perl -s -p -n -u -D -E -d
COMMAND: /usr/bin/tree -P --filelimit --matchdirs --hyperlink -i /root/fuzz_lab/paths.txt
Look, these are edge cases. Nobody’s actually running edgepaint --wtf malformed.json in prod. But segfaults are segfaults – the binary should bail with “invalid option” or “bad input”, not dump core.
So I’ve got a pile of crashes. Some in critical components. All reproducible.
File bugs for all of them? That’s a lot of BZ tickets for “yes hm this crashes if you feed it random garbage with weird flags”. Developers have better things to do.
Ignore them? They’re real bugs. And some of these are in grub2 and perl – not exactly throwaway packages.
Still figuring that out.
My Dilemma During Testing on Fedora 44 RC compose 1.X
The post Throwing Random Arguments at System Binaries: Real Segfaults vs. Ticket Noise appeared first on Fedora Community Blog.
Some time ago I used a KDE feature called “Run a command” that executes when an event is triggered. For me it fired when a calendar event came due, and it used Piper TTS to read the event to me out loud. A small popup and a pling don’t work for me.
I tried to get the feature back into KDE, but the merge request isn’t going anywhere and nobody gives details on how to implement it correctly, so I wrote Sigrun. It is named after a Norse Valkyrie and is short for Signal Run.
It is a systemd service running as a user and listening for DBus signals. Once it sees a signal that matches a configured rule, it runs that rule’s command. The desktop in use doesn’t matter.
Here is the rule that reads my calendar reminders aloud via kde-tts.py:
[[rule]]
name = "calendar-tts"
[rule.event]
type = "notification"
[rule.filter]
app_name = "kalendarac"
summary = "Meeting.*"
[rule.filter.hints]
"x-kde-eventId" = "reminder"
[rule.action]
command = "/usr/local/bin/kde-tts.py"
args = ["-t", "{summary}", "-d", "{body}"]
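For completeness, a hypothetical systemd user unit to run it; the unit name and binary path are my assumptions, not from the project:

```ini
# ~/.config/systemd/user/sigrun.service -- illustrative only
[Unit]
Description=Sigrun: run commands on matching DBus signals

[Service]
ExecStart=/usr/local/bin/sigrun
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now sigrun.service.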
While working on the new git signing feature for tumpa-cli I noticed that some of the commits could not be verified. For a moment I freaked out and then thought it must be a problem in my code. But I could not dig deep enough on my own. Opus 4.7 helped me find the exact commit in git's history and a reproducer. I reported the issue to the maintainers and they are working on a fix.
The byte sequence \xc2\xa7, aka §, was the cause for me.
| git version | msg.txt body | sign stdin (tee'd) | stored commit body | verify |
|---|---|---|---|---|
| git 2.43 (host) | ... 20 a7 0a | ... 20 c2 a7 0a | ... 20 c2 a7 0a | OK |
| git 2.53 (CI, docker) | ... 20 a7 0a | ... 20 a7 0a | ... 20 c2 a7 0a | BAD |
git 2.43 transcoded the message to UTF-8 BEFORE calling the signer, so the signer and storage saw the same bytes (c2 a7). git 2.53 hands the signer the RAW bytes (a7) and transcodes only on the way to the commit object (c2 a7). The invariant "bytes fed to gpg.program at sign time equal the bytes a verifier sees when it reads the commit back" is broken.
git config i18n.commitEncoding iso-8859-1 is supposed to be the configuration to use if you have non-UTF-8 characters. But I never knew about this setting before I found the bug.
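A small, self-contained way to see this setting in action (a toy repo; the raw a7 byte is the same § from the bug):

```shell
# Commit a Latin-1 message containing the raw a7 byte and inspect the stored
# object: with i18n.commitEncoding set, git records an "encoding" header
# instead of transcoding the message at commit time.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q . && git config user.email you@example.com && git config user.name you
git config i18n.commitEncoding iso-8859-1
printf 'section \247 test\n' > msg.txt    # \247 is § in Latin-1
echo hi > f.txt && git add f.txt
git commit -q -F msg.txt
git cat-file commit HEAD | grep -a '^encoding'   # encoding iso-8859-1
```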
I want to thank my friends at Anthropic for letting me use the tools and technology to keep building.
If you work with patches and git am, then you’re probably used to seeing patches fail to apply. For example:
$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
error: patch failed: gio/gfileattribute.c:166
error: gio/gfileattribute.c: patch does not apply
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
This is sad and frustrating because the entire patch has failed, and now you have to apply the whole thing manually. That is no good.
Here is the solution, which I wish I had learned long ago:
$ git config --global am.threeWay true
This enables three-way merge conflict resolution, same as if you were using git cherry-pick or git merge. For example:
$ git am CVE-2025-14512.patch
Applying: gfileattribute: Fix integer overflow calculating escaping for byte strings
Using index info to reconstruct a base tree...
M gio/gfileattribute.c
Falling back to patching base and 3-way merge...
Auto-merging gio/gfileattribute.c
CONFLICT (content): Merge conflict in gio/gfileattribute.c
error: Failed to merge in the changes.
Patch failed at 0001 gfileattribute: Fix integer overflow calculating escaping for byte strings
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
Now you have merge conflicts, which you can handle as usual. This seems like a better default for pretty much everybody, so if you use git am, you should probably enable it.
I’ve no doubt that many readers will have known about this already, but it’s new to me, and it makes me happy, so I wanted to share. You’re welcome, Internet!
Do you still use the default green-colored Bash prompt?
Then it's time to upgrade to a much improved shell UX using the Bash Color Prompt (bcp).
Results:
Lately, I've been getting annoyed by my current Bash prompt offering a poor UX when dealing with multiple Toolbox containers. The prompt lacked crucial information: to which of the running containers does a given shell belong?
I did a quick search to see if there was an easy fix I was missing, but it turned out there is a long-standing desire to improve Toolbox's UX in this respect, and multiple approaches have been discussed and tried. Here are some relevant tickets:
One of the suggested approaches is to read the /run/.containerenv file and
set the Bash prompt according to it.

After looking around on how to update my Bash prompt to become
"container name"-aware, I came across Fedora's shell-color-prompt package
which was conveniently just a dnf install bash-color-prompt away (strangely,
the source package is named shell-color-prompt while the binary package is
named bash-color-prompt, see also RHBZ #2291024).
My attempts at configuring the Bash prompt to be "container name"-aware with the help of shell-color-prompt didn't look very promising.
I had a little epiphany when discovering that shell-color-prompt's maintainer, Jens Petersen, recently wrote a replacement for it: namely Bash Color Prompt (bcp). Jens describes it as having a cleaner declarative approach for creating one's custom Bash prompt.
Seeing how easy Bash Color Prompt (bcp)'s example.bashrc.sh looked like, I decided to give it a try.
It worked and its declarative approach at creating a custom Bash prompt was really easy to follow and tailor to my needs.
Until the new version of Bash Color Prompt (bcp) is packaged in Fedora (and other distributions), a simple way to install it is to grab the bash-color-prompt.sh file directly from its GitHub repository and put it somewhere in your home directory.
Afterwards, just source and configure it in your .bashrc file. Here is how
I've done it:
# Use the new Bash Color Prompt (bcp) by Jens Petersen (Red Hat) to handle PS1.
# NOTE: Temporarily, I've just copied the script from:
# https://github.com/juhp/bash-color-prompt/blob/main/bash-color-prompt.sh
if [ -f "$HOME/bash-color-prompt.sh" ]; then
source "$HOME/bash-color-prompt.sh"
fi
# Configure bcp.
bcp_layout() {
local exit_code=$1
# hexagon
bcp_container
# opening [
bcp_append "["
# user@host or user@container(host)
local user_color="green"
if [[ $EUID -eq 0 ]]; then user_color="red"; fi
local machine="\h"
if [ -f /run/.containerenv ]; then
container_name=$(grep -oP '(?<=name=")[^"]+' /run/.containerenv)
machine="$container_name(\h)"
fi
bcp_append "\u@$machine " "$user_color;bold"
bcp_title "\u@$machine:\w"
# directory
bcp_append "\w" "blue"
# git status
bcp_git_branch " " "magenta" "yellow"
# status indicator
if [[ $exit_code -ne 0 ]]; then
bcp_append " ✘$exit_code" "red;bold"
fi
# actual prompt char
bcp_append "]\$ " "default"
}
# Initialize bcp.
bcp_init
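The container-name extraction in the prompt above hinges on the `grep -oP '(?<=name=")[^"]+'` call: a lookbehind for `name="` followed by everything up to the next quote. The same parse, sketched in Python against illustrative sample file content (the real /run/.containerenv is written by Podman and may contain more fields):

```python
import re

# Illustrative content in the style of /run/.containerenv:
sample = '''engine="podman-4.9.4"
name="fedora-toolbox-42"
id="abc123"
'''

# Equivalent of grep -oP '(?<=name=")[^"]+': a lookbehind for name="
# capturing everything up to the closing quote.
match = re.search(r'(?<=name=")[^"]+', sample)
print(match.group())  # fedora-toolbox-42
```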
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. The team is also moving forward some initiatives inside the Fedora Project.
Week: 20 – 24 April 2026
This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker
This is the summary of the work done regarding the RISC-V architecture in Fedora.
This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.
This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.
This team is working on keeping EPEL running and helping package things.
COPR 2026-04-16 – Highlights
If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.
The post Community Update – Week 17 2026 appeared first on Fedora Community Blog.
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.
RPMs of PHP version 8.5.6RC1 are available
RPMs of PHP version 8.4.21RC1 are available
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.
ℹ️ Installation: follow the wizard instructions.
ℹ️ Announcements:
Parallel installation of version 8.5 as Software Collection:
yum --enablerepo=remi-test install php85
Parallel installation of version 8.4 as Software Collection:
yum --enablerepo=remi-test install php84
Update of system version 8.5:
dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*
Update of system version 8.4:
dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*
ℹ️ Notice:
Software Collections (php84, php85)
Base packages (php)
You attended the F44 Virtual Release Party!
A bit of a mixed bag today. Good reads are the Agents and the Era of Overproduction and AI reshaping values at Enode blog posts. Podcast-wise, give Driverless World and the Pedro Sánchez episode a listen.
TBM 418: Campfires, Trails, and Quests - collection of how AI affects teams.
Before I knew it, I found myself already at the second day of the FOSSAsia 2026 conference on 10th March 2026 after a rather eventful previous day. While I did plan to wake up a little later, I realized that I had to prepare for my presentation on the Fedora Badges Revamp Project that was scheduled later that day. This was the only proposal that got selected, and the other one on the Fedora Forgejo Migration Project did not make it, so I had to ensure that I was well prepared for this. After some rounds of talk rehearsals and some quick bites of breakfast, Samyak Jain and I exited Lumen Bangkok Udomsuk Hotel to be greeted by comparatively cooler weather on that day. Since it was more of the same choices on the breakfast menu, we were able to leave the hotel as early as 0945am Indochina Time. Climbing the long escalator to finally make it to the FOSSAsia 2026 event venue, we noticed just how much the community booth layouts had changed that day. For instance, the GNOME Foundation community booth was now positioned beside those of the Debian Project and the TeaLinuxOS Project.

I had a mixed feeling about this, while this did allow for the corridor to be widened, the space for volunteer staffing at the community booths of the GNOME Foundation, Debian Project, and TeaLinuxOS Project was severely restricted. This meant that it was quite a struggle for folks who had to move away from or into the booth locations, as that required the folks around them to be moved as well. Just like the day before, we decided to assist Aaditya Singh with booth operations, all while trying to finish off the last hundred Fedora Project stickers that we had saved up from our participation at DevConf.IN 2026. While we had previously signed up for the ExpressVPN-sponsored FOSSAsia hackathon on Internet Security Development Using Artificial Intelligence, we ultimately decided to give away our designated slot to other younger participants who could not register in time. We discussed just how important this hackathon participation was for budding folks who were getting started with free and open source software, but not so much for us, since we have been around the scene for a long while now.

This opened us up further for many impromptu conversations, ideation discussions, booth visits, and of course, scheduled presentations. After leaving Samyak at the hallway track, I headed into a newer arrangement of community booths in the second corridor. Since my work laptop had malfunctioning cooling, which made screeching loud noises, I had to use his laptop to deliver my presentation later that day. Not only did I ensure that I downloaded my slide deck and speaker notes onto his computing device well in advance, but I also took care to avoid display inactivity suspensions and ensure an ample laptop battery charge. After a quick round through the community booths, I headed into the hall where the ExpressVPN-sponsored FOSSAsia hackathon was taking place, purely out of curiosity. Informing one of the co-located volunteers about my involvement as a visitor and not as a participant, they allowed me into the room to chat with the fellow participants. Since the hackathon was duration gated, I took extra care to curb my nosiness and allow the teams to work their magic.

Out of all the participating groups that I interacted with in the competition hall, my conversations with the likes of Saksham Sirohi and Arnav Angarkar about their ideas stayed with me. They knew how they had to limit their actual implementation and were focused on delivering an MVP (Minimum Viable Product) that could be expanded upon. I also took some time to have a quick conversation with ExpressVPN employees who were serving as competition moderators there to understand what they look for in a certain implementation. Getting myself a cup of boba tea (and skipping it because I did not like it), I met up with the likes of Ajinkya R. and Dakshita Thakkar at the hallway track. On returning to the collective, I decided to lend my assistance to Pongsakorn S. at the KDE e.V. community booth as well, besides helping Aaditya at the GNOME Foundation community booth. I learned that he was using OpenSUSE Leap on his personal laptop and was interested in RPM packaging, while I was placing some of the last fifty Fedora Project stickers on the booth table.

In contrast to the unoccupied booths from yesterday, it was endearing to see how the likes of Aaditya and Pongsakorn stood their ground at their respective booths, even when the footfall on the second day was noticeably smaller than the day before. Amidst our conversations, I connected them with each other, and Aaditya even took the chance to showcase the GNOME Foundation community quiz application that he had been working on. Following the community quiz activity idea from the Fedora Project Community Presence at DevConf.IN 2026, he wanted to use the remaining GNOME-styled tee-shirts as prizes for folks who attempted the quiz and got all the answers right at the hardest difficulty. Needless to state explicitly, I gave the quiz a try, not because I wanted another tee, but because I wanted to appreciate what he had been working on. A part of me also wanted to take on this hardest-difficulty question challenge to understand how much I knew about the GNOME Foundation and to see if I could learn things I did not know about the community and its activities in the process.


Collection #1
While I got most questions correct on the first attempt, it took me three attempts to get the satisfaction of having all the answers right. Departing from the booth and weaving through a thin collection of mostly booth attendants, event volunteers, and talk presenters, I made it to Mitchell Yue's talk on the Lynx Framework and how it could be used to move from web development to native applications at around 1000am Indochina Time. I was intrigued to learn about this development library, which made use of native bindings for impressive performance. I was even more taken aback (but positively) when I was gifted a ByteDance tee-shirt for asking how it differed from the usual QtWebEngine bindings, as I was well-versed in Qt. This act of brand advocacy definitely seemed to have encouraged the audience to share more feedback or ask more questions during the talk. Heading back to the Debian Project community booth, I shared my experiences from my splurge at Animate Store from the previous day with Ananthu CV before helping Abhijit PA get some cold coffee from the reception desk.


Collection #2
We were also joined by Shreenivas at the GNOME Foundation community booth, which allowed us to pace ourselves while tending to questions and feedback from booth visitors. During a brief visit from Daniel J Blueman, he connected with Aaditya to understand where he could report bugs and improvements for the GNOME desktop. He seemed to have been plagued by problems with the use of external monitors on his ARM-based (Advanced RISC Machines) SoC-powered (System on Chips) laptop on his fresh Debian Linux installation. Meanwhile, I wasted no time unofficially promoting my presentation on the Fedora Badges Revamp Project to the enthusiastic folks from the day before who visited our booth. Meanwhile, I also gave a quick demonstration on how RPM packaging works to a relentlessly curious Pongsakorn, who wanted to package their Rust application for Fedora Linux. Using one of my own hobby projects, Loadouts for Genshin Impact, I showed him how to write an RPM specfile and how programming-language-specific macros could help make things easier.

As I was not sure if something like the PyProject RPM Macros for Rust existed, I wanted our conversation to be an entry point for them into the RPM packaging tooling ecosystem. While answering his question about Linux kernel package versioning on Fedora Rawhide, I also fielded a question about conflicting packages from Abhijit. Since he was experienced with how the Debian Linux packaging process would handle this, he was curious to know how RPM package management tools would address the situation. Using various examples of how software packages are related to one another, I explained how this linking not only mapped dependencies but conflicts as well. At around 1230pm Indochina Time, I wrapped up my lunch and connected with Simon Strohmenger, who was visiting the community booth then. He shared how he worked on funding free and open source software events across Europe and supporting critical engineering ecosystem resources and was also curious to know more about what I had to share regarding the AI-assisted Contribution Policy from the Fedora Project.

After sharing contact details with each other, he also checked in with me on whether I would be willing to present a proposal on best practices in contributor onboarding and retention in free and open source software communities. In our conversations, we realized just how crucial it had become to discuss governance policies among grassroots collaborators to ensure that their implementation does not come off as a negative surprise. I also shared my approaches to using LLM (Large Language Model) tooling to assist project maintainers and budding contributors by automatically addressing various low-hanging fruits. Given how stressful and overwhelming things can get for maintainers and newcomers respectively, AI tooling could be put to these positive uses instead of being written off under its popularized contemporary taboo. While I could not get him to participate in my Fedora Badges Revamp Project talk later that day, he said that he would send over some of his engineering friends, as he found the general idea of awarding contributions fascinating.

Since there were not a lot of folks present at the event apart from those who had some activity to participate in, Aaditya struggled to distribute the remaining GNOME-styled tee-shirts. On the other hand, we had been extremely successful in distributing the stickers, so we were able to advocate for our projects from the community booth. At around 0130pm Indochina Time, I checked in with the event volunteers about the allegedly malfunctioning livestreaming functionality in one of the presentation halls. Amidst my attempts to have that fixed before my talk began in about three hours, I briefly met up with Saksham again in the hallway, who mentioned his plans to expand the hackathon project, and I shared the idea of "releasing (software) fast and releasing often." It was an absorbing sight to see how the participants at the ExpressVPN-sponsored FOSSAsia Hackathon were helping each other with a gargantuan variety of problem statements. Such a sight (and evidence of friendly competition) is something one would rarely get to see outside of free and open-source software communities.

I was thankfully able to get myself caffeinated at around 0330pm Indochina Time, as with the activities moving slowly throughout that day, I found myself zoning out every now and then. After addressing the livestreaming issue, I met up with some folks from AWS (Amazon Web Services) who were visiting our trinity community booth lineup and had previously worked with the likes of David Duncan and Rich Bowen. Since they had experience with Amazon Linux, they reflected on how Fedora Linux provided them with an innovation-driven upstream distribution to build upon. After that one last conversation with them, Aaditya decided to start packing up the booth at around 0430pm Indochina Time, while I kept myself busy helping him with the tooling. With the time being barely fifteen minutes away from my presentation's commencement, Samyak returned to the GNOME Foundation community booth. Pongsakorn also had his companion back when Tomas returned to the KDE e.V. community booth, and I was ready for my talk to be delivered at the tail end of the event.


Collection #3
As the training room designated for my presentation already had my slide deck fetched, I did not have to bother with sharing my screen. There were some technical issues with the clicker device, though, as it failed to switch slides when the window was in fullscreen mode. I decided not to settle for windowed mode to save time, while Norbert Preining introduced me in the speaker area at around 0445pm Indochina Time. For a presentation scheduled at the tail end of the conference, it was reassuring to see that I still had around twenty attendees in the hall who were curious to know what I had to offer. Being the deciding moment that I had been practicing regularly for, I wanted to ensure that I was doing justice to their time (and attention) and that of the remote attendees. Thanks to Samyak's laptop and the regular touchups, the fifteen-minute-long presentation went largely well, and I also addressed some feedback and questions from both the in-person attendees and the hall host, Norbert. After finishing my talk, we headed into the competition hall where the four judges had assembled by then.


Collection #4
After briefly waiting for the participating teams to propose their project ideas and for the judges to complete their evaluations, our collective made our way into the main hall to witness the winner announcements at around 0530pm Indochina Time. As the winners were announced and the event concluded, Samyak and I deliberated on our evening plans, as we did not intend to continue our stay with the FOSSAsia 2026 attendees at the Night Market. After his proposal was turned down by the folks he was planning to invite, we formed a group of four, including him, Soundarya Rangarajan, Aaditya, and myself, to visit Bangkok Chinatown. This idea felt strategically sound as it had become a little too late to visit the riverside for a calming evening boat ride dinner, and coincidentally, Soundarya was staying at the same hotel as Samyak and me. After making a quick drop of Aaditya's belongings at my hotel room and taking some time to re-energize ourselves after the long second day, we started looking for cabs using the Grab application at around 0700pm Indochina Time.


Collection #5
Amidst the heavy Bangkok evening traffic, we struggled to get a ride until we finally secured one after about thirty minutes of waiting through the Bolt service. I was not a fan of the service, as the toll expenses were not accounted for in the final billing, thus resulting in us having to pay for them separately. What I was a fan of, though, was the taxi driver assigned to us, as the cheerful person did not let the piling traffic and the linguistic barrier prevent him from having a friendly chat with us. With the use of Google Translate, he graciously helped us plan our course and described what we could expect in Bangkok Chinatown. And he could not have been more right, because when we got off the Bolt ride, the view of the neon-filled, slightly humid hustle and bustle of Chinatown was a scene that nothing could even compare to. Weaving through the visitor crowd at the periphery, our first stop was, of course, a stall selling the world-famous Mango Sticky Rice for just 100 Thai Baht per plate. We finally got to experience firsthand just how true the people were who sang nothing but praises of this exclusive snack!


Collection #6
We dove deeper into our little roadside dining adventure with some Coconut Egg Sweet Crepes and Japanese Fried Octopus Balls before deciding to split into groups of two, as it did not sit right with Aaditya and me, since the cuisines there were mostly non-vegetarian. As Samyak and Soundarya headed their way at around 0830pm Indochina Time, he and I decided to get even more creative by digging into some Shrimp Meat Dimsum Dumplings and Fried Crab Meat Rolls. After finishing off with some Fried Sweet Maple Fish and Ripe Alphonso Mango Slices, all for amazing bargains, we navigated the confusing pathways to reach Jam Jam Eatery Chinatown. As Aaditya's belongings were in my room, we left for the hotel at around 0915pm Indochina Time without waiting much longer. It was not that I had my fill of exploration, but I felt responsible to ensure he made it back safely at BTS Punnawithi to his hotel room. After hanging out in my hotel room and discussing industrial mentorship ideas, I decided to see him off and call it a day at around 1200am Indochina Time.
The question was simple enough: How good of an image editor can you build with $20 worth of Claude Code Pro subscription?
The answer, after one month and roughly that budget, is: surprisingly good, occasionally wrong about performance, and frustratingly confident about things it hadn’t measured.
RasterLab is a non-destructive RAW image editor written in Rust, built almost entirely by Claude Code. Not prototyped by it, not scaffolded by it — actually built by it, with me driving direction and reviewing the output. One month, four weekly usage blocks, one image editor.
On April Fools' Day, I shared that syslog-ng can reach 7 million EPS. This test lab result was in part possible thanks to a few performance enhancements coming to syslog-ng version 4.12.
How is 7 million EPS possible? Before diving deeper, let me repeat: 7 million EPS is just a lab testing result, not (yet) possible in the real world. However, the technologies enabling it are already available on the development branch of syslog-ng, or have been available for ages, just not tested or promoted enough.
Read more at https://www.syslog-ng.com/community/b/blog/posts/new-performance-tuning-possibilities-in-syslog-ng

Scheduled maintenance work on Forge Runners. Runners will be offline. Scheduled jobs or event triggered actions should be picked up when service is back running, however it's not completely excluded that some actions might need to be retriggered. forge.fedoraproject.org will not be affected.
Contact: #fedora-forgejo:fedoraproject.org
Fedora Linux 44 is almost officially here! While our release engineering team and packagers focus on the final touches for F44, it is time for the usual tradition of a Global Virtual Release Party! For this release, we will celebrate Fedora Linux 44 slightly ahead of its actual final release.
Regardless of the final calendar date of any Fedora Linux release, every release represents months of hard work, testing, and collaboration from our global community. Whether you are a long-time package maintainer, a dedicated documentation writer, a creative graphic artist, or a brand-new user firing up a Fedora Atomic Desktop for the very first time, this release belongs to you.
To mark the occasion, we are hosting the Fedora Linux 44 Virtual Release Party this Friday, April 24, 2026.
Join us for a half-day of live sessions, recorded deep-dives, and community socialization. We have packed the schedule with updates from the Fedora Project Leader, behind-the-scenes looks at new features like Nix integration and DNF5, and a sneak peek at our upcoming Flock conference!
How to Attend

The event is 100% free and open to everyone, but registration is required to access the virtual venue. We are also happy to continue using our chat communication provider, Element Creations, as the virtual venue for the Global Virtual Release Parties. Thanks Element & Matrix.org for providing us the great tools to bring our global community together!
Register for the Fedora Linux 44 Release Party live event on Matrix! 
Event Schedule

All times are listed in US Eastern (UTC-4) and UTC.
| Time (EDT) | Time (UTC) | Session | Speaker(s) | Description |
|---|---|---|---|---|
| 09:00 AM | 13:00 | Opening Remarks | Jef Spaleta, Justin Wheeler | Join the Fedora Project Leader and Community Architect as we kick off the celebration, look back on the last release cycle, and share news from around the project. |
| 09:15 AM | 13:15 | FPL Update | Jef Spaleta | Jef Spaleta shares his reflections on Fedora Linux 44, what this release means for the project, and his vision for what lies ahead. |
| 09:30 AM | 13:30 | Packit as Fedora dist-git CI | František Lachman, Laura Barcziova, Maja Massarini, Matej Focko, Nikola Forro | The Packit team walks through how Packit is taking over Fedora dist-git CI, what this change means for contributors, and what’s next. |
| 09:45 AM | 13:45 | Adding Nix to Fedora: we did a thing | Jens Petersen | A behind-the-scenes look at bringing the Nix package tool to Fedora 44 — what it took, what it unlocks, and what it means for reproducible environments. |
| 10:00 AM | 14:00 | PackageKit with DNF5 and KDE Integration | Neal Gompa | Dive into the integration of PackageKit with DNF5 and KDE in F44, what changed under the hood, and what it means for the desktop experience. |
| 10:15 AM | 14:15 | Server WG | Peter Boy | An overview of the Server Working Group’s initiative to create a dedicated home server spin, driven by community home lab feedback. |
| 10:30 AM | 14:30 | Break | None | Take a screen break, grab some coffee, or merge that Pull Request. We will be back with more programming soon! |
| 11:00 AM | 15:00 | Fedora Docs | Petr Bokoc, Peter Boy | An update on the state of Fedora Docs and the ongoing Docs Initiative — where things stand today, and how you can get involved. |
| 11:15 AM | 15:15 | What’s new and what’s next for the Fedora Atomic Desktops | Timothée Ravier | Discover what is new across the Fedora Atomic Desktops family (Silverblue, Kinoite, Sway, Budgie, COSMIC) and the roadmap toward Bootable Containers. |
| 11:30 AM | 15:30 | Flock Preview | Justin Wheeler | With Flock just weeks away, get an early look at what to expect — sessions, highlights, and reasons to get excited about this June’s event. |
| 11:45 AM | 15:45 | TBA | TBA | Stay tuned! |
Don’t miss out on the chance to connect with the people who build Fedora. Grab your ticket, share the link with your friends, and get ready to celebrate Fedora Linux 44.
The author of this article utilized generative AI (Google Gemini 3.1 Pro) to assist in part of the drafting and editing process.
After kicking off the FOSSAsia 2026 proceedings with the "choose-your-adventure-flavoured" Community Day, the first day on 09th March 2026 began for me at around 0730am Indochina Time. It took me quite a lot of strength to get myself moving out of bed, but I knew that I had to make it to the conference venue by 1000am Indochina Time to appear in the group photo. I rang up Samyak Jain and we planned to rendezvous downstairs at the hotel reception in about ninety minutes from then. While I had my talk on the Fedora Badges Revamp Project scheduled for the next day, I wanted to ensure that I was able to find some time to rehearse it at least twice per day leading up to the scheduled time. Fifteen minutes were not enough for the number of topics I wanted to cover as a part of my presentation, and as such I wanted to use it as an appetizer or an entry point for those interested to explore the project by themselves. I headed downstairs to get some breakfast bites after having connected with my friends and family from back home. The choices were more or less the same as yesterday, but this time around, I elected to stick with the greens as much as possible, just as Samyak did.

This change in my dietary preference gave me a first-person perspective on just how difficult it ends up being for someone who is vegetarian. While Samyak had his collection of "just-pour-warm-water" packaged food, I had to stick mostly with noodles and salad for the most part. When it came to the origin of the meat products, you would not want your guesses to go wrong, and the language barrier could pose a real problem for the specifics. After an okayish refueling, we stepped out of the hotel premises into a humid Thailand at around 0945am Indochina Time and were able to reach True Digital Park West in about ten minutes from then. Thankfully, the venue was a whole lot more crowded than the previous day, with a variety of community booths either getting placed or attended by visitors. While this day (and the next day) were exclusive to ticketed personnel, the crowd was definitely a whole lot larger than on the free-of-charge Community Day (of the day before). I split from Samyak as he left with one of the FOSSAsia 2026 community volunteers to obtain his attendee badge. As I already had the speaker badge, I could dive headfirst into all the booths and friends that we had on the ground there.

One of the first folks I met up with was, of course, Aaditya Singh from the GNOME Foundation, who was setting up the community booth by himself. With one more pair of hands, we made quick work of the booth setup as we placed the Fedora Project stickers beside the GNOME Foundation stickers on display there for the taking. Aaditya mentioned that he was still rocking the Fedora Workstation installation that he had set up during the GNOME Asia 2024 event, where I had given away flash drives loaded with distributions featuring the premier GNOME desktop environment. As he set up his exhibition laptop, he wanted to promote the vanilla GNOME desktop experience, which, in his opinion, could not be done better than it was on Fedora Workstation. Funnily enough, we had the KDE e.V. community booth a corridor apart from the GNOME Foundation community booth. Trying not to read too much into the poetically inclined booth arrangements, I decided to visit them next once I was done setting up operations with Aaditya. Amidst all the upcoming developments there, Tomas was pleased to share his work on the Konsole KDE terminal emulator and its various advancements.

From live previews of image thumbnails and semantic colors to proactive sharing of directory layouts and drag-and-drop operations, Tomas' improvements felt really impactful in the quality-of-life areas for any general terminal emulator user. This put the Konsole KDE terminal emulator miles ahead of the container-first terminal emulator, Ptyxis, in my personal opinion. I did have questions about the security of this approach, as some files could be malicious in nature (especially SVG files when it came to image assets), but since Konsole relied on pre-generated thumbnails, this was not a problem. No files were executed in the process of making the proactive previews available to users, which gave me peace of mind and topped off my already excited feelings about the feature improvements. After meeting up with Pongsakorn S., another KDE e.V. community member, and picking up some stylish badges, I left a portion of the Fedora Project stickers to be shared from their community booth as well. It was worth commemorating the fact that, since the release of Fedora Linux 42, the KDE Plasma variant of Fedora Linux had stepped up to become an actual edition instead of just a spin.

With Samyak returning to the GNOME Foundation community booth, Aaditya was no longer staffing it by himself, so I went ahead to have a conversation with the folks at the Debian Project and the TeaLinuxOS Project community booths. It was interesting to note that they had their ground operations set up as early as 0830am Indochina Time. Chatting with the likes of Ananthu CV and Harry LBI from the two community booths respectively gave me distinct perspectives on their projects and their involvements. While the stalwart Debian Project community has been around for a while, my interest was piqued to see just how TeaLinuxOS made the Arch Linux distribution usable by everyday users with the Calamares Installer and curated OOBE tooling. When asked for advice drawn from our steadily evolving Fedora Linux operating system, I emphasized just how important it is for documentation to be clear and accessible. A lot of contributors to a free and open source software project come from its pool of users, so it cannot be overstated just how important it is to ensure that users are treated as first-class citizens and that their issue statements are taken seriously.

I departed further into the wider collective of community booths after having taken some pictures with these folks at their community booths. I was halted at the entrance by Wendy Ha, who was just arriving at the conference venue and wanted me to meet a CNCF ambassador from Japan. Our (rather short-lived) conversation was cut short by Rajan Shah, who was seeking out folks from Red Hat and IBM Corp for a group photo. I pulled in Samyak, and with the likes of Shivraj Patil, Veerkumar Patil, Deepesh Nair, and Gaurav Kamathe from Red Hat and a bunch of others from IBM Corp, we had photographs both in front of the FOSSAsia conference venue entrance and in front of the long escalator. This is where I met Soumyadip Choudhury, another colleague from Red Hat and from my hometown, and we spent some time chatting before we headed into the large hall for the FOSSAsia community group photo. I was glad to note that, in an attempt to avoid a large commotion, the participants could stay right where they were in their seats while the photo was taken from the event stage, with only those at the extreme periphery requested to move to the middle part of the hall.

As most of our Red Hat collective was already together at the center, all we had to do was enjoy the event hosts' excellent oration while we put out our best poses for the cameraman. We also did not miss the opportunity to take some selfies while we were at it, because the audience was only going to get thinner from there on out for the day. Once we were through with those pictures, Samyak and I ran into the likes of Pritesh Kiri and Dakshita Thakkar, who happened to be attending the event for the first time, just like us. Heading out to visit the community booths together, we began by interacting with folks working on ESP32x-powered 2D robotic drawing computers and 3D-printer-powered accessibility-focused appliance designs. Not only did we get to see the demonstrations, but we also got to experience how a differently abled person could use a Sony DualShock 4 controller with just one hand and how a pencil-triggered nail cutter could prevent folks from getting hurt. This was followed by a visit to the VideoLAN Project community booth, staffed by what looked to me like mostly uninterested members tending to booth visitors as and when they saw fit.

After helping ourselves to the postcard prints featuring the VideoLAN Project's parodies of famous movies, we moved over to the Matrix Project and database-related community booths. With their booths also placed beside one another, it was interesting to learn from them what they had to offer that their alternatives did not. Skipping past the dishearteningly unoccupied FLOSS Fund community booth, I met Samyak again at the Google Summer of Code community booth. It was great to catch up with Stephanie Taylor after having met her during FOSDEM 2025, and she also graciously provided us with some swag for having participated in the program as a mentee (for Samyak) and as a mentor (for myself). We returned to the GNOME Foundation community booth, where we were surrounded by an enthusiastic group of folks who supported the GNOME Foundation (through donations) and the Fedora Project (through contributions). Not only were they thrilled to find us there, but they also went a step further by proudly showing off their Fedora Linux installations (most prominent of which were Fedora Silverblue and Fedora Kinoite) on their laptops.


Manifest #08
Stellar moments like these always end up re-energizing my resolve to support free and open source software in whatever ways I can. Getting to interact with folks who were just as resolute about the Fedora Project as I was made this trip worthwhile already, even though we still had a couple of days ahead of us. They mentioned how they regularly organized local Fedora Linux installfest events to help with the adoption of the Fedora Project's primary offerings and also participated in the regularly organized testing events in our engineering community. While I could not provide them with the means to make donations as they wanted to contribute further, I requested that they propose their events through the officially ratified Fedora Mindshare event process. Being the model open source citizens that they were, I wanted to ensure that they did not have to spend out of their own pockets to organize Fedora Project events while ensuring that they were well equipped to host one in the already underrepresented APAC region. Increasing the Fedora Project's APAC representation was already at the top of my list, and this interaction only ended up cementing my commitment towards this mission.

They intuitively understood just how badly we need contributors to be onboarded and retained within the community to ensure the longevity of a project. A lot of times, everything eventually ends up falling on the shoulders of a flywheel person when someone from the community has to depart for some reason. We did not even have to share what was upcoming in Fedora Linux, as they already had an alternate Fedora Rawhide installation handy for development purposes. As the event owner for the Fedora Project's community presence at FOSSAsia 2026, a part of me still felt bad about having missed out on a dedicated separate community booth at the conference. Sure, we felt right at home with the folks from the GNOME Foundation and KDE e.V., but we could very well have taken advantage of being the prominent RPM-based distribution on site. A great deal of the community conversations would then have made their way directly to our community booth instead of us having to seek them out as event prospectors in all of our interactions. Instead of dwelling on this situation, I wanted to interact with the booth visitors from a Fedora Project governance member's perspective.

As I was a part of both the Fedora Mindshare and the Fedora Council governance bodies at the time, I used my event presence to hear more about potential opportunities as well as possible approaches towards community outreach from grassroots contributors. Depending on the matter at hand, I could either choose to address it myself or pass it over to the responsible teams as a conduit. Finishing off with this heartfelt interaction, I headed over to the OpenKylin Project community booth to connect with the folks working on this downstream distribution of Debian Linux. With its close resemblance to the Windows 11 user interface, it provided folks with a gentle slope of technical learning as they executed their departure from Microsoft's popular operating system. On my suggestion, they made it a point to reach out to the downstream packagers because, from the Fedora Project packaging perspective, their packages were either wildly outdated or simply broken, thus hurting their adoption in the RPM-based universe. I also proposed using Fedora Linux for the development of the project's codebase, as we provided a fast-moving development toolchain with updates that they could utilize.

While there were presentations and workshops scheduled throughout the first day, I still found myself spending most of my time in the hallway track. It was so easy to find someone I knew (or wanted to connect with) or something I knew (or wanted to know about) in all of the conversations. I also appreciated just how my fellow colleagues from Red Hat participated as volunteers for the event, all while sharing goodwill for potential long-term collaboration with the FOSSAsia organizers. The lunch queue started forming at around 1200pm Indochina Time, so Samyak and I joined in. Unfortunately, we had to push our lunch plans back to make it to Praveen Kumar's interactive talk on Creating Custom Linux Images Using Bootc Technology and Podman Desktop. Delaying our meals by about fifteen minutes or so allowed us to skip waiting in a long queue, but we were disappointed to notice that the menu had not changed that day either. Do not get me wrong - the meal was okay for an obligatory refueling, but it left a lot to be desired when it came to ensuring that whatever ended up on our palates tasted good.

Finishing our meals gave us some more energy to visit the community booths that had popped up later or had previously absent attendants. We started off with the MapConductor Project community booth, staffed by Masashi Katsumata, who briefed us on how they worked towards unifying all mapping APIs (Application Programming Interfaces) for mobile application map SDKs (Software Development Kits). It was rather intriguing to learn how these APIs could be integrated with MCP (Model Context Protocol) servers to allow mobile applications to have a semantic understanding of the navigation utility. The puzzle question about the number of post offices in Japan was a fascinating way to illustrate the important purpose of this community project. We moved through the ARM Project and RISC-V Project community booths with discussions around the necessity of an alternative architecture for desktop computing amidst the exorbitant prices of the year 2026. The demos were made using a custom AlmaLinux distribution image flashed on a Raspberry Pi 500 Plus, which had the SBC (Single Board Computer) inside a mechanical keyboard.

This reminded me of the SyncStar Project that I worked on a couple of years back, when Raspberry Pi computers were still generally accessible and not prohibitively expensive, as they had become by then. Returning to Aaditya allowed us to meet Rich Bowen, who was just heading out from the workshop at around 0100pm Indochina Time. It was delightful to catch up with him after having last met him during CentOS Connect 2025, and he was just as surprised to meet me here, since I was attending FOSSAsia for the first time. Wrapping up our conversations, Samyak and I headed over to attend Shivani Bhardwaj's demonstration of Hands-on Network Security With Suricata. As someone who primarily works on infrastructure architecture, it was compelling to see just how the project worked both as an intrusion detection system and as an intrusion prevention system. While her presentation was plagued by a bunch of technical issues related to the presenter's laptop inadvertently suspending, Shivani kept the audience engaged with her admirable showpersonship and funny jokes, giving me something to learn from too when it came to public speaking.

We stayed back in the same hall to attend Peter Membrey's talk on Open Sourcing Secure GPU Workloads in Enclaves, as it shared a similar cybersecurity-inclined theme. As he was the Chief Research Officer of ExpressVPN, an organization that was also one of the primary sponsors of the conference, I wanted to understand just how bought into the free and open source software mission they were. Having gotten a satisfactory answer, I left to get a FOSSAsia tee-shirt and some necessary caffeine, since I did not have to join the queue for the identification badge that morning. I was pleasantly corrected when they declined my payment for the FOSSAsia tee-shirt and informed me that it was available at no extra cost for the event participants. The lack of a proper lunch did manage to keep me awake, but a proper shot of cold coffee finally got me back into the game once I was done exchanging the participant coupons. I hung out with my fellow Red Hat colleagues at the OpenEuler Project community booth before going back to Aaditya's booth. Since he had kept a medium-sized GNOME tee-shirt aside for me, it only made sense for me to pledge a certain amount to the community.

And might I add just how attractive a bargain 100 Thai Baht was in exchange for a white GNOME round-necked tee-shirt, all while supporting the great work that they did! It cannot be overstated just how often free and open source software communities end up giving you friendships that you cherish for a lifetime. At around 0215pm Indochina Time, we attended Dakshita's presentation on Observability for Backend Developers before reaching out to the FOSSAsia volunteers regarding my concerns about the absence of speaker desks in various smaller-sized halls. While my concerns were noted by the staffing volunteers there, it was only when Rajan entered the scene that the matter was resolved, not only for the training room where I had my presentation planned for the day after, but also for the other associated halls. He mentioned to me how my preemptive concern about my deferred presentation allowed others to benefit too, since it had become unwieldy to use a lower-height general table. After having attended Joe Blubaugh's talk on SQL Expressions in Grafana Dashboards at around 0240pm Indochina Time, I decided to return to the hallway track to have some more conversations.

One discussion on internet technologies with the likes of Ananthu and Deepesh at the Debian Project community booth later, we were joined by an extremely enthusiastic Daniel J Blueman at the GNOME Foundation community booth. Since he had presented his talk about Linux on ARM (Advanced RISC Machines) Laptops earlier that day, he wanted to have a comparative study of how different the two architectures were on a quantitative level. While our collective had a qualitative understanding of just how efficient ARM-based SoCs (Systems on a Chip) were compared to the widely available x86-based CPUs (Central Processing Units), stress-ng testing (as requested by him) could help paint an accurate picture. While the tests were running on both Aaditya's power-deprived laptop and Daniel's cool-running laptop, he was appreciative of the attempts made to make Fedora Linux work on ARM-powered Apple MacBooks and general-purpose ARM-powered laptops. Although the initial tests gave us an idea about the performance per watt across the two architectures, we had to dispose of the results, as the x86-based laptop was not getting power from the wall, unlike the ARM-based one.


Manifest #17
Of course, we had to have a rematch - not to get different-looking results but genuinely comparable ones, and this time around it was the turn of Samyak's Lenovo ThinkPad P16v Gen 1. After making sure that both devices were getting enough juice from the power outlet and shutting down the unnecessary applications, we obtained an anecdotal result that put the AMD (Advanced Micro Devices) Ryzen 7 Pro 7840HS CPU miles ahead of the Qualcomm Snapdragon X Elite SoC in raw throughput. This kind of unstructured research drew quite a crowd in the second half of the day, and we deduced that while Samyak's laptop was drawing about 100 watts from the wall at its peak, Daniel's laptop was barely pulling around 25 watts. With improved compatibility across various applications and additional efforts in hardware enablement, the ARM-based SoCs could very well be the future of sustainable domestic general computing. After sharing contact details, Ananthu and I had a quick discussion on exploring places that exhibited (or franchised) anime-related swag, especially that from the popular Genshin Impact brand, at around 0400pm Indochina Time.
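A back-of-the-envelope comparison like ours can be sketched in a few lines of Python. Note that the throughput figures below are purely hypothetical placeholders (only the rough 100-watt and 25-watt wall readings come from our observation), so treat this as an illustration of the method rather than our actual benchmark results; in practice, the throughput numbers would come from a tool such as stress-ng's reported bogo-ops.

```python
# Sketch of a performance-per-watt comparison between two laptops.
# The throughput totals are HYPOTHETICAL placeholders, not measured data.

def perf_per_watt(bogo_ops: float, watts: float) -> float:
    """Normalize a throughput figure by the measured wall-power draw."""
    return bogo_ops / watts

# (throughput in bogo-ops, wall draw in watts) -- illustrative numbers only
laptops = {
    "x86 (AMD Ryzen 7 Pro 7840HS)": (60_000.0, 100.0),
    "ARM (Snapdragon X Elite)": (30_000.0, 25.0),
}

for name, (ops, watts) in laptops.items():
    print(f"{name}: {perf_per_watt(ops, watts):.1f} bogo-ops/W")
```

With numbers shaped like these, the x86 machine can win outright on raw throughput while the ARM machine still comes out well ahead per watt, which matches the qualitative takeaway from our informal test.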


Manifest #18
With Samyak on his way back to the hotel room, Aaditya, Ananthu, Shreenivas (who began hanging out with us around that time), and I decided to stay back to attend the FOSSAsia Evening Social Event. I seemed to have chosen wisely, as I ended up running into Harish Pillay shortly after in the hallway track, whom I had last met during CHAOSScon EU 2024. In our friendly exchange, he shared how he used Anthropic's Claude to generate slide decks based on the speaker notes that he had manually prepared, a process that felt flipped to me, but I was captivated by the results that he had on display. I noted this as an effective approach for times with a shorter timeline for proposal submissions, without having to compromise on the overall quality. After a couple of photographs with him, I rejoined our little gang for a quick round of instant photoshoots with a hilarious set of props. Aaditya and I split from the likes of Shreenivas and Ananthu for the day to have our snack bites at around 0700pm Indochina Time. We enjoyed the Thai cultural performance presented by the event volunteers before I departed swiftly for the newly discovered Animate Store at the MBK Center.

Coordinating with Samyak again, I dropped off my stuff in my hotel room before leaving for the BTS SkyTrain at around 0715pm Indochina Time. Since MBK Center was around a ten-minute walk away from BTS Siam, and so was the Jain temple that he wanted to visit, we decided to travel together. Having one of those rare moments where Samyak was ready to leave before I was, we headed to an ATM to withdraw some money. Just like on my arrival day, we had to cough up around 250 Thai Baht on top of the actual amount, but that had nearly become something that we were used to. Cash was indeed king in Thailand, and we had to ensure that we had enough money to not require another round of ATM withdrawals. After a forty-five-minute-long journey from BTS Udomsuk, we split at BTS Siam after deciding to meet up there again at around 0930pm Indochina Time. Getting local Thailand SIM cards proved to be extremely beneficial, as that allowed us to stay connected with our friends and families while we navigated the hustle and bustle of Bangkok. We did not worry about the possibility of getting lost during transit, as we could always find our way back to each other.

I was able to make it to the Animate Store on the seventh floor of the sprawling MBK Center mall by around 0830pm Indochina Time, giving me roughly half an hour before the shop shuttered. Connecting remotely with Shounak Dey from back home, I made the best use of the time to collect some Genshin Impact collectibles for us. After making quite a lot of purchases from the official miHoYo swag catalog, I was on my way back to BTS Siam right when the Animate Store closed down for the day at around 0900pm Indochina Time. An uneventful BTS SkyTrain journey later, we found ourselves at BTS Udomsuk at around 1015pm Indochina Time with few choices for Indian cuisine, as most adjacent shops were closed and the farther restaurants would have closed by the time we got there. After a quick trip to an adjacent 7-Eleven outlet and helping Samyak with packaged mineral water bottles from the day before, we decided to order some Indian cuisine takeaway via the Grab application. We called it a day at around 1130pm Indochina Time after sharing a light dinner of homestyle Indian meals together in my hotel room and going through a round of presentation prep.
The Fedora Project is proposing a new contributor status called “Fedora Verified” to better recognize all forms of community contribution, and we need your feedback. Following the Fedora Council 2026 Strategy Summit, Fedora leadership is reflecting on how we recognize, support, and empower the people who make Fedora possible. Please read through our proposal below and share your thoughts in the Fedora Verified community survey.
As the global open source community grows, the Fedora Project needs to ensure that our systems for recognizing contributors keep pace. Historically, open source recognition has leaned heavily on easily quantifiable metrics such as git repository commits and pull requests. But Fedora is built on much more than just code. We want to implement a more human-centered approach that equally values all forms of contribution, including mentoring, documentation, design, event organization, and community support.
To help us get there, we are proposing a new contributor status called “Fedora Verified” (Name TBD – feedback welcome!). But before we finalize this model, we need your feedback.
“Fedora Verified” is a proposed membership-driven approach for the Fedora Account System that distinguishes highly engaged, committed contributors from tens of thousands of standard registered accounts.
How is “Fedora Verified” different from a standard account? Anyone can create a new account in the Fedora Account System (FAS) to begin their journey, file bugs, or make initial contributions. A FAS account is the equivalent of a digital passport to access various Fedora-hosted applications and services for users and contributors alike. “Fedora Verified” represents the next step: a mutual commitment between the contributor and the project, recognizing a sustained track record of positive impact and adherence to our core principles as a community: the Four Foundations (Freedom, Friends, Features, First).
What are the proposed benefits? The primary motivation behind “Fedora Verified” is to build trust-based recognition that grants elevated, privileged rights within the project. Most notably, this status would determine eligibility for strategic governance activities, such as:
To ensure fairness and transparency, we are proposing a set of baseline metrics that a contributor must meet before their request for “Fedora Verified” status goes to a human review. The proposed baseline includes:
While we have a framework, there are several major questions we need the community to answer before we move forward. Specifically, we want to know:
We want to make sure this proposed membership model is fair, sustainable, and truly represents what our contributors value. Your feedback will directly influence how this policy is drafted and implemented.
Take the Fedora Verified Community Survey!
The survey will be open until Sunday, 5th May 2026 at 23:59 UTC. Thank you for taking the time to share your perspective, and for everything you do to make Fedora an amazing community!
The post Fedora Verified: Help Shape a New Way to Recognize Fedora Contributors appeared first on Fedora Community Blog.
The FOSSAsia 2026 event prioritized the friendly community interaction aspect of a free and open source software event by frontloading it on 08th March 2026 as a freely accessible event. This was surely a great example to learn from and implement at our flagship annual community conference, Flock To Fedora. I started the day by waking up as early as 0700am Indochina Time and giving Samyak Jain a wake-up call. As the timezone offset from Indian Standard Time was ninety minutes, I had to wait for some time before I could connect back home with my family members. Thankfully, the breakfast meal provided at Lumen Bangkok Udomsuk Station was a lot more flavourful and distinctive as compared to those that I was habituated to having at hotels in Europe. While there was a lot to be had for my dietary preferences, it was a tough time for Samyak in a mostly non-vegetarian selection of meals, so he had to request a custom vegetarian meal. Once we were through with breakfast, we started off for the FOSSAsia 2026 conference venue, True Digital Park West, at around 0930am Indochina Time.

As the venue was barely a kilometer away from our stay, we were able to make it to the event on foot. While the summer season had not yet arrived in Thailand, it was still extremely humid, and we wanted to head indoors as soon as possible. Skipping the first couple of early sessions gave us enough respite after a long previous day, which also meant that we could last longer that day. After passing through some floors of the True Digital Park campus and connecting with the folks from the FOSSAsia Community Cycling Trip in the Telegram chat, we were finally able to make it to the right location. Following Mishari Muqbil's advice, we took the long escalator that skipped a couple of floors entirely to get us right in front of the event reception. The one thing that took me by surprise in Thailand was the fact that the ground floor was counted as the first floor, and there was no concept of a zero level. It was a pretty interesting observation that we had to mentally train ourselves to get used to if we did not want to get lost while visiting most (if not all) multi-storeyed buildings in Bangkok.


Collection #1
Walking through the premises, which astonishingly enough consisted of both student laboratories and shopping stores in equal measure, we entered the section of halls where FOSSAsia 2026 was organized. Surely, the volunteers could have done better at pointing folks to the correct place, but this feeling was quickly overridden when I ran into a bunch of familiar and recent community friends. Getting to meet these people, mostly from APAC, at the event felt like making a victory lap to celebrate all the connections that I had painstakingly built throughout the last five years or so. Trying not to get swept away in all the interesting conversations, we were guided by a nearby volunteer to get ourselves the badges, wristbands, and swag kits from the event reception. I was able to get those for myself as I was participating in the conference as a speaker, but Samyak had to wait until the next day to obtain his badge. While we had the agency to join one of the workshops running at the time, we decided to hang out at the hallway track a little longer to discuss open source strategies with the community.


Collection #2
After a brief catch-up with Ananthu CV from the Debian Project, whom I was meeting here after having done so during GNOME Asia 2024 and FOSDEM 2025, Samyak and I made it into the Eventyay Developer Workshop room at around 1045am Indochina Time. Although we initially thought that we were late, Norbert Preining, Srivastav Auswin, and Mario Behling were just getting started with a round of introductions for the folks present there. Since I was representing the Fedora Infrastructure in this workshop, I presented the case of how we used a similar event service platform, Pretalx, for our use case, and how I was interested in learning about all the various things Eventyay had to offer. Amidst the feature run-through and tooling integrations, the workshop organizers were interested in how the Fedora Project used Pretalx for running events like CentOS Connect and Flock To Fedora. Despite the lack of proper seating in the room, I also got to meet Deepesh Nair briefly before agreeing to connect with Mario later for deeper discussions on how we could collaborate further in the future.


Collection #3
Hong Phuc Dang entered sometime later with an announcement of refreshments and some more seats for the workshop hall. This allowed the hall to fill up a whole lot more as we headed back to the hallway track to meet the likes of Aqsa Aqeel from DigitalOcean and Leon Nunes, who had just begun his open source contribution journey. It was interesting to notice how the Fedora Project was the only RPM-based distribution on the scene there, apart from our reliable downstream Red Hat Enterprise Linux representatives. The distribution aspect of the event felt mostly occupied by the folks from Canonical and Debian, thus giving us the chance to establish some relevance. I briefly met up with Anuvrat Parashar and Shivani Parashar before having a conversation with Hong, who was surprised to learn that the Fedora Project did not have a community booth. It turned out that there was another Call for Proposals form, opened up closer to the commencement timeline and separate from the one for presentations and workshops, that I, as the event owner from the Fedora Project, was not even aware of.

This was also hinted at by Mishari in one of our conversations from the day before, about the community booths being made available for free while the commercial booths were charged a reasonable fee. In our discussions, Hong assured community support, and I discussed strategic collaboration before we had some quick photographs to end the conversation. While not being able to have a community booth during FOSSAsia 2026 felt like a missed chance, I wanted to make the most of my participation as a community prospector for the Fedora Project's future participation in the event. We did not really have concrete plans on which workshops we would want to attend that day and were making our agenda on the fly. Making it back to the hallway track at around 1100am Indochina Time, I met up with the likes of Rajan Shah, Shivraj Patil, Veerkumar Patil, and Gaurav Kamathe from Red Hat, who were just arriving at the conference venue. It was hilarious to catch up with them here barely a couple of weeks after meeting them at the DevConf.IN 2026 conference in Pune, India.


Collection #4
At around 1130am Indochina Time, Samyak and I headed back into the rooms to attend presentations delivered not just in English but in a variety of languages. It was interesting to see just how open the FOSSAsia conference was to folks wanting to present their work in their own language, and when paired with a technology that live-transcribes it to a more generally known language, this could significantly lower the barrier of entry for these conferences. Most sessions, if not all, were bite-sized portions of fifteen minutes, and depending on who you are, the time management aspect of the presentation could be difficult. After exchanging the coffee coupon for a Dunkin-branded cold coffee and the meal coupon for a packed lunch, we headed out to chat with one of Samyak's friends from the Debian Project. We were able to make it back to Wendy Ha's talk on her experiences as an APAC Kubernetes Community Contributor, followed by Punsiri Boonyakiat's talk on balancing motherhood responsibilities with open source, which became a key highlight for International Women's Day.


Collection #5
On the second half of the community day, the host was doing absolutely terrific work keeping the audience entertained while the volunteer team looked into the technical problems that had crept in. Her fantastical oration with a deadpan expression did not allow our spirits to sour when the fixes were taking a little while longer. Contrary to my belief, the community day was not as occupied as I thought it would be due to its free-of-charge participation, but we were able to make the best of whoever was present on the ground. We also ran into Aaditya Singh from the GNOME Foundation and Mishari, who were just arriving at the conference venue in the afternoon, both of whom we were quite pleased to catch up with. After a quick conversation with Tamhant about the technological industry, I met up with Mario again to discuss how the Fedora Project evolved from a fragile ticket-based system to a robust dedicated event platform. It helped me understand how EventYay's Developer Preview Service could be accessed without having to establish the project environment locally.


Collection #6
As the Fedora Infrastructure's CommuniShift Project allowed for getting these deployments up and running in no time, I suggested the same to the EventYay developers, as they could find it handy to demonstrate this to the Fedora Project community. After running into Hong again, Samyak and I talked about how we could further push the adoption of the community day using interactive activities for folks hanging out on the hallway track. This would allow for greater participation on the community day, not just by those who have their talks planned but also by visitors. At around 0120pm Indochina Time, we made it to Peter Zaitsev's talk on the Percona Project's attempts to make the MySQL community stronger. Apart from the obligatory jabs hurled at Oracle, some tricky questions were also addressed around the choice between MariaDB and MySQL if PerconaDB was not an option. We went on to attend Michael Meskes' talk on Open Source Business Models, which felt quite at home for us working with the Fedora Project as part of our responsibilities.

The "free" in a free and open source software project always comes with an asterisk marking a hidden condition: the price is being paid by someone else. The price is also usually not in the form of monetary payment, and that is what ends up hurting the longevity of these communities the most. After an insightful conversation on onboarding and retaining contributors, we headed out at around 0200pm Indochina Time to get another round of caffeine. We had discussions around the state of the warring world and how multiple cancelled flights through the Middle East affected selected speakers at FOSSAsia 2026. Aaditya welcomed us to station Fedora Project folks at the GNOME Foundation community booth since there would be enough space for both. As Samyak had stickers saved up from DevConf.IN 2026's Fedora Project community representation, we could use those here. Unfortunately, we could not meet Asmit Malakannawar, who planned to tag along with Aaditya, as his flights through the Middle East were also cancelled, and there were zero provisions made for remote talks.


Collection #7
Samyak and I headed back into the presentation rooms to attend Louis Yoong's talk on Building AI-Powered Interactive Maps with Open Data at around 0300pm Indochina Time, before attending a series of lightning talks from the Women In Tech category. After attending Velia Dang's informative talk on the reviewer's point of view when using EventYay and gathering takeaways to apply to Flock To Fedora, we decided to depart from the conference after the closing notes. While there were some activities planned at Lotus' Eatery for the Pre-Event Community Gathering, we wanted to use the remaining time of the day to explore what Bangkok had to offer. We were able to make it back to Lumen Bangkok Udomsuk Station by 0445pm Indochina Time before leaving again for the BTS SkyTrain after around forty-five minutes. With a rigid agenda to go with, we planned to visit Masaru, an anime collectible franchise store located near BTS Phra Khanong, which was barely four stops away from BTS Udomsuk. Ten minutes of train ride and five minutes of walking got us where we wanted to be.


Collection #8
We spent longer there than we would care to admit, as Samyak got busy finding figurines while I hunted for some Genshin Impact branded collectibles. Surprisingly enough, I was able to get change for the 1000 Thai Baht note that I was carrying while we were booking BTS SkyTrain tickets from BTS Phra Khanong to BTS Siam once we were done with our purchases at Masaru. It was going to be a comparatively longer ride with around eight stops and then a ten-minute walk to the world-famous MBK Center. At around 0800pm Indochina Time, we made it to BTS Siam station, but we kept getting sidetracked by the sprawling independent weekend market that we wanted to explore. With Samyak and me splitting again to find our pickings, I could not help but purchase some gifts for my family, including Hawaiian shirts, tourist hats, ornamental pendants, and much more. I was glad to have my wits about me while driving a hard bargain on all these purchases, as the savings eventually allowed me to purchase some more things in the end.


Collection #9
The humidity was making it tough, but we were finally able to push through to the indoors of the MBK Center by around 0845pm Indochina Time. Coincidentally, this is where we ended up finding one of the best independent handicraft stores, where Samyak and I got ourselves a wide assortment of personalized keychains and magnets for our friends and families. As our order was a massive one, with six for myself and six for him, we decided to depart after placing the order to see if we could make it to the anime café in time. The sprawling layout of the MBK Center only made it difficult for us to find our way - apart from, of course, getting sidetracked into purchasing Thai incense and local handicrafts. Splitting from Samyak again, who got busy with his purchases, I was able to make it to the place. To my utter disappointment, the shop had already closed by 0830pm Indochina Time, and I had reached almost an hour late. I backtracked to where I last saw him, and we decided to head back to the handicraft store as we had to retrieve our orders right before the operations wrapped up for the day.

The enormous mall could be a treasure trove for those who know what they are looking for, but for the two of us exploring, it felt like an overwhelming labyrinth with similar looking pathways and deeply confusing corridors. We were finally able to make it to the handicraft store by around 0945pm Indochina Time, right at the time of the last handicraft being finalized. Since we had already cleared the bill, we headed back to BTS Siam while clicking pictures and discussing plans, only to end up boarding a rather crowded BTS SkyTrain on a weekend. By around 1045pm Indochina Time, we made it back to BTS Udomsuk, but after exploring the surrounding places to see where we could dine, we decided to order some food from the convenience of our hotel rooms using the Grab application. I got myself freshened up after requesting the hotel reception to retain the order, and I was glad to have ordered a carton of mineral water too. After a quick bite for dinner, I decided to call it a day with multiple precautionary alarms enabled to ensure that I woke up early the next day for the group photo.
Another frozen week before the Fedora 44 release, just a few notable things:
OpenSSL 4 landed in rawhide, caused some issues, and was then pulled back out by FESCo. We definitely do need to move to it for Fedora 45, but hopefully we can land it in a way that doesn't break as many things as last time.
Folks are working on it and I expect we will see it soon.
We had an aarch64 builder virthost fail to reboot with memory errors a few weeks ago. Finally got someone onsite to pull and reseat all its memory, and that seems to have done the trick. We are back to a full set of aarch64 builders again. Of course, we had enough that I doubt anyone actually noticed that some were down.
I also brought up 3 more big x86_64 builders. They should be added after freeze/sometime soon. Nice to have extra capacity there even though we aren't hurting for x86_64 builders.
Yesterday our wiki was up and down in the morning. Seems scrapers not only found the wiki, but also found that they could query time ranges for changes in Special:RecentChanges.
We put in some blocking and then increased a bunch of cpu on the backend and everything seems to be back to 'normal' now.
Until the next time...
I will be out on a family vacation next week. Our plane leaves super stupid early on Tuesday morning and I will be packing and such on Monday. So, please don't ping me: file tickets or ask others to take care of any Fedora issues you might have.
Hopefully when I am back we will be go for Fedora 44 release!
As always, comment on the fediverse: https://fosstodon.org/@nirik/116426624580451263
As my arrival flight to Bangkok was scheduled to depart at around 0700am Indian Standard Time, I had to wake up as early as 0230am Indian Standard Time on 07th March 2026. The packing had already been taken care of previously, so all I had to take care of was ensuring that I got myself an Uber ride to the Pune International Airport in time. Thankfully, unlike my experiences from DevConf.IN 2026, I was able to get one pretty quickly, and at around 0400am Indian Standard Time, I reached the airport. The check-in process went smoothly, since I was not carrying much luggage anyway to begin with on my Air India Express flight. With my physical boarding pass in hand, I headed upstairs to wait for the immigration booths to open for the day. After forty-five minutes of waiting, the gates finally opened up, and I made it to the security check after smooth processing. The fact that I had my Thailand Digital Arrival Card registration done in advance helped me get through to the designated gate 1A without much hassle. I ended up having a lot of time left on my hands, so I decided to connect with the Egencia service about the troubled accommodation booking while I waited for boarding to begin.


Collection #1
Moving away from two noisy groups of travellers - one with senior citizens and one with rowdy men - I got myself a place to sit as I rang up the Egencia customer care helpline. Since 0530am Indian Standard Time was still a little early for their working hours, it took me a while to connect with a human representative. A helpful attendant attempted to connect with the Lumen Bangkok Udomsuk Station hotel employees, but that did not work out. I decided to board the flight anyway at around 0645am Indian Standard Time and leave the concerns about the troubled accommodation booking for when I would have reached Bangkok. There was not much that I could have done at that time to help the situation, and besides the issues that we had with Egencia regarding the flight confirmation, this worry would most likely have soured my entire experience. After a quick switch of seats from 7A to 6A, on a fellow passenger's request, I decided to watch some movies, such as Code 3 (2025) and Zootopia 2 (2025), on my phone. The Kebab Platter was soon served, and that allowed me to catch up on some rest that I was lacking due to having to wake up early in the morning just to make it to the airport.


Collection #2
The flight soon landed at the Suvarnabhumi Bangkok International Airport at around 1230pm Indochina Time, and after connecting with my family to let them know about my safe arrival, I headed swiftly into the immigration queue. It took me about forty-five minutes to make it through the overcrowded queue to the other side, where I found that the designated luggage belt #21 had finished delivering all of its luggage. After crossing a big group of Chinese travellers, I got myself a data plan from True 5G at the airport exit. 599 Thai Baht for 8 days of unlimited service was a great deal, and that allowed me to stay connected with both my family and friends, and with Samyak Jain, with whom I was representing the Fedora Project APAC community at FOSSAsia 2026. The humidity I faced after stepping out of the airport took me by surprise because it was even warmer there than it was in India. After about thirty minutes of unsuccessfully looking for the Grab ride that I had booked, I finally got one whose driver did a great job crossing over the language barrier and explaining where to find them amidst a rather crowded station of rides available for hire and buses that went into the city.


Collection #3
I connected with Samyak while I started off on the road at around 0230pm Indochina Time to instruct him about Airport Gate #4, where he could avail himself of a Grab ride, and that way he could avoid wasting that thirty minutes like I did. While connecting with my family during the Grab ride, I also commended the driver for just how clear they were with their communication while making the best utilization of the Grab application's live message translation feature for international travellers. I did not realize how swiftly I managed to reach the hotel at about 0315pm Indochina Time after all the immigration and cabbing troubles. Thankfully, the folks at the Lumen Bangkok Udomsuk Station gave me no trouble with the booking, and I was able to check into my room #703 rather swiftly. In contrast to the experience that I thought I would have, they also topped off their welcoming gesture with a cool popsicle-like snack as I headed downstairs to fetch the passport that I had left at the reception. With one less thing to worry about, Samyak and I still had to make it to the BTS Chit Lom station by 0430pm Indochina Time to meet up and join the FOSSAsia Community Cycling trip.


Collection #4
As Samyak touched down at around 0230pm Indochina Time, I had to proceed by myself to meet up with Mishari Muqbil after a quick changeover in my hotel room. After a brief struggle with finding an ATM and then losing about 250 Thai Baht to the international conversion, I made it back to the BTS Udomsuk station, which only accepted cash (and did not even provide receipts to track expenses!) but was thankfully situated right in front of my hotel. It took me thirty minutes to make it to the BTS Chit Lom station at around 0415pm Indochina Time, but I had to spend the remaining thirty minutes chasing Mishari's waypoint on Google Maps. As I had flown my bicycle helmet all the way from Pune, I had no plans of skipping the FOSSAsia Community Cycling Trip, and while it was my first time visiting Thailand, I did find myself audaciously picking trains and walking through as if I were exploring my own backyard. I think I might have to credit the hotel reception and the BTS security for their welcoming behaviour, which made me want to leave the hotel room right after arrival because I genuinely wanted to experience more of what Bangkok had to offer, even when I was dead tired from the travels.


Collection #5
I was finally able to make it to the river jetty, where I met up with Mishari, Michael Christen, Anuvrat Parashar, Shivani Parashar, and others. The adventurous ordeal of catching up with them was rewarded with a scenic boat ride to the starting point of the FOSSAsia Community Cycling Trip. I managed to learn more about what Michael does with his work on YaCy and shared what I do as a part of the Fedora Council, the Fedora Mindshare, and the Fedora Infrastructure teams in the Fedora Project community activities. There were folks there for the first time like myself, and there were also those who had attended FOSSAsia since its beginning, so it was enlightening to know their experiences from this conference. During a brisk walk-and-talk with Anuvrat and other participants to the starting point of the cycling trip, I got to know about his frequent involvement in the PyCon organization and DGPLUG communities. Once we were joined by a couple of Mishari's friends and Wendy Ha, we began unlocking the rental bicycles using our HelloRide application, and Mishari gave us a quick orientation about street safety regulations at around 0530pm Indochina Time.


Collection #6
And there began our slow-paced ride through the alleys and streets of Bangkok! With Mishari and his friends leading our collective, I found myself at the start of our sequence, discussing with Bee about his involvement in technology. As a proving ground for their cartographic skills, we wove through a lot of parks, and I noticed a great number of cats along the various pathways we took. Since the cycling trip did not have many elevation changes to deal with, I took the liberty of falling behind in the sequence to chat with the likes of Wendy and Michael every now and then. At around 0630pm Indochina Time, we made our first stop at an independent family-owned chocolate store where we sampled many chocolates and purchased some beverages too. We were able to keep our rental bicycles safe using Mishari's (as Shivani hilariously named) "CYCLE-ogical protection," which mostly consisted of a loosely placed rope. This first stop also allowed Samyak to finally catch up with our collective, as I discovered him coincidentally heading in the opposite direction when we were on our way out. Tracking his location over WhatsApp's location sharing definitely seems to have been the right choice.


Collection #7
It was rather funny to finally catch up with Samyak on a random Bangkok evening street after having missed the chance at the airport and at the hotel. Our collective made the next stop at another independent family-owned ice cream parlour located in a deep alleyway at around 0700pm Indochina Time. While the location was tucked away in a seemingly long-forgotten corner of Bangkok, the place definitely had a very home-like feeling to it. We, of course, got busy sampling undiscovered flavours and ordering favourite ones for the break. After spending another thirty minutes there with my Butterfly Pea cold cone (and some obligatory badly written jokes by Mishari), we had a bunch of photographs clicked. Once we departed from the ice cream parlour, we found ourselves pausing every now and then due to certain cartographically inclined confusions, but our "vibe-riding" (as I hilariously named our fun experience) never had a dull moment. Through the riverside pathways to a restricted university, we seemed to be in the front seat of exploring what stories these streets and alleys of Bangkok had to offer—and as tired as I was, it still felt like we were just getting started and there was more to discover!


Collection #8
While we did have a bunch of registrations for the FOSSAsia community cycling trip, we barely had half of them turn up, so Mishari decided that it would be best if we found ourselves a dining place. After he quickly helped Samyak with his bicycle height, we caught up with the remaining group for yet another round of photographs - this time in front of a Bumblebee statue at a cross-section, if you can believe it. We also halted in front of the Royal Palace for a quick shoot before inching closer to the nearest drop-off point for the HelloRide rental bicycle center. Thankfully, I was able to take Bee's sweet custom ride for a quick spin before leaving, as it was filled to the brim with all the bells and whistles for an exhilarating street cycling experience. With about 80 Thai Baht spent for the HelloRide trip, some of us made it to the KemKon Vegan Experience Restaurant. As both Samyak and I had skipped lunch and exerted ourselves since the morning, we were starving. I was glad to note that while the menu was completely vegan, I still ended up liking the Make-believe Fried Fish Fritters that I had ordered for myself, both for the delicious taste and the quick service.


Collection #9
Apart from the nutritional values, of course, it was astonishing to notice just how close the vegan dish ended up tasting like a non-vegetarian one. Adding some spicy chili-flavoured oil on top of it all made it taste like heaven, and I could not see a better way to end the night than with this amazing meal. After clearing our bills, Shivani and Anuvrat stayed back at the market to explore some more, while Samyak, Mishari, and I headed back to the main road to catch a Grab ride to the hotel. It was magical just how we felt at home connecting with folks from various free and open-source software communities, all while doing activities like riding bicycles or sharing meals. The two of us were soon back in our hotel rooms, and apart from one misadventure of requiring the hotel staff's assistance to unlock the heavily jammed bathroom door, our arrival in Bangkok was super awesome. After a quick message to Julia Bley to inform her about our safe arrival at the conference and to conclude our saga of troubled travels, I called it a day at around 1130pm Indochina Time. There was so much to look forward to at the Community Day in FOSSAsia 2026, and I wanted to ensure that I was rested enough to experience the same.
The Fedora wiki is under very heavy load due to a large number of requests from bots.
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora Project.
Week: 13 – 17 Apr 2026
This team is taking care of day-to-day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team is taking care of day-to-day business regarding Fedora releases.
It’s responsible for releases, the retirement process of packages, and package builds.
Ticket tracker
This is the summary of the work done regarding the RISC-V architecture in Fedora.
This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.
This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.
This team is working on keeping EPEL running and helping package things.
This team is working on improving user experience: providing artwork, usability,
and general design services to the Fedora Project.
If you have any questions or feedback, please respond to this report or contact us on the #admin:fedoraproject.org channel on Matrix.
The post Community Update – Week 16 appeared first on Fedora Community Blog.
Rotation of Copr server credentials. There is a possibility of a short outage.
Affected Services:
copr-frontend - https://copr.fedorainfracloud.org copr-backend - https://copr-be.cloud.fedoraproject.org/
I caught up with my YouTube, two good videos from NotJustBikes about cars and oil being bad.
I really liked all the links in leadership this week, all highly recommended.
The Vasa - “Understanding why the Vasa sank also gets to something important about how organisations fail.”
This Monday I talked to a couple of friends about work while wearing my Genesi t-shirt: a teacher going back to school after Spring break and an IT guy explaining the nightmare of the RTO threat. I love coincidences :-) Why do I say that?

As I wrote a few years ago about working from home: “After graduating from university, I worked from home for a small US-based company. I never met my boss while working there and met only one of my colleagues at a conference in Brussels. I eventually met my boss some seven years later, when I gave a talk at a conference in Washington, D.C.” The company was Genesi, and that was the work culture which defines me. I received the t-shirt on the photo during my visit to Washington, D.C.. Luckily, I’m still living mostly this way, visiting the office 1-2 times a week: working hybrid.
Imagine the contrast I felt when I realized that I was talking to someone who works on a very strict, fixed schedule. For a teacher, vacation is only possible when there is no school, like Spring break in Hungary last week. There is a fixed schedule all year round. Compare that to my Genesi years: no regular meetings, communicating by e-mail and chat, and working when it was the right time for me: sometimes in the morning, other days during the night. It was fantastic, especially with small kids. I have been working flexible hours ever since, limited only by meetings.
COVID made remote work less of a niche. Sometimes even mandatory. Many people in IT started to work remotely. Most of our work does not require a fixed place or time. On-line meetings became the norm, teams are often not location based anymore but scattered around the globe. As long as you have an Internet connection and a noise canceling microphone you can join a meeting from anywhere, even from the top of a mountain. It is easy to get used to this flexibility and very difficult to give it up.
RTO became a periodic threat. It’s a lot cheaper to announce RTO and let people leave voluntarily than to lay them off. Quite a few friends write to me every once in a while that they have to return to the office in a few weeks' time. Then, a few weeks later, they happily share that they were given an exemption: the company does not want them to leave after all…
Wearing my Genesi t-shirt all these problems feel so distant. I hope that it stays this way!
For a couple of months now, we have been experimenting with using AI/LLM tools for code review for our projects under the storaged.org umbrella. We originally started with testing various tools for pull request reviews available on GitHub. For that, we are currently using CodeRabbit.
But the newly added code is not the only code that should be reviewed. Our projects have a long history: the first version of libblockdev was released in 2014 and the history of UDisks goes back to 2008. The code, of course, went through many changes and rewrites since then and many people reviewed it. And we have tests and use static analysis and other tools to ensure code quality. But that doesn’t mean there are no bugs or other issues. We know there are, our users report them somewhat regularly.
Libblockdev together with UDisks contain about 100 thousand lines of C code. Doing a full review of all that code would be a huge undertaking. For a human. A machine can do the review in a matter of minutes.
This is a good place to stop and address some questions and concerns you might have. Yes, LLMs, or what most people call AI these days, are not perfect, they make mistakes and results can be unpredictable or completely hallucinated. These are valid issues, but for this specific use case – reviewing existing code written by humans – these are less of a concern. Each issue found was first reviewed by the developer to make sure it is actually a real issue and the proposed fix was also reviewed before applying. And the resulting change went through the standard process before including it in the project – pull request with a review by another developer.
We are also well aware that the code having been reviewed by these tools doesn’t mean there aren’t any issues anymore. But the issues that were found and fixed were real issues and fixing even one issue in any software is always a net positive regardless of whether the issue was found by tests, static analysis, a quality engineer or in this case an LLM/AI tool.
With this out of the way, let’s move to the actual review process, and more importantly, the findings.
The review process itself is pretty simple. We are currently using Anthropic’s Claude with the Opus 4.6 model. We did experiment with other tools and models as well, but so far Opus 4.6 seems to produce the best results (as of March 2026).
We used some basic phrases as prompts, without asking for any specific areas to review. A basic variant of “do code review of the existing code base” was enough for our use case (more about prompts and some unexpected results later). The produced review is usually a formatted numbered list of issues sorted by priority and well documented, with arguments supporting the findings. The short description of the issue was usually good enough for deciding whether the issue is real or a false positive.
Crypto: Wrong strerror_l usage for ioctl errors in OPAL
File: src/plugins/crypto.c:3911
ioctl returns -1 and sets errno, but the code uses strerror_l(-ret, ...), which gives strerror_l(1, ...) (always EPERM).
Fix: Use strerror_l(errno, c_locale).
Example of an issue found in libblockdev
One thing we noticed quite quickly was that one review is not enough. The model seems to stop after a certain number of issues, and after fixing these, running the prompt again produces a new set of issues. We did a number of runs for each project (both with and without a new context) and by the last run, the number of real issues found was close to zero. With a decreasing number of “real” issues in the code, the false positives and “nitpicky” issues in the reports seemed to outnumber the newly discovered issues. This is understandable and kind of expected behavior, and we’ve definitely seen human reviewers behave similarly when reviewing code that didn’t have any glaring issues to point out.
We also experimented with limiting the review scope in the prompt: either to one specific module or plugin, or even to just one file. This approach seems to produce slightly better results within the limited scope, but also produces more false-positive reports as the model doesn’t typically recognize relationships and implications at a global scale. But even with this caveat, this is still very well worth the effort.
Now, let’s talk about the issues that were found and fixed. In total, in all of the “review runs”, 235 issues were reported: 110 in libblockdev and 125 in UDisks. (We didn’t stop at these projects, our other projects went through the same process, but for the sake of simplicity, we’ll focus only on these two here.)
Out of these, the largest group, 41 in total, were related to improper resource handling, mostly memory leaks, closely followed by various error handling issues. And because none of the developers working on these projects are native speakers, third place belongs to fixing typos, grammar and documentation with 27 issues fixed.
The “winners” here show where most of the existing issues in our code are hidden: in various error paths (upon closer inspection, most of the memory leaks were located in the error paths as well). This makes sense. These are hard to cover in automated tests and finding issues in error paths is mostly relying on manual testing, static analysis and of course, on code review.
Fixing grammar and typos might not seem that important, but these also include public API documentation where a misunderstanding about usage or memory ownership can lead to bugs in other projects using this API. And we even found a few nonsensical error messages that would definitely confuse users when shown.
Crypto: Stale strerror_l(0) produces "Success" in error message
File: src/plugins/crypto.c:2255
ret is 0 (from a successful crypt_load), so the error message reads "Label can be set only on LUKS 2 devices: Success".
Fix: Remove the strerror_l call from this error message (it is a type-check error, not a syscall error).
Again in libblockdev. This very much resembles the famous "Task failed successfully" error message.
Memory management, error paths and grammar. Important issues and it’s good these are being fixed, but what about something more serious?
During our experiments with AI review, we didn’t find any serious flaws or security issues. That, of course, doesn’t mean there are none in our code, but if we knew about issues like these, we wouldn’t be doing this exercise. What we can do is test the tools on a security issue we know exists. Or to be more precise, a security issue that existed.
In February, two related security issues were reported against UDisks – CVE-2026-26103 and CVE-2026-26104. Both were caused by missing authorization checks in some UDisks public functions. UDisks is a daemon that runs with root privileges and allows unprivileged users to perform certain operations when certain conditions (based on the operation, the type of device, the type of session the user is logged in to, etc.) are met. We use Polkit for this. And in these two cases the authorization check was missing from the code. This went through review by two developers, and neither of us noticed. Now the question is, would AI notice?
The short answer is: yes, it would:
Missing PolicyKit Authorization on LUKS Header Backup/Restore
Files:
src/udiskslinuxencrypted.c:1356-1442 (handle_header_backup)
src/udiskslinuxblock.c:4238-4312 (handle_restore_encrypted_header)
Both D-Bus method handlers perform privileged operations without any PolicyKit authorization check. Compare with other handlers in the same file (lines 524, 784, 974, 1133, 1288) which all call udisks_daemon_util_check_authorization_sync(). There are also no PolicyKit action IDs defined in data/org.freedesktop.UDisks2.policy.in for these operations.
Severity: CRITICAL
Both CVEs clearly pointed out in the review
But even if this result looks impressive, it was not that easy. As I already mentioned, running the reviews multiple times and with slightly different prompts can change the results a lot. And the same happened here. The first two general “do code review” prompts did not uncover these issues. For the third run (with a new context), I explicitly asked to focus on potential security issues, and the tool happily obliged:
I’ll conduct a security-focused code review of the UDisks codebase. Since this is a privileged system daemon, security issues are critical. Let me explore several attack surface areas in parallel.
and one of the spawned agents explicitly did “review privilege and auth issues” in order to finally find this.
So even AI isn’t infallible and doesn’t just spot everything every time. On the other hand, neither did two senior software engineers working on the project when reviewing the pull request introducing this, so maybe we shouldn’t start throwing stones either.
As mentioned multiple times already, the exact prompt wording matters. We have found that what we simply refer to as “code review” can lead to different results based on the wording and context.
Asking simply to “Do a code review on this project” usually leads to a high-level description of the project with a few key findings. These might include some specific issues and bugs, but more often the result is just a summary of overall “code health” of the project. In our case, praising our projects for clean and readable code and good test coverage. This is nice to read, but doesn’t help much.
Overview
UDisks is a well-maintained ~160-file C project (v2.12.0) implementing a root-running storage management daemon using GLib/GObject and D-Bus. The codebase is in active maintenance with recent work focused on stability and safety hardening rather than new features.
…
Security (Strong)
For a daemon running as root with direct hardware access, the security posture is excellent:
…
Code Quality Observations
Positives:
- Consistent coding style (GNU C, spaces-only, proper emacs modelines)
- Naming conventions (UDisks prefix, snake_case functions) applied uniformly
…
Technical Debt:
- ~80+ TODO/FIXME/XXX comments scattered across the codebase. Notable clusters: …
Summary
This is a mature, security-conscious codebase. The main areas for improvement are clearing the TODO backlog and tightening a few intermediate NULL checks in getter chains. No critical issues found.
Code review of UDisks. Some TODOs and FIXMEs, but other than that it's perfect, right?
But if we ask to “Do a code review of the existing code base and report any issues or bugs you find” instead, it actually goes through the code more thoroughly and reports individual issues. And suddenly, our great project now has 16 issues, three of them high priority.
Summary
16 issues found across the daemon core, modules, client library, and CLI tool.
Severity  Count  Key areas
High      3      NULL derefs in partition ops (daemon crash), use-after-free in LVM2, double-close in LSM
Medium    9      Dead code in flag checks, empty DM name, insecure passphrase handling, memory leaks, wrong D-Bus completion
Low       4      Missing bounds checks, minor memory leaks in CLI tool, TOCTTOU race
16 new issues found. Not that perfect after all.
Another interesting thing we found is that "code" in a code review often doesn't include tests. This might be a problem specific to our projects, where the tests are somewhat separated: for both libblockdev and UDisks the test suite is part of the same repository, but it is written in Python (in contrast to C), and for UDisks the biggest part of the test suite uses the DBus API, making it even more distinct. But that doesn't change the fact that without explicitly asking for a "code review of the test suite", the tests were completely ignored. When included, issues in tests are actually the second-largest group of issues found in this experiment. Looks like we might need tests for our tests. And that's only for libblockdev; the UDisks "test suite code review" is still in progress.
And the same "tests are not code" issue exists for the other non-code parts of the project: documentation, man pages and various YAML config files. This shows that a simple one-line prompt is not enough. For a one-time review of the entire project, as we did here, writing the prompts manually one by one and trying different tactics and approaches might be good enough, but in the future more detailed instructions will surely be needed.
The obvious next step is to prepare a skill for Claude that would include all these instructions for future code reviews. We don’t intend to do a thorough code review of the entire existing code base for every change (that’s where the AI-assisted code review on pull requests takes over), but we definitely intend to continue with this in the future and are looking forward to new and better models.
Even though this started just as an experiment with the tools currently available to us, using Claude and Opus 4.6 for code review showed some really interesting results. We were able to quickly fix more than two hundred issues in our code (and that's counting only libblockdev and UDisks), and even though these were not critical or security bugs, this will definitely improve the projects. We will continue working with AI/LLM tools and hopefully eliminate even more issues, and also speed up bringing new features – both by directly implementing them with the help of Claude and simply by offloading some of the other work to it.
One of the main abilities of a debugger is setting breakpoints.
GDB, the GNU Project Debugger, now introduces an experimental feature
called source-tracking breakpoints, which tracks the source line a breakpoint
was set on.
Imagine you are debugging: you set breakpoints on a bunch of
source lines, inspect some values, and get ideas about how to change your
code. You edit the source and recompile, but keep your GDB session running
and type run to reload the newly compiled executable. Because you changed
the source, the breakpoint line numbers shifted. Right now, you have to
disable the existing breakpoints and set new ones.
GDB source-tracking breakpoints change this situation. When the feature is
enabled and you set a breakpoint using file:line notation, GDB
captures a small window of the surrounding source code. When you recompile
and reload the executable, GDB adjusts any breakpoints whose lines shifted
due to source changes. This is especially helpful in ad-hoc debug sessions
where you want to keep debugging without manually resetting breakpoints
after each edit-compile cycle.
To enable the source-tracking feature, run:
(gdb) set breakpoint source-tracking enabled on
Set a breakpoint using file:line notation:
(gdb) break myfile.c:42
Breakpoint 1 at 0x401234: file myfile.c, line 42.
GDB now tracks the source around this line. The info breakpoints command
shows whether a breakpoint is tracked:
(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y 0x0000000000401234 in calculate at myfile.c:42
source-tracking enabled (tracking 3 lines around line 42)
Now edit the source — say a few lines are added above the breakpoint,
shifting it from line 42 to line 45. After recompiling and reloading the
executable with run, GDB resets the breakpoint to the new line and displays:
Breakpoint 1 adjusted from line 42 to line 45.
Run info breakpoints again to confirm the new location:
(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y 0x0000000000401256 in calculate at myfile.c:45
source-tracking enabled (tracking 3 lines around line 45)
As you can see, GDB updated the breakpoint line to match the new location.
The matching algorithm requires an exact string match of the captured source
lines. Whitespace-only changes or trivial reformatting of the tracked lines
will confuse the matcher and may cause the breakpoint not to be found.
GDB only searches within a 12-line window around the original location. If
the code shifted by more than that — for example, because a large block was
inserted above — the breakpoint will not be found. GDB will keep the
original location and print a warning:
warning: Breakpoint 1 source code not found after reload, keeping original
location.
Source context cannot be captured when a breakpoint is created pending
(e.g., with set breakpoint pending on), because no symbol table is available
yet. When the breakpoint later resolves to a location, it will not be
source-tracked.
Source tracking is not supported for ranged breakpoints (set with
break-range).
Breakpoints on inline functions that expand to multiple locations are not
source-tracked, as each location may have moved differently.
This feature is not yet available in a stable GDB release. There are two
ways to try it.
A pre-built package is available through a COPR repository. Enable it and
install:
sudo dnf copr enable ahajkova/GDB-source-tracking-breakpoints
sudo dnf upgrade gdb
To disable the repository again after testing:
sudo dnf copr disable ahajkova/GDB-source-tracking-breakpoints
The COPR project page is at:
https://copr.fedorainfracloud.org/coprs/ahajkova/GDB-source-tracking-breakpoints/
GDB source-tracking breakpoints are currently under upstream review. The GDB
manual at https://sourceware.org/gdb/current/onlinedocs/gdb.html/Set-Breaks.html
covers all available breakpoint commands. If you try this feature out and
hit any kind of unexpected behavior, feedback is very welcome; you can
follow and respond to the upstream patch discussion on the GDB mailing list
at https://sourceware.org/pipermail/gdb-patches/2026-April/226349.html
There's a disconnect in the AI Engineering space right now. I think the open source community has already risen to the occasion to bridge the gap, but I don't see any signal that this is well understood or widely adopted. The industry is overwhelmingly focused on building agents from scratch via custom frameworks, bespoke orchestration layers, hand-rolled tool-calling loops, and so on, when many of the hard problems in that layer of the stack have already been solved. The building block exists. It's open source. It's called goose.
I think for over 90% of use cases, if you're spending your time implementing an agent from scratch, you're already behind or potentially have already lost the race. My hypothesis is that Goose is the building block. It's the small, composable thing that becomes powerful when you wrap it in what the industry is rapidly agreeing is called the Harness.
Most people hear "goose" and think either "another AI coding assistant" or "another AI chatbot" (depending on how they came across goose and how they use it). That misunderstanding is the problem. Goose is not a coding assistant. It is not a chatbot. It is not a Claude Code competitor, though it can be configured to act as all of those things. At its core, goose is a small, configurable agent runtime with an extension-based architecture that can be composed into virtually anything.
It operates on three components:
Interface: Desktop app or CLI/TUI that collects user input and displays output.
Agent: The core logic engine that manages the interactive loop: sending requests to LLM providers, orchestrating tool calls, and handling context revision.
Extensions: Pluggable components built on the Model Context Protocol (MCP) that provide specific tools and capabilities.
A small core with a lot of power delivered through native extensions, external plugins, and configuration options. The agent core itself is minimal: an interactive loop plus context management. That's it. All capabilities come through the extension system.
You can strip goose down to nothing. No external capabilities. No tool calling. No skills. No plugins. You can even configure it so it cannot access the internet, only the inference service to talk to the model (which can be local). At that point, it's a plain chatbot with no agency whatsoever.
Or you can go the other direction entirely.
Configure goose with the Developer extension, Computer Controller, Memory, and a handful of MCP servers and you have a working replacement for Claude Code, Codex, Gemini CLI, OpenCode, or any other similar tool. Same capabilities, no vendor lock-in, and you choose your own inference provider from over 25 options (at the time of this writing), including Anthropic, OpenAI, Google Gemini, Groq, Mistral, and more. You can run fully local inference via goose's native inference provider, or offload to Ollama, Ramalama, LM Studio, or Docker Model Runner. The full list of providers is in the goose documentation.
If you put this together, you're well on your way to unlocking the full potential but you're just getting started.
Where goose gets interesting is its composition model. Goose Recipes are reusable, shareable workflow definitions that package together instructions, extensions, parameters, provider settings, retry logic, and structured response schemas. A recipe can be as simple as a single prompt with a specific extension configuration. Alternatively it can be sophisticated, composed of subrecipes where each subrecipe is effectively another goose agent with its own configuration: its own extensions, plugins, inference provider, system prompt, and skills.
Subrecipes run in isolated sessions with no shared conversation history, memory, or state. The main recipe's agent decides when to invoke them, can run them sequentially or in parallel, and chains their outputs through conversation context. Compositional agent orchestration without writing a single line of framework code.
You're not writing an orchestration layer. You're not building a DAG executor. You're not implementing tool-calling logic. You're writing YAML that describes what you want done and goose handles the how.
You can take this all the way to the extreme of a fully autonomous software factory like the one Steve Yegge outlines in his now-infamous blog post, "Welcome to Gas Town", implemented via his Gastown project. Gastown is a multi-agent workspace manager for orchestrating Claude Code, GitHub Copilot, Codex, Gemini, and other AI agents with persistent work tracking. It's a Go application with concepts like Mayors, Rigs, Polecats, Hooks, Convoys, and Beads. It's a real engineering effort to coordinate 20-30 agents on a codebase.
You can do exactly that by using goose as the building block. The open source community did it. They looked at Gastown and re-implemented its core concepts using goose's native capabilities. The result is Goosetown. Goosetown is a multi-agent coordination system that orchestrates "flocks" of AI agents (researchers, writers, workers, reviewers) to decompose and execute complex tasks. Goosetown uses goose's subagent delegation, skills system for role-based specialization, inter-agent communication via a broadcast channel called the "Town Wall," and multi-model support for adversarial cross-reviews where different LLMs review each other's work.
If you look at the code, it's just a few flat files, some shell scripts, some skills markdown, and some agent definitions.
All of this built on top of goose. Not alongside it. Not wrapping it. On it. Using the primitives goose already provides: skills, subagents, extensions, and recipes.
Goose also runs as a daemon, exposing itself to other applications via the Agent Client Protocol (ACP) (a standardized JSON-RPC protocol developed by Zed Industries). ACP does for AI agents what LSP did for language servers. ACP decouples agents from editors and frontends, so goose can be embedded directly into Zed, JetBrains, Neovim, or any ACP-compatible environment.
The composability runs both directions. Goose can also consume other ACP agents as providers, routing its LLM calls through Claude Code, Codex, or Gemini while keeping its own extension ecosystem and UI. As Adrian Cole wrote in his blog post "How to Break Up with Your Agent":
"Pick the UI you like. Pick the agent you like. They don't have to be the same thing."
This bidirectional composability — goose as a component and goose as an orchestrator — is what separates it from other agent tools.
Goose is fully open source under the leadership of the Agentic AI Foundation (AAIF), which provides vendor-neutral governance under the umbrella of the Linux Foundation. AAIF also hosts the Model Context Protocol (MCP) itself, so the standards goose builds on are governed with the same neutrality.
This matters. When you build your workflows on goose, you're building on a foundation governed by a neutral body with a Governing Board, a Technical Committee, and a transparent contribution model. This is the same open, collaborative, and neutral model that made Linux and Kubernetes into reliable core components of the entire software industry, and it's the same reason I think it's worth investing time and energy into.
It's no secret I'm an open source nerd, and goose checks all the boxes.
We've collectively been on a journey. First it was Prompt Engineering, crafting the right words to get the right output. Then it was Context Engineering, making sure the model has the right information at the right time. Now, it seems we've arrived at the next turn in this adventure we all find ourselves in: Harness Engineering.
Ralph Bean nails this in his blog post "What Even Is the Harness?". The harness is the enablement layer. It's everything you add to the agent runtime that gives you control over your outcomes:
"Harness — the enablement layer. AGENTS.md files, skills, custom tools, hand-crafted linters, system prompts for task-oriented agents. These are the things you engineer, iteratively, to increase the chances the agent gets things right. This is what Birgitta Böckeler calls the user harness and is where Mitchell Hashimoto's attention lives."
—Ralph Bean
Read that again. The harness is not the agent. The harness is what you add to
the agent. The AGENTS.md files. The skills. The custom MCP tools. The
hand-crafted linters. The system prompts. The recipes and subrecipes. The
extension configurations. The provider choices. The permission policies.
This is where your engineering effort belongs. Not in building the interactive loop, or implementing tool-calling JSON parsing, or writing context window management, or building MCP client libraries. Goose already does all of that and does so with the full backing of the AAIF, the Linux Foundation, and a vibrant open source community.
In most cases, and I'd argue almost all cases, your job is to build the harness.
I think for over 90% of use cases where someone is building an agent today, goose is a better starting point than a blank text editor or a vibe coding session (are we calling it Agentic Engineering yet?).
If you need a coding assistant, goose does that. If you need a research agent, configure goose with web scraping extensions and a research-focused recipe or skill. If you need a CI/CD bot, run goose in daemon mode with ACP or orchestrate it with scripts/recipes in your CI job runner of choice. If you need multi-agent orchestration, compose goose instances with subrecipes or build a Goosetown-style flock. If you need local-only, air-gapped inference, point goose at Ollama, Ramalama, LM Studio, or its native inference provider. If you need to integrate with your existing editor, goose speaks ACP natively or you can set GOOSE_PROMPT_EDITOR and run the whole flow from inside your editor of choice. If you need vendor-neutral governance, it's under the Linux Foundation umbrella via AAIF.
The remaining 10%? Those are the genuinely novel agent architectures, the research projects pushing boundaries, the use cases where you do need to control every byte of the agent loop. For those, build from scratch. For everything else, build the harness. I'm not saying you can't build agents from scratch. I'm simply suggesting that you probably don't need to.
If you're a professional technologist or an aspiring AI Engineer, I'd encourage you to shift your mental model. Stop thinking about building agents. Start thinking about harnessing them. At this point in the AI hype cycle, the agent is mature enough to be the commodity. The harness is your competitive advantage.
Install goose. Strip it down to nothing and build it back up. Write a recipe. Compose some subrecipes. Add skills. Configure extensions. Point it at different providers. Run it as a daemon. Embed it in your editor. Build a flock. Engineer the harness.
Go forth and harness your agents.
Happy hacking. <3
Version 4.11.0 of syslog-ng contains contributions from Databricks related to OAuth2 authentication. Recently, they published a blog about how this enables their customers to send logs to their data lake using syslog-ng and the OpenTelemetry protocol.
The syslog-ng project received two contributions from Databricks in the last weeks of 2025. The first one turned the already existing OAuth2 support generic and extensible, so it can be used anywhere, not just with Microsoft Azure (but of course, Azure compatibility was preserved). The next pull request was built on the first one and enabled OAuth2 support for gRPC-based destinations, like OpenTelemetry, Loki, BigQuery, PubSub, ClickHouse, etc. These changes were released as part of the syslog-ng 4.11.0 release. You can read more about these in the release notes at https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.11.0
Besides an excellent overview about syslog-ng, the related Databricks blog also provides step-by-step instructions on how to use syslog-ng with their product. You can read it at: https://community.databricks.com/t5/technical-blog/streaming-syslog-ng-data-to-your-lakehouse-powered-by-zerobus/ba-p/153979

Originally published at https://www.syslog-ng.com/community/b/blog/posts/streaming-syslog-ng-data-to-your-lakehouse-using-opentelemetry
The best way to set up a kanban board is…whatever way works best for you. I have at least two distinct styles of board setup across three tools (yes, I have a problem) because that’s what works best for me. How you set up your boards is a matter of style. The most important thing is to set it up in such a way that you’ll actually use it — an unused board is full of lies and can cause confusion with your collaborators. Since I often get asked for help on this, it seems worth writing down some considerations.
Most tools I’ve used assume a left-to-right flow. I recently switched to using a right-to-left flow after reading Philippe Bourgau’s post on the topic. Using right-to-left means you’re starting with the stuff that’s in progress instead of the backlog. You “pull” cards instead of “pushing” them. I made the suggestion at work, and folks seem to generally like the change. The main problem is that a lot of tools I’ve used treat the leftmost column as the default starting point, so the card creation experience involves an extra click or two.
This is one that I’ve often seen people overthink. In the simplest configuration, you have “to do”, “doing”, and “done.” That’s often enough. Simplicity is good. On the other end, one of the boards I use at work has something like a dozen columns. Not every card flows through every column, so it’s not as wild as it sounds. Much of the column sprawl is because the tool we use doesn’t support swimlanes. I would normally not suggest double-digit columns, but it works well in this case.
A good rule for determining whether a column is necessary: if it never has any cards, or cards only live in it briefly, it’s not worthwhile.
Here are a few examples of how I have different boards set up.
The meal planning board my wife and I use to plan the week’s dinner has three columns: ideas, need ingredients, and planned. “Ideas” is the backlog, “need ingredients” means we intend to cook it but first we have to go grocery shopping, and “planned” is ready to cook.
The board I use to track laundry has five columns: dirty (the backlog), ready (sorted), washing (duh), drying (also duh), and folding (triple duh). The folding column can probably go away, but sometimes the basket sits for a few hours or until the next day.
My board for Duck Alignment Academy has five columns (plus one). “New” is for ideas that I haven’t really fleshed out yet. “Ready” is for posts that I could sit down and write. “In progress” is for posts that I am currently writing. “Scheduled” is for completed posts waiting to publish (I wish this had more cards in it). “Done” is for posts that have published. There’s an extra “archived” column that I move cards to after I send that month’s Duck Alignment Academy newsletter. (I have a column because the tool doesn’t support archiving cards directly.)
Most tools let you apply labels, add due dates, create custom fields, and so on. I’ve certainly made use of that when setting up boards, but I find myself not really paying attention to it most of the time. There’s often an urge to design a system so that everything will be perfectly organized. But in the same way that a backup that you never test restores from is not reliable, metadata that you only write is not useful. Metadata is so easy to overoptimize, because it requires getting everyone using the board to buy in and then also to have a reason to use it. I wrote a similar piece about issue labels earlier this year.
In my experience, it’s almost always better to ignore metadata when initially creating a board. Only when you can identify a concrete problem that you’re actually experiencing should you add metadata.
A great example of this is in my Duck Alignment Academy board. When I was first creating the board, I was also creating the website. I had cards for tasks like domain registration, hosting setup, page creation, and so on. In the four-plus years since I launched the site, the cards have almost exclusively been blog posts. The “blog post” label that I created doesn’t add a lot of value. What does add value are two custom fields I added: “URL” and “description.” I put the post’s URL and the excerpt (that gets used for social media preview and the like) into those two fields so that later on when I go to add them to the newsletter or share them elsewhere, they’re available and consistent.
Another use that is actually useful is on my meal planning board. I have labels for the primary protein in a meal, which helps me see at a glance if we’ve planned chicken five days in a row.
If your tool supports setting work-in-progress (WIP) limits, I recommend that you do. If nothing else, it forces you to be honest about what you’re actually working on and what you’ve set aside for one reason or another. I’ve found that WIP limits seem to work better on single-user boards, but I haven’t used them much on collaborative boards.
Automate and integrate
The more your board does the boring work for you, the more likely you are to keep it up to date. If it supports auto-archiving cards when they’re complete, set that up. If you can tie it in to the tools you’re already using, do that. (The kanban board available in GitHub issues works pretty well in that regard.) Make the board a hub of your work and you’ll get use out of it. Make the board a chore that you have to go update and you won’t.
The post The best way to setup a kanban board appeared first on Duck Alignment Academy.