I took a photo with my iPhone and ran it through the built-in OCR, which repeatedly confused 0 (zero), O (capital O) and 8, mixed up 1 and l (lowercase L), and rendered many characters as visually similar glyphs from higher Unicode ranges, none of which are valid in Base64. It took me some time to fix it all. The final corrected text is this:
#!/bin/bash
# Congratulations! You found the easter egg! ❤
# おめでとうございます!隠されたサプライズを見つけました!❤
# Define the text to animate
text="♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥"
# Get terminal dimensions
cols=$(tput cols)
lines=$(tput lines)
# Calculate the length of the text
text_length=${#text}
# Hide the cursor
tput civis
# Trap CTRL+C to show the cursor before exiting
trap "tput cnorm; exit" SIGINT
# Set frequency scaling factor
freq=0.2
# Infinite loop for continuous animation
for (( t=0; ; t+=1 )); do
# Extract one character at a time
char="${text:t % text_length:1}"
# Calculate the angle in radians
angle=$(echo "($t) * $freq" | bc -l)
# Calculate the sine of the angle
sine_value=$(echo "s($angle)" | bc -l)
# Calculate x position using the sine value
x=$(echo "($cols / 2) + ($cols / 4) * $sine_value" | bc -l)
x=$(printf "%.0f" "$x")
# Ensure x is within terminal bounds
if (( x < 0 )); then x=0; fi
if (( x >= cols )); then x=$((cols - 1)); fi
# Calculate color gradient between 12 (cyan) and 208 (orange)
color_start=12
color_end=208
color_range=$((color_end - color_start))
color=$((color_start + (color_range * t / lines) % color_range))
# Print the character with 256-color support
echo -ne "\033[38;5;${color}m"$(tput cup $t $x)"$char\033[0m"
# Line feed to move downward
echo ""
done
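Fixing OCR damage like this can be partly automated: Base64 output only ever uses a 65-character alphabet, so any byte outside it is necessarily a misread. A quick filter along these lines (a sketch; ocr.txt is a placeholder filename) flags the offenders:

```shell
# Print every character in the OCR output that cannot appear in
# Base64 text (letters, digits, +, / and the = padding). Anything
# printed here is an OCR misread that needs manual correction.
tr -d 'A-Za-z0-9+/=\n' < ocr.txt
```

Note that tr works byte-wise, so a misread glyph from a higher Unicode range shows up as a few bytes of noise rather than a single character, which is still enough to locate it.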
The decoded script, when executed in my Linux terminal, gives a beautiful animation, similar to this:
After decoding and executing the script, I found other people doing the same:
As the event owner for the Fedora Project’s community presence during the DevConf.IN 2026 conference, I had to ensure that I made it to the event venue as early as possible. The first day began with me getting ready by 0930am Indian Standard Time, since I had returned from meeting the likes of Samyak Jain, Matthew Miller and Karen Miller late the day before. I had to put a pin in my plans to check the inventory of the swag packs that I had delivered at Shounak Dey’s house, as it was quite the struggle to get us Uber rides to the MIT World Peace University campus, Kothrud. While checking with Yashwanth Rathakrishnan, I also ensured that I packed the essential tooling that might come in handy while setting up and tearing down our booth at the DevConf.IN venue.
Collection A (Akashdeep Dhar, CC BY-SA 4.0)
After a couple of Uber cab cancellations, Shounak and I were finally on our way from Chandan Nagar to Kothrud. We seemed to have greatly underestimated the time it would take for us to get there, as not only was it thirty kilometres away, but the morning office-going traffic was also at its peak. We projected that it would take us around an hour to get there, but since the booth activities were to begin from 1100am Indian Standard Time onwards, we had ample time on our hands. I checked in with Yashwanth and Samyak on our organiser chatroom; they had reached the venue by around 1000am Indian Standard Time, joined the queue to pick up their badges and swag packs from the organiser’s desk, and gone ahead to see where our Fedora Project Community Corner was located.
Collection B (Akashdeep Dhar, CC BY-SA 4.0)
To both Shounak’s and my chagrin, we figured out a little too late that the venue had been moved to the newly constructed Vyas Building of the MIT World Peace University campus. We realised it only after our (rather friendly) Uber driver dropped us off, so the two of us had to walk around a kilometre hauling the heavy swag boxes and booth equipment. I did connect with Yashwanth and Samyak so that they could help carry those over from the entrance, but there must have been a disconnect, as Shounak and I had to sweat our way into the Vyas Building anyway. After passing a security check and a flight of wide stairs, our relief came in the form of the ever-patient Avadhoot Dhere, whom we met at the building’s entrance and who helped us carry our load to the Fedora Project Community Corner location.
Collection C (Akashdeep Dhar, CC BY-SA 4.0)
At around 1030am Indian Standard Time, we had about thirty minutes to get our community booth set up with exhibits and swag packs. I delivered a quick briefing to our on-site Fedora Project Indian Crew on what was expected of them: how they had to abide by the DevConf.IN Code of Conduct, when to take breaks after duly informing the rest of us, and how judicious they had to be with the swag distribution. We started drawing quite a crowd of early visitors while we were still busy setting up, a bunch of whom mistook our location for the registration desk. Huge kudos to Rajan Shah, who helped us secure a strategic location that allowed most (if not all) community conversations to come to us first before they seeped into other booths.
Collection D (Akashdeep Dhar, CC BY-SA 4.0)
While Samyak and Shounak got busy setting up the A3-sized posters that I had previously designed, Yashwanth and I worked on organising the swag stickers and swag magnets on our booth desk. I briefly paused to take in just what an amazing job the DevConf.IN 2026 organising committee had done with arranging the booth backdrops and positioning the two desks at a right angle for us. With Samyak’s handspan calculations and Shounak’s accurate pasting, we soon had our four posters, on topics like Matthew’s presentation, Flock To Fedora 2026, DevConf.IN 2026’s Fedora Badge and Fedora Project Community Trivia, ready to go. Since visitors were already lining up before the scheduled start, we had to double-time between addressing the people and setting up our Fedora Linux-powered laptops.
Collection E (Shounak Dey, CC BY-SA 4.0)
By around 1100am Indian Standard Time, we had already addressed approximately 200 visitors at the booth, so you can imagine just how difficult the opening hours were for the on-site Fedora Project Indian Crew. Our economical yet outgoing approach towards handing out our limited swag packs allowed us to pace ourselves while dealing with a huge number of attendees. To add to that, our inclusive demeanour encouraged the visitors to bring their conversations and feedback about Fedora Linux and/or the Fedora Project to us. While the visitor numbers did not hold a candle to what we usually see at events like FOSDEM, I drew on those experiences to ensure that we did not stretch ourselves thin while answering questions and receiving feedback.
Collection F (Akashdeep Dhar, CC BY-SA 4.0)
We were also visited by Dorka Volavkova, who checked in with me about the situation regarding the availability of swag packs at our booth. While I was initially dependent on folks portering in swag packs from various other events due to logistical difficulties, I soon realised that the plan was not reliable. I informed her about my decision to extend the Fedora Mindshare budget request by about USD 150 to ensure that I could produce the swag packs locally with Rajan’s and Devang Parikh’s assistance. This not only allowed for deterministic representation at DevConf.IN 2026, but the same resources could also be used for organising more such Fedora Project events around India or APAC. Dorka was happy to note that we followed an ideal process of getting swag packs for events from local vendors.
Collection G (Shounak Dey, CC BY-SA 4.0)
As the footfall slowed down slightly, Shounak and I took the opportunity to leave for the registration desk to pick up our badges and swag packs. It had completely escaped our minds since we were occupied with setting up the Fedora Project Community Corner, but we did manage to get our DevConf.IN-themed backpacks and ID badges on time. We fielded various questions from visitors wanting to know what the Fedora Project is, how Fedora Linux differs from other distributions, how Fedora Linux shapes the future of enterprise distributions, and what one can do to get started with contributing. Yashwanth suggested rewarding the more interesting conversations with our limited-edition Fedora Project magnet-and-clip combo, and that helped us drive the course of our conversations and shape an open narrative.
Collection H (Shounak Dey, CC BY-SA 4.0)
Matthew soon arrived at around 1200pm Indian Standard Time, and that not only gave us one more person to field interactions with, but the visitors could also avail themselves of the historical context about the Fedora Project that he could offer, having served in the Fedora Project Leader position for over a decade. This was also our opportunity to gather folks around us and promote his talk about “30/35 Fedora Linux Releases in 30/35 Minutes”, which was scheduled for the next day. The younger, college-going crowd looked up to us for working on free and open source software as our full-time employment and wanted to understand how they could begin doing the same. This was a great opportunity for us to showcase our onboarding pathways into the community through the Fedora Join SIG.
Collection I (Shounak Dey, CC BY-SA 4.0)
Among the curious questions that I fielded, a couple that I remember had to do with the supposed incompatibility of Fedora Linux as a student’s base operating system for attempting CNCF certification examinations, and the apparent lack of a long-term release in the Fedora Linux offerings. While I personally did not have any experience with those examinations, I elaborated on how Fedora Linux’s focus on innovation and its fast-moving release cycle might simply be difficult for the examination proctors to keep up with. As for the question about a long-term Fedora Linux release, I recommended CentOS Stream and/or the Red Hat Enterprise Linux Developer Subscription for more serious workloads that require strict quality and uncompromising support.
Collection J (Shounak Dey, CC BY-SA 4.0)
As I was wearing a CentOS Linux-themed tee, I was also able to steer discussions into explaining how the Fedora Project’s decisions around technologies such as the systemd movement, PipeWire inclusion, Wayland defaults, etc., were seen as controversial in the past but ended up becoming industry standards just a couple of years later. I wanted our conversations with the visitors to be a gateway through which they could start exploring the space of open source enterprise distributions and potentially begin contributing to the projects of their interest. Amidst our conversations, the visitors were drawn to the Fedora Project Community Trivia that Samyak helped craft questions for, and the fact that it had an exclusive Fedora Project-themed sipper as an award prize only helped us farm more engagement.
Collection K (Akashdeep Dhar and Shounak Dey, CC BY-SA 4.0)
Shounak stayed busy photographing with his fancy Canon DSLR camera as we guided visitors to scan the QR code on our posters to do things like get themselves the associated Fedora Project event badge, learn more about the annual flagship contributor conference, Flock To Fedora 2026, and, of course, participate in the Fedora Project Community Trivia. A bunch of these visitors had experience using a GNU/Linux distribution, and with us sharing just how Fedora Linux allows them to build solutions for the consumers of today and tomorrow, with its packages being on the leading edge, they were eager to try it out on their personal devices. There were comparison conversations against the likes of Microsoft Windows and Apple macOS, too, on a more superficial level, among the younger folks there.
Collection L (Shounak Dey, CC BY-SA 4.0)
A DevConf.IN volunteer, Rahul Sharma, visited our booth at around 0100pm Indian Standard Time to inform us that catering was being served on the eighth floor of the building. Right around this time, we had a confusing conflict with a fellow speaker or booth participant whom we found taking away Shounak’s swag pack, claiming it was theirs since they had left it there while visiting the restroom. While we could not verify the truthfulness of the claim, I sternly asked them to return the swag pack to Shounak and to check with their friends before pressing the matter further. Since they had not bothered to inform any of the booth staff about leaving their belongings behind, we could not be held responsible for any misplaced items. After all that trouble, it turned out that their friend had had their swag pack all along.
Collection M (Shounak Dey, CC BY-SA 4.0)
After reporting this awkward incident to a nearby DevConf.IN volunteer, Shounak and I decided to head upstairs for lunch. With Matthew leaving, it was down to the four of us handling the booth, so we wanted to ensure that it was staffed by at least a couple of folks at any given time. Since both Samyak and Yashwanth mentioned that they were not feeling hungry enough, the two of us decided to climb sixteen flights of stairs to make it to the eighth floor. It is not that the place did not have elevators, but they were jam-packed, and we would have ended up wasting time waiting for a vacant one, given how popular DevConf.IN 2026 turned out to be in terms of attendee count. We also connected with Avadhoot to check on his experience at the conference so far.
Collection N (Shounak Dey, CC BY-SA 4.0)
We did end up wasting our time, though, since we found a huge queue on the eighth floor at the dining establishment serving volunteers, speakers and staff. Actually, scratch that; it was not entirely a wasteful endeavour, because I got to meet up with Brian Proffitt, whom I had last seen almost a year before at the previous DevConf.IN. After a brief catch-up and a bio break, the two of us headed back to the booth, since waiting in the queue doing absolutely nothing before spending even more time eating would have been wasteful. Instead, Shounak and I decided to walk around the venue to see what the other booths had to offer, which led me to meet a bunch of community friends and fellow employees. While exploring, we ran into Sudhir Dharanendraiah, whom I was meeting after almost a couple of years.
Collection O (Akashdeep Dhar, CC BY-SA 4.0)
He remarked that the booth personnel from the Red Hat India Communities ran out of their swag during the first few hours of the event's commencement and praised how we kept visitors engaged throughout the day. He also said that not combining the Fedora Project community booth with the Red Hat India Communities was the right call, since that allowed us to be crystal clear in our messaging that the Fedora Project and Red Hat do indeed care about India and APAC users and contributors. We were even asked whether we had a regional community presence or meetup cadence, which gave us something to explore and consider from the Fedora Mindshare activity perspective. The two of us finally came back to the booth, letting Samyak and Yashwanth head away for lunch.
Collection P (Akashdeep Dhar, CC BY-SA 4.0)
The other major questions and feedback that came to us about Fedora Linux had to do with what we were planning to do around artificial intelligence. Being a Fedora Mindshare representative to the Fedora Council at the time, and also someone who proposed the creation of an AI-assisted contribution policy, I elaborated on how inclusive our community had recently become towards policy-abiding AI-assisted contributions. I also emphasised that with subprojects and SIGs around AI, ML, and PyTorch, our focus was to establish Fedora Linux as a general-purpose platform of choice for generalists, developers, sysadmins, or enthusiasts to build AI-powered technologies on, rather than have AI-based solutions that no one asked for enabled by default in our primary offerings.
Collection Q (Shounak Dey, CC BY-SA 4.0)
We met Pravin Satpute, who suggested that we have lunch at the cafeteria on the fifth floor, and after letting Samyak and Yashwanth return from their exploration, we headed there with Avadhoot. The dining choice was limited to a Vegetarian Biryani and Cold Coffee, but at around 0230pm Indian Standard Time, that felt like a divine serving. We were able to get ourselves an elevator this time around, and after a brief catch-up with the likes of Saumili Dutta and Kashyap Ekbote, both of whom I met previously during GNOME Asia 2024 in Bengaluru, we returned to the booth. I made it a point to remind the booth crew to stay hydrated now and then, given just how easy it could be for folks to forget about self-preservation after being overwhelmed by almost 600 visitors since we reached the conference.
Collection R (Shounak Dey, CC BY-SA 4.0)
Coincidentally, I met Harshavardhan Sharma from openSUSE, whom we had met at another conference in the past, along with Sahil Tah, a college senior from my alma mater. DevConf.IN 2026 did manage to bring like-minded individuals into one place, regardless of which industry they belonged to, showing that openness and innovation are indeed the way forward when it comes to technology. Rajan made a visit to the Fedora Project Community Corner around that time, and he admired how we kept visitors occupied while providing them with things to learn and swag to collect. We were also soon visited by the likes of Amita Sharma and Sudhir Menon, whom we caught up with after a long time, while offering them the warm embrace that the Fedora Project’s “Friends” foundation is known for.
Collection S (Shounak Dey, CC BY-SA 4.0)
I was also briefly visited by some folks who thought that we had gotten the answer keys wrong in the Fedora Project Community Trivia, but to their surprise, those questions were tricky on purpose. I gave them the context of how I had planned those deceptively difficult questions, while emphasising that we wanted the four winners on each day of the conference to feel special about their victory. Since I had procured eight limited-edition Fedora Project-themed sippers for the winners, the attendees not only had to exhibit their community knowledge by getting full marks but also had to be lucky enough to emerge victorious in a raffle. We asked folks to return to the booth thirty minutes before closing time for the announcements.
Collection T (Akashdeep Dhar, CC BY-SA 4.0)
While Samyak worked on populating the raffle with the high scorers on his laptop, I checked in with Sayak Sarkar, who was visiting DevConf.IN just like the year before. As the Vyas Building gave us very poor cellular reception, it was incredibly challenging to point him in the right direction, especially since, while the venue remained the same as the previous year, the actual location had moved. As I waited for Sayak to turn up at the Fedora Project Community Corner, I could not help but watch from a distance just how well both Shounak and Yashwanth were doing as volunteer contributors in their first experience manning a community booth. At around 0430pm Indian Standard Time, I caught up with Sayak and introduced him to the Indian Crew as a curious crowd started gathering for the results.
Collection U (Akashdeep Dhar, CC BY-SA 4.0)
With the four winners being announced and felicitated by both Samyak and me, the visitors cheered for the winners as well as for the exciting activity. We clicked a few more pictures with Matthew before he went on his way to attend the dinner with the DevConf.IN speakers, organisers, and members. While I was invited to the dinner as well on behalf of the Fedora Project, it made little sense to me not to go out with the hardworking Fedora Project Indian Crew instead. In one of the conversations with Sayak and Shreyank Gupta, I was pleasantly surprised to learn that Shreyank was also a fellow Bengali, since I had always interacted with him in either English or Hindi for the past five years or so. After a few more pictures were taken by Shounak at the booth, we decided to start wrapping things up for the day.
Collection V (Akashdeep Dhar, CC BY-SA 4.0)
We unfortunately had to turn away some visitors who wanted a demonstration of Fedora Workstation at our exhibits, asking them to return the next day. We had more than half of the swag packs left in our inventory, even after giving away a huge number of them to be cross-shared by the Red Hat India Communities booth personnel. After taking an inventory of all the belongings we had with us, we started looking for Uber rides back home. While we left the posters behind on the backdrops, we decided to take everything else with us to avoid misplacing anything. It took a while for the rides to be confirmed, but after Samyak and Yashwanth got theirs, Shounak and I got ours, and we left for our homes around 0530pm Indian Standard Time, since we planned to reconvene for dinner later that day.
Collection W (Akashdeep Dhar, CC BY-SA 4.0)
After a rather uneventful but lengthy Uber drive, Shounak and I made it back to our homes. The evening went quite smoothly, with a pre-booking for dinner made at Wasabi15 under my name and Samyak and Yashwanth arriving early, by 0800pm Indian Standard Time. Using the budget that I had previously requested from my management, we were able to have a great time unwinding from a busy day running operations at the Fedora Project Community Corner. Yashwanth sought advice on how he could take his contributions further, and Shounak shared how he got started in free and open source software contribution. With some great Asian cuisine and even greater conversations, we called it a night at 1000pm Indian Standard Time and went back to our places to prepare for the next day.
Fedora Documentation translations will be put on hold from March 4th, as the Fedora Localization Team has started migrating from pagure.io to the Fedora Forge. From that date, the documentation translation projects (those with ‘fedora-docs-l10n’ in their name) will be gradually locked on the Fedora translation platform. Translation automation for the Docs website will also be stopped in the Fedora infrastructure. Consequently, no translation updates will land in the language versions of the Fedora Documentation.
The migration involves all repositories that support and ensure the availability of Fedora Documentation translations. The migration cannot be performed ‘on the fly’, as the changes to the repositories, the related scripts, and the continuous integration with the translation platform cannot be handled independently of one another. The translation process for the Fedora Documentation is therefore being kept on hold.
We regrettably have to ask the Fedora contributors in our translation community to refrain from translating the Fedora Documentation and to wait until the translation automation for the documentation is resumed.
The progress of migration can be followed in the localization tracker as issue #52.
In Rediscovering Reading (Without the Social Media Part) I wrote about stepping away from scrolling and building a slower, more deliberate reading habit. Part of that shift was making my reading log public without tying it to a dedicated social network.
The mechanics behind that were simple but fussy: keep a YAML file up to date, copy and paste links from Open Library, remember to grab cover images, and wire everything into Jekyll templates for the reading page and sidebar. None of it was hard, but it was just annoying enough that I knew future‑me would start skipping updates.
I built Jekyll Reads to make that workflow tolerable.
What Jekyll Reads actually does
Jekyll Reads is a small collection of pieces designed around a single idea: keep all the book data in one _data/reading.yml file and let everything else be presentation.
The core pieces are:
A shared Node.js library that talks to Open Library, picks a reasonable match, and produces a standard YAML snippet for a book
A command‑line tool that lets you search for a book and print the YAML to stdout, with options for indentation and auto‑selecting results
A Vim integration that shells out to the CLI and drops the YAML directly into your buffer at the right indentation level
A Visual Studio Code extension that does the same thing from inside the editor, with a proper search UI and update checks for the extension itself
All of this is intentionally boring: no external Node dependencies, just the built‑in modules and a bit of glue. The point is to make it slightly easier to keep the reading list current than to let it drift.
How it shows up on this site
On this site, the source of truth is _data/reading.yml. Entries that are still in progress, finished, or abandoned are all represented there with the same structure. The YAML includes things like start and finish dates, a link to more information (usually Open Library), an optional cover image, and a free‑form comment.
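Based on the fields described above, an entry might look roughly like the following. The exact key names and the Open Library identifier here are illustrative guesses, not the schema from the Jekyll Reads README:

```yaml
# Hypothetical entry; consult the repository README for the real keys.
- title: "Example Book"
  author: "Jane Doe"
  started: 2025-01-04
  finished: 2025-02-11          # omitted while still in progress
  link: https://openlibrary.org/works/OL12345W
  cover: /assets/covers/example-book.jpg
  comment: "Short free-form note about the book."
```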
That data feeds two places:
The dedicated reading page, which separates currently‑reading, finished, and abandoned books and shows covers, dates, and comments
A small sidebar block on the home page that surfaces what I am currently reading, so the log is visible without needing a whole post for every book
Jekyll Reads does not try to be a general bookshelf app. It just reflects what I am already doing: writing short notes in YAML and publishing them along with the rest of the site.
Design constraints and trade‑offs
I made a few deliberate choices that might look odd if you are used to larger toolchains:
No external Node dependencies. The library and CLI only use built‑in modules like https and readline. That keeps installation simple and makes it easy to run in constrained environments.
Open Library as the primary data source. It provides book metadata, cover images, and stable URLs without requiring another account or scraping.
Plain YAML as the storage format. A static _data file is easy to version, review, and back up. It also plays nicely with Jekyll’s existing data pipeline.
Multiple small tools instead of one big one. The CLI, Vim integration, and VS Code extension all sit on top of the same library, so they stay in sync without each re‑implementing the logic.
If any of that stops being true in the future, I can replace or extend the pieces without touching the core data file.
If you want to use it
The repository README walks through how to set up your own _data/reading.yml, wire up a reading page and sidebar, and use the CLI or editor integrations. It is written so that you can follow it even if you are not using the same Jekyll theme I am.
The code is MIT‑licensed and shipped under Electric Pliers LLC. If you want a lightweight way to publish a reading log without standing up a whole social network, you might find it useful.
A lot of hardware runs non-free software. Sometimes that non-free software is in ROM. Sometimes it’s in flash. Sometimes it’s not stored on the device at all; it’s pushed into it at runtime by another piece of hardware or by the operating system. We typically refer to this software as “firmware” to differentiate it from the software run on the CPU after the OS has started[1], but a lot of it (and, these days, probably most of it) is software written in C or some other systems programming language and targeting Arm or RISC-V or maybe MIPS and even sometimes x86[2]. There’s no real distinction between it and any other bit of software you run, except that it’s generally not run within the context of the OS[3]. Anyway. It’s code. I’m going to simplify things here and stop using the words “software” or “firmware” and just say “code” instead, because that way we don’t need to worry about semantics.
A fundamental problem for free software enthusiasts is that almost all of the code we’re talking about here is non-free. In some cases, it’s cryptographically signed in a way that makes it difficult or impossible to replace it with free code. In some cases it’s even encrypted, such that even examining the code is impossible. But because it’s code, sometimes the vendor responsible for it will provide updates, and now you get to choose whether or not to apply those updates.
I’m now going to present some things to consider. These are not in any particular order and are not intended to form any sort of argument in themselves, but are representative of the opinions you will get from various people and I would like you to read these, think about them, and come to your own set of opinions before I tell you what my opinion is.
THINGS TO CONSIDER
Does this blob do what it claims to do? Does it suddenly introduce functionality you don’t want? Does it introduce security flaws? Does it introduce deliberate backdoors? Does it make your life better or worse?
You’re almost certainly being provided with a blob of compiled code, with no source code available. You can’t just diff the source files, satisfy yourself that they’re fine, and then install them. To be fair, even though you (as someone reading this) are probably more capable of doing that than the average human, you’re likely not doing it even if you are capable, because you’re also likely installing kernel upgrades that contain vast quantities of code beyond your ability to understand[4]. We don’t rely on our personal ability; we rely on the ability of those around us to do that validation, and we rely on an existing (possibly transitive) trust relationship with those involved. You don’t know the people who created this blob; you likely don’t know people who know the people who created this blob; and these people probably don’t have an online presence that gives you more insight. Why should you trust them?
If it’s in ROM and it turns out to be hostile then nobody can fix it ever
The people creating these blobs largely work for the same company that built the hardware in the first place. When they built that hardware they could have backdoored it in any number of ways. And if the hardware has a built-in copy of the code it runs, why do you trust that that copy isn’t backdoored? Maybe it isn’t and updates would introduce a backdoor, but in that case if you buy new hardware that runs new code aren’t you putting yourself at the same risk?
Designing hardware where you’re able to provide updated code and nobody else can is just a dick move[5]. We shouldn’t encourage vendors who do that.
Humans are bad at writing code, and code running on ancillary hardware is no exception. It contains bugs. These bugs are sometimes very bad. This paper describes a set of vulnerabilities identified in code running on SSDs that made it possible to bypass the drives’ encryption. The SSD vendors released updates that fixed these issues. If the code couldn’t be replaced, then anyone relying on those security features would need to replace the hardware.
Even if blobs are signed and can’t easily be replaced, the ones that aren’t encrypted can still be examined. The SSD vulnerabilities above were identifiable because researchers were able to reverse engineer the updates. It can be more annoying to audit binary code than source code, but it’s still possible.
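For the unencrypted case, even standard Unix tools give a first pass over a blob. This is only a sketch (firmware.bin is a placeholder, and a real audit means disassembly and emulation, not string hunting):

```shell
# Printable runs of 8+ characters often expose version strings, build
# paths and debug messages; this grep is a stand-in for strings(1).
grep -aoE '[[:print:]]{8,}' firmware.bin
# A raw hex dump of the blob, useful for spotting magic numbers and
# section boundaries near the start.
od -A x -t x1z firmware.bin | head
```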
Vulnerabilities in code running on other hardware can still compromise the OS. If someone can compromise the code running on your wifi card then if you don’t have a strong IOMMU setup they’re going to be able to overwrite your running OS.
Replacing one non-free blob with another non-free blob increases the total number of non-free blobs involved in the whole system, but doesn’t increase the number that are actually executing at any point in time.
Ok we’re done with the things to consider. Please spend a few seconds thinking about what the tradeoffs are here and what your feelings are. Proceed when ready.
I trust my CPU vendor. I don’t trust my CPU vendor because I want to, I trust my CPU vendor because I have no choice. I don’t think it’s likely that my CPU vendor has designed a CPU that identifies when I’m generating cryptographic keys and biases the RNG output so my keys are significantly weaker than they look, but it’s not literally impossible. I generate keys on it anyway, because what choice do I have? At some point I will buy a new laptop because Electron will no longer fit in 32GB of RAM and I will have to make the same affirmation of trust, because the alternative is that I just don’t have a computer. And in any case, I will be communicating with other people who generated their keys on CPUs I have no control over, and I will also be relying on them to be trustworthy. If I refuse to trust my CPU then I don’t get to computer, and if I don’t get to computer then I will be sad. I suspect I’m not alone here.
Why would I install a code update on my CPU when my CPU’s job is to run my code in the first place? Because it turns out that CPUs are complicated and messy and they have their own bugs, and those bugs may be functional (for example, some performance counter functionality was broken on Sandy Bridge at release, and was then fixed with a microcode blob update) and if you update it your hardware works better. Or it might be that you’re running a CPU with speculative execution bugs and there’s a microcode update that provides a mitigation for that, even if your CPU is slower when you enable it - but at least now you can run virtual machines without code in those virtual machines being able to reach outside the hypervisor boundary and extract secrets from other contexts. When it’s put that way, why would I not install the update?
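As an aside, on an x86 Linux system you can see which microcode revision your CPU is currently running; a minimal check (x86-specific, relying on the `microcode` field that the kernel exposes in /proc/cpuinfo):

```shell
# Print the microcode revision reported for the first CPU.
# The same field is repeated once per logical CPU.
grep -m1 microcode /proc/cpuinfo
```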
And the straightforward answer is that theoretically it could include new code that doesn’t act in my interests, either deliberately or not. And, yes, this is theoretically possible. Of course, if you don’t trust your CPU vendor, why are you buying CPUs from them? But maybe they’ve been corrupted since (in which case don’t buy any new CPUs from them either), or maybe they’ve just introduced a new vulnerability by accident, and you’re also in a position to determine whether the alleged security improvements matter to you at all. Do you care about speculative execution attacks if all software running on your system is trustworthy? Probably not! Do you need to update a blob that fixes something you don’t care about and which might introduce some sort of vulnerability? Seems like no!
But there’s a difference between a recommendation for a fully informed device owner who has a full understanding of threats, and a recommendation for an average user who just wants their computer to work and to not be ransomwared. A code update on a wifi card may introduce a backdoor, or it may fix the ability for someone to compromise your machine with a hostile access point. Most people are just not going to be in a position to figure out which is more likely, and there’s no single answer that’s correct for everyone. What we do know is that where vulnerabilities in this sort of code have been discovered, updates have tended to fix them - but nobody has flagged such an update as a real-world vector for system compromise.
My personal opinion? You should make your own mind up, but also you shouldn’t impose that choice on others, because your threat model is not necessarily their threat model. Code updates are a reasonable default, but they shouldn’t be unilaterally imposed, and nor should they be blocked outright. And the best way to shift the balance of power away from vendors who insist on distributing non-free blobs is to demonstrate the benefits gained from them being free - a vendor who ships free code on their system enables their customers to improve their code and enable new functionality and make their hardware more attractive.
It’s impossible to say with absolute certainty that your security will be improved by installing code blobs. It’s also impossible to say with absolute certainty that it won’t. So far the evidence tends to support the idea that most updates that claim to fix security issues do, and there’s not a lot of evidence to support the idea that updates add new backdoors. Overall I’d say that providing the updates is likely the right default for most users - and that it should never be strongly enforced, because people should be allowed to define their own security model, and whatever set of threats I’m worried about, someone else may have a good reason to focus on different ones.
Code that runs on the CPU before the OS is still usually described as firmware - UEFI is firmware even though it’s executing on the CPU, which should give a strong indication that the difference between “firmware” and “software” is largely arbitrary ↩︎
Because UEFI makes everything more complicated, UEFI makes this more complicated. Triggering a UEFI runtime service involves your OS jumping into firmware code at runtime, in the same context as the OS kernel. Sometimes this will trigger a jump into System Management Mode, but other times it won’t, and it’s just your kernel executing code that got dumped into RAM when your system booted. ↩︎
I don’t understand most of the diff between one kernel version and the next, and I don’t have time to read all of it either. ↩︎
There’s a bunch of reasons to do this, the most reasonable of which is probably not wanting customers to replace the code and break their hardware and deal with the support overhead of that, but not being able to replace code running on hardware I own is always going to be an affront to me. ↩︎
The year is rolling along, and here we are at the end of Feb.
Lots of small day to day items
There were a lot of small day to day investigations and incoming requests,
along with a pretty large number of pull requests for our ansible repo.
Since we are in Beta freeze some of them will have to wait, but some we can test
out in staging now. It's great to see people submitting fixes and enhancements.
There were also some small fun to debug issues this week, including:
The https://whatcanidoforfedora.org/ site was sometimes alerting that
its SSL cert was expired. It turns out this was caused by that domain
having old IPs for two proxies that had moved datacenters. So, sometimes
it hit those, timed out, and the SSL check just assumed the cert was bad.
So, it was DNS. :)
The fedorapeople.org web server started being very slow to respond.
It turns out scrapers were hitting the cgit interface there and
downloading xz snapshots of every commit, which forced the server to
compress things over and over again. So, for now,
I just disabled those links and increased resources on the webserver.
Scrapers continue to be the gift that keeps on giving.
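For reference, cgit’s per-commit snapshot links are controlled by the `snapshots` option in cgitrc; a sketch of turning them off (the actual change made on the Fedora server may have been different):

```
# /etc/cgitrc
# An empty snapshots list removes the tar.xz/zip download links,
# so crawlers can no longer request a freshly compressed archive
# for every single commit.
snapshots=
```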
Secure boot signing work
Much of my time this week has been spent working on our new secure boot
signing workflow. This is really, really overdue and something I was hoping
to finish by the middle of last year, but things kept coming up and it kept getting pushed back.
The new setup leverages our existing signing infrastructure (sigul) so
there's no need for special build hardware anymore. It also removes some
constraints in the existing setup allowing us to do something we have wanted
for a long time, namely sign aarch64 boot loader artifacts for secure boot.
Kudos to Jeremy Cline for all his work on the code to make this possible.
This uses siguldry-bridge, a Rust-based server, to talk to sigul, and
hopefully before too long we can replace the sigul server side with the
new Rust-based server too.
I got everything deployed and I am now able to sign things, but in testing
on my aarch64 laptop there's still an issue with grub2 that needs to
be sorted out. Hopefully it's not too difficult to track down,
and we can move to this new setup after beta freeze once and for all.
I have released version 0.4.5 of patchutils and also built it in Fedora rawhide. This is a stability-focused update fixing compatibility issues and bugs, some of which had been introduced in 0.4.4.
Compatibility Fix: Git Extended Diffs
Version 0.4.4 added support for Git’s extended diff format to the filterdiff suite. Git diffs without content hunks (such as renames, copies, mode-only changes and binary files) were now included in the output. This broke compatibility with 0.4.3.
For 0.4.5 this functionality is now gated with a --git-extended-diffs=include|exclude parameter. The default for 0.4.x is to exclude files in Git extended diffs with no content. There were also some fixes relating to file numbering for these types of diffs.
Note: in 0.5.x this default will change to include.
Status Indicators for grepdiff
Previously grepdiff --status showed ! for all matching files, but now it correctly reports them as additions (+), removals (-) or modifications (!).
As always, bug reports and feature requests are welcome on GitHub. Thanks to everyone who reported issues and helped to test fixes!
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering and Quality. This team is also moving forward some initiatives inside the Fedora Project.
Week: 20 Feb – 27 Feb 2026
Fedora Infrastructure
This team is taking care of day to day business regarding Fedora Infrastructure. It’s responsible for services running in Fedora infrastructure. Ticket tracker
Resolved errors on dist-git that were spamming sysadmin-main mailbox – ticket
CentOS Infra including CentOS CI
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure. It’s responsible for services running in CentOS Infrastructure and CentOS Stream. CentOS ticket tracker CentOS Stream ticket tracker
Fedora Releng
This team is taking care of day to day business regarding Fedora releases. It’s responsible for releases, the retirement process of packages, and package builds. Ticket tracker
Release engineering is currently in Beta Freeze.
Releng has provided the first beta release candidate after the QE request.
Request for creating detached signature has been handled by Samyak for ignition 2.26.0 release.
RISC-V
This is the summary of the work done regarding the RISC-V architecture in Fedora.
Continued to go through the list of packages to investigate (failing to build, requiring patching, and more).
Got a PR merged for ‘libkrunfw’ (a low-level library for process isolation) and built it in RISC-V Koji; Marcin (“hrw”) also got a couple more merged.
Work is progressing well on Fedora RISC-V unified kernels (Jason Montleon is doing most of the heavy-lifting here). Currently hosted in Copr.
AI
This is the summary of the work done regarding AI in Fedora.
The Fedora Design Team has switched to using Google Meet for our weekly meetings to utilise Gemini for note-taking. Discussion took place on this ticket and on our weekly call. Updated Fedocal entry can be found here.
As a reminder, the Fedora-Fr infrastructure runs on a VPS, or rather an Instance, as Scaleway calls it. This (virtual) machine matches our current needs and is generously provided to us by our partner Scaleway. On the security side, we use the native firewall offered by Scaleway, called Security Groups. We have not enabled the WAF, because I […]
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.
RPMs of PHP version 8.5.4RC1 are available
as base packages in the remi-modular-test repository for Fedora 42-44 and Enterprise Linux ≥ 8
as SCL in remi-test repository
RPMs of PHP version 8.4.19RC1 are available
as base packages in the remi-modular-test repository for Fedora 42-44 and Enterprise Linux ≥ 8
as SCL in remi-test repository
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.
The data sheet of my new AI focused mini workstation from HP mentions Ubuntu 24.04 as the supported Linux distribution. I have tried that, but I could not get the installer to run. However, 25.10 installed without any problems, even from an openSUSE branded USB stick :-)
Only the chameleon works with this machine :-)
I must admit that I’m not an Ubuntu fan, but I installed it anyway, as Ubuntu is the “official” Linux distro for this machine. GNOME is heavily modified compared to other distros. For GUI apps, the focus seems to have shifted from distro packages to snaps.
For now I did not test the in-hardware AI support, just tried to collect some first impressions. I ended up installing a few 3D games and playing :-) Having AMD graphics has the advantage that everything works out of the box. There is no need for binary-only drivers, extra repositories, praying to the binary gods, etc. It just works. Fully open source.
SuperTuxKart :-)
This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.
Version 4.11.0 of syslog-ng is now available. The main attraction is the brand new Kafka source, but there are many other smaller features and improvements, as well.
The largest new feature is the Kafka source, which allows you to collect log messages from Kafka streams. For many years, syslog-ng had a Kafka destination, allowing you to send log messages to a Kafka-based data pipeline. The Kafka source enables syslog-ng to collect log messages from Kafka, parse and filter log messages, and route them to various destinations. You can learn more about the Kafka source from the syslog-ng documentation at https://syslog-ng.github.io/admin-guide/060_Sources/038_Kafka/README .
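As a sketch only, a Kafka source might be wired up like this (the option names `bootstrap-servers()`, `topic()` and `group-id()` are assumptions modelled on the long-standing Kafka destination; check the documentation linked above for the real syntax):

```
source s_kafka {
  kafka(
    bootstrap-servers("localhost:9092")  # broker(s) to consume from (assumed option name)
    topic("app-logs")                    # Kafka topic carrying the log messages
    group-id("syslog-ng")                # consumer group (assumed option name)
  );
};

log {
  source(s_kafka);
  destination(d_file);  # parse, filter and route like any other source
};
```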
As usual, while we make every effort to make all features work everywhere, it is not always technically possible. For example, compilers and/or dependencies are too old to support gRPC-based modules in older RHEL, SUSE and Debian releases.
What is next?
As usual: feedback is very welcome. If you have any problems with the syslog-ng 4.11.0 release, open an issue on GitHub at https://github.com/syslog-ng/syslog-ng/issues Your report helps us to make syslog-ng better. Of course, we are also very happy about any positive feedback :-)
In chapter 3 of Program Management for Open Source Projects, I talk about the importance of trust. “Open source communities run on trust,” I wrote. I go on to talk about building trust by establishing relationships and credibility. This is fine when you’re coming into a defined role, perhaps if you got hired to fill a sponsored role in a community or if a project leader has asked you to apply your skills to the project.
Most people, of course, don’t come directly into a defined role. They start by making a small contribution: filing a bug, answering a question on a mailing list or forum, submitting a patch, and so on. Sometimes, they don’t even plan to stick around. They’re making one contribution and moving on. The kind of trust-building based on relationships doesn’t work as well in that case. But you still need trust to evaluate a contributor (and thus their contribution).
This issue has only grown more relevant as large language models become widespread. If the person who submitted a pull request didn’t write the code, do they understand it? Can they answer maintainers’ questions or address feedback? Is the code even worth a maintainer’s time to review or is it plausible-looking garbage?
In late January, GitHub product manager Camilla Moraes started a conversation seeking ideas for giving maintainers tools to address low-quality contributions. The conversation produced many good (and also some bad) ideas and highlighted the difficulty of a universal solution. Although the word “trust” only appears six times (as of this writing) in the whole thread, the conversation is basically a discussion of trust. “How can we slow the rate of un-trusted contribution without making life harder for the trusted contributors?” is a fair summary of the underlying issue.
Defining trust
Charles H. Green developed an equation of trustworthiness that includes credibility, reliability, and intimacy. Although it’s a smidge hokey, it’s fundamentally a reasonable representation of trust, so we can roll with it.
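Green’s trust equation is usually written with those three factors in the numerator and self-orientation (how much a person appears to act in their own interest) in the denominator:

```latex
T = \frac{C + R + I}{S}
% T: trustworthiness, C: credibility, R: reliability,
% I: intimacy,        S: self-orientation
```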
Importantly, trust is not a static characteristic of a person. Instead, it’s a dynamic measure that changes based on context and relationship. My coworkers (hopefully) think that I am competent in my work, deliver what I say I will, and am a fun guy to be around. There’s a high degree of trust because I rate highly in credibility, reliability, and intimacy. When I join a new project, I am the same person, but I am relatively or entirely unknown. The other people in the community need to interact with me for a period of time before they can develop trust in me.
Even if I’ve known someone for a long time, their trust in me may change when the context changes. The intimacy and reliability may be the same, but they don’t necessarily know if I’m credible in the new context. Just because I have experience in other languages, that doesn’t mean my Rust code is good. Someone who thinks I write competent Python (we’re pretending here!) would be well served reviewing a Rust contribution very closely, as I’ve written essentially none.
Trust in your community
As with many concepts, we often think about trust in open source communities without explicitly thinking about it. But most projects have some concept of a contributor ladder, where people get increased privileges and responsibilities based on the trust they’ve earned. It’s more important than ever to give deliberate thought to how trust is evaluated in your community.
This affects not only community management concerns but also security. Many projects have automated CI jobs that run on pull requests. These check for code style, run unit and integration tests, and so on. In the best case, bad code (intentional or accidental) merely wastes resources (including maintainer time). In the worst case, bad code can compromise the project and publish malware. For this reason, projects often require maintainers or other trusted users to grant permission for automated tests to run when the submitter is untrusted. Unfortunately, this still places a time burden on maintainers, which is a precious resource.
I suggest that projects explicitly consider what levels of trust are required to access certain resources (CI jobs, project emails, etc.) and how that trust will be measured. The Discourse trust levels are an excellent starting point for building your project’s trust model. The specifics are designed for forum interaction, but you can extrapolate to your project’s activities.
The path to build trust has to be easy, or else you’ll drive away new contributors and your community will wither away over time. Trust levels are a safety measure, not a gatekeeping measure.
Tools to help
I am aware of a few tools to help with trust evaluation. I share them here as a reference, but I have not used them and do not endorse or denounce them. contributor-report is a GitHub Action that gives maintainers a report on a new contributor’s activity levels. This helps maintainers evaluate newcomers on the metrics that make sense for their specific project. vouch is a tool for marking users as vouched (or denounced) and taking action based on that. It can be used to provide a web of trust across projects and communities.
Last week I introduced you to my new toy at home: an AI focused mini workstation from HP. It arrived with Windows pre-installed, but of course I also wanted to have Linux on the box.
Documentation mentions that I have to disable secure boot and make a few more changes before installing Linux. I did all the suggested BIOS changes before installing Linux.
The data sheet mentions Ubuntu 24.04 as the supported Linux distribution. I have tried that, but I could not get the installer to run. Along the way I realized that USB boot support is very picky on this box. My old USB sticks, which work perfectly in my laptop and old desktop, do not work at all. Also, changing the USB stick requires turning the machine off and on; a simple reboot is not enough. Finally I found a USB-C stick, and that almost worked with Ubuntu 24.04. It booted, but the installer crashed.
The USB sticks I tried
As I have been a S.u.S.E. / openSUSE user for the past 30 years, I did not mind this failure much. I downloaded the openSUSE Tumbleweed installer, and it worked like a charm. Best of all, unlike openSUSE Leap 16.0, Tumbleweed still has the good old YaST installer I used for decades. Installation was quick, easy and rock solid.
Surprise arrived when I rebooted the machine. Windows was not available in the boot menu. As it turned out, Tumbleweed used a new flavor of GRUB2 by default, grub2-bls, which does not seem to boot other operating systems. There is no supported way to switch back to grub2-efi, so I reinstalled openSUSE. Luckily it’s an easy job, and I did not have any data on the machine yet. So, it was just a few mouse clicks.
openSUSE is my daily driver, so I did not spend much time exploring the system. It seems to work just fine. Installing a few games and checking the in-hardware AI support will come once I have finished installing all the operating systems on the machine. Next to Windows, I plan to install openSUSE, Fedora and Ubuntu on the Linux side, and FreeBSD as well.
This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.
The repository is available for x86_64 (Intel/AMD) and aarch64 (ARM).
Repositories configuration:
On Fedora, the standard repositories are enough; on Enterprise Linux (RHEL, CentOS) the Extra Packages for Enterprise Linux (EPEL) and Code Ready Builder (CRB) repositories must be configured.
If these extensions are not mandatory, you can remove them before the upgrade; otherwise, you must be patient.
Warning: some extensions are still under development, but it seems useful to provide them to upgrade more people and allow users to give feedback to the authors.
More information
If you prefer to install PHP 8.5 beside the default PHP version, this can be achieved using the php85 prefixed packages, see the PHP 8.5 as Software Collection post.
This is also documented as the community way to install PHP 8.5 on the official PHP web site.
The packages available in the repository were used as sources for Fedora 44.
By providing a full-featured PHP stack, with about 150 available extensions, 11 PHP versions, as base and SCL packages, for Fedora and Enterprise Linux, and with 300,000 downloads per day, the remi repository has become, over the last 21 years, a reference for PHP users on RPM-based distributions, maintained by an active contributor to the projects (Fedora, PHP, PECL...).
These packages are weak dependencies of Redis, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
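If you prefer not to pull in weak dependencies like these, dnf’s `install_weak_deps` option controls the behaviour; it can be set globally in the configuration file or per command with `--setopt=install_weak_deps=False`:

```
# /etc/dnf/dnf.conf
[main]
# Skip Recommends/Supplements, so weak dependencies such as
# these Redis modules are no longer installed automatically.
install_weak_deps=False
```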
The modules are automatically loaded after installation and service (re)start.
The modules are not available for Enterprise Linux 8.
3. Future
Valkey also provides a similar set of modules, requiring some packaging changes already proposed for Fedora official repository.
Redis may be proposed for unretirement and brought back into the official Fedora repository, either by me if I find enough motivation and energy, or by someone else.
I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, module packages are very far from Packaging Guidelines, so obviously not ready for a review.
Loadouts for Genshin Impact v0.1.14 is OUT NOW with the addition of support for recently released artifacts like Aubade of the Morningstar and Moon and A Day Carved From Rising Winds, recently released characters like Columbina, Zibai and Illuga, and recently released weapons like Nocturne's Curtain Call and Lightbearing Moonshard from Genshin Impact Luna IV or v6.3 Phase 2. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.
Automated dependency updates for GI Loadouts by @renovate[bot] in #486
Add the recently added character Columbina to the GI Loadouts roster by @sdglitched in #494
Add the recently added weapon Nocturne's Curtain Call to the GI Loadouts roster by @sdglitched in #495
Add the recently added artifact set A Day Carved From Rising Winds to the GI Loadouts roster by @sdglitched in #497
Add the recently added artifact set Aubade of Morningstar and Moon to the GI Loadouts roster by @sdglitched in #496
Update dependency pillow to v12.1.1 [SECURITY] by @renovate[bot] in #502
Add the recently added character Zibai to the GI Loadouts roster by @sdglitched in #501
Add the recently added character Illuga to the GI Loadouts roster by @sdglitched in #503
Add the recently added weapon Lightbearing Moonshard to the GI Loadouts roster by @sdglitched in #504
Remove the conditional substat calculation of weapon Harbinger of Dawn by @sdglitched in #505
Stage the release v0.1.13 for Genshin Impact Luna IV (v6.3 Phase 2) by @sdglitched in #506
Automated dependency updates for GI Loadouts by @renovate[bot] in #500
Artifacts
Two artifacts have debuted in this version release.
Aubade of the Morningstar and Moon
Bonus for Two Piece Equipment: Increases Elemental Mastery by 80.
Bonus for Four Piece Equipment: When the equipping character is off-field, Lunar Reaction DMG is increased by 20%. When the party's Moonsign Level is at least Ascendant Gleam, Lunar Reaction DMG will be further increased by 40%. This effect will disappear after the equipping character is active for 3s.
Aubade of the Morningstar and Moon - Workspace and Results
A Day Carved From Rising Winds
Bonus for Two Piece Equipment: ATK +18%.
Bonus for Four Piece Equipment: After a Normal Attack, Charged Attack, Elemental Skill or Elemental Burst hits an opponent, gain the Blessing of Pastoral Winds effect for 6s: ATK is increased by 25%. If the equipping character has completed Witch's Homework, Blessing of Pastoral Winds will be upgraded to Resolve of Pastoral Winds, which also increases the CRIT Rate of the equipping character by an additional 20%. This effect can be triggered even when the character is off-field.
A Day Carved From Rising Winds - Workspace and Results
Characters
Three characters have debuted in this version release.
Columbina
Columbina is a catalyst-wielding Hydro character of five-star quality.
Columbina - Workspace and Results
Zibai
Zibai is a sword-wielding Geo character of five-star quality.
Zibai - Workspace and Results
Illuga
Illuga is a polearm-wielding Geo character of five-star quality.
Illuga - Workspace and Results
Weapons
Two weapons have debuted in this version release.
Nocturne's Curtain Call
Ballad of the Crossroads - Scales on Crit DMG.
Lightbearing Moonshard
Legacy of Lang-Gan - Scales on Crit DMG.
Appeal
While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.
Disclaimer
With an extensive suite of over 1550 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.
The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.
All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.
I wanted a way to stay inside Visual Studio Code, use Copilot Chat as the “orchestrator,” and still mix and match models for different parts of the work. Plan a change with one of the slower, more capable models, but let a smaller, faster model handle mechanical refactors. Edit a blog post with one model, but hand Jekyll plumbing or JSON/YAML munging to another. The friction was that the built-in Copilot Chat extension only lets subagents run on the same model as the parent conversation, while the Copilot CLI happily lets you pick any available model per run. Phone a Friend bolts that flexibility onto Copilot Chat, so I can keep the full VS Code experience - including gutter diffs - while dispatching subtasks to whatever model is best for the job.
The Problem
When you use GitHub Copilot Chat in VS Code, every subagent it spawns runs on the same model as the parent conversation. If you’re on Claude Opus 4.6, all subagents are Claude Opus 4.6. Sometimes you want a different model for a subtask - a faster one for simple work, or a different vendor for a second opinion.
GitHub Copilot CLI supports --model to pick any available model, but using it directly doesn’t help - changes made by the CLI don’t produce VS Code’s gutter indicators (the green/red diff decorations in the editor margin). You get the work done but lose the visual feedback that makes code review comfortable.
Phone a Friend is an MCP server that solves both problems. It dispatches work to Copilot CLI with the model of choice, captures a unified diff of the changes, and returns it to the calling agent - which applies it through VS Code’s edit tools. Gutter indicators show up as if the changes were made natively.
How It Works
1. Copilot Chat calls the phone_a_friend MCP tool with a prompt, model name, and working directory
2. The MCP server creates an isolated git worktree from HEAD
3. It launches Copilot CLI in non-interactive mode in that worktree with the requested model
4. The subagent does its work and writes its response to a “message-in-a-bottle” file
5. The MCP server reads the response, captures a git diff, and cleans up the worktree
6. The MCP server then returns the response text and unified diff to the calling agent
7. The calling agent applies the diff using VS Code’s edit tools - gutter indicators appear
The “message in a bottle” pattern is worth explaining. Copilot CLI’s stdout mixes the agent’s response with progress output and is unreliable to parse. Rather than fighting noisy output, the tool instructs the subagent to write its final response to a file. The server reads the file. Clean separation.
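The worktree-and-diff half of that flow can be approximated by hand. This sketch elides the actual Copilot CLI invocation and simulates the subagent’s edit with a plain `echo`; the throwaway repo and paths are purely illustrative:

```shell
set -e
# Throwaway repo standing in for your real project.
repo=$(mktemp -d); cd "$repo"
git init -q
echo "hello" > file.txt
git add file.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm "add file"

# 1. Create an isolated worktree from HEAD; the main tree stays untouched.
wt="$repo/.phone-a-friend-wt"
git worktree add -q "$wt" HEAD

# 2. The subagent would edit files here (via the Copilot CLI with the
#    requested model); we fake its work with a one-line change.
echo "edited by subagent" >> "$wt/file.txt"

# 3. Capture a unified diff of whatever the subagent changed.
diff=$(git -C "$wt" diff)

# 4. Clean up the worktree; only the diff survives.
git worktree remove --force "$wt"
printf '%s\n' "$diff"
```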
Safety
Worktree isolation means your working tree is never modified directly. Push protection blocks git push at the tool level. Worktrees are cleaned up after every invocation, even on errors.
Setup
You install Phone a Friend like any other MCP server in VS Code: add the @bexelbie/phone-a-friend npm package through the MCP: Add Server... command, or point VS Code at it via your MCP configuration. The GitHub README details the exact JSON and prerequisites (Node.js, Copilot CLI, Git).
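For illustration only, a typical VS Code MCP configuration entry might look like the following; the server name, the `npx` launcher and the argument list here are assumptions, and the README has the authoritative JSON:

```
{
  "servers": {
    "phone-a-friend": {
      "command": "npx",
      "args": ["-y", "@bexelbie/phone-a-friend"]
    }
  }
}
```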
Usage
Once configured, you stay in Copilot Chat and describe the outcome you want; the calling agent decides when to route a subtask through Phone a Friend. The tool surface includes discovery hints, so natural phrasing like “get a second opinion from another model” is usually enough to trigger it. Any model that Copilot CLI exposes is available.
Known Limitations
A few trade-offs worth knowing:
Context cost. The unified diff lands in the calling agent’s context window. Large diffs eat context. I’ve got an issue open exploring ideas for improving this.
Message-in-a-bottle compliance. Most models follow the instruction to write their final response into the message-in-a-bottle file, but some may occasionally ignore it. When that happens, the calling agent still gets the diff of any file changes but not the response text.
Since integrating this into my Copilot setup, the biggest shift is that I no longer have to choose between “the model I want to think with” and “the model I want to do the work”, and I’ve eliminated a bunch of copy/paste from manually emulating this. I keep the main conversation with a larger, more capable model for planning and review, and routinely:
send quick, mechanical refactors to a smaller, faster model
hand Jekyll front matter, Liquid, and config tweaks to a model that’s better at markup and templating
ask a different vendor’s model for a second opinion on changes or ideas, especially where that model may be better at the task
Because everything still lands back in the same VS Code buffer with normal gutter diffs, it feels like one coherent tool instead of a handful of loosely-connected ones.
The project also had an unexpected dynamic in the development process. Building an MCP server that mimics a capability already available to the model created a strange feedback loop. I could collaborate on the implementation with Opus, and then turn around and interview it as a subject matter expert on how it uses that very same capability. It was a weird feeling to use the model as both a partner in writing the code and a primary source for understanding the user requirements.
Fedora is heading back to sunny Southern California! As we gear up for SCaLE 23x, we are thrilled to announce a special edition of Fedora Hatch, taking place on Friday, March 6 as an embedded track at SCaLE.
Whether you’re a long-time contributor, a curious user, or someone looking to make your very first pull request, Fedora Hatch is designed for you. This is our way of bringing the experience of Flock (our annual contributor conference) to a local level. It focuses on connection, collaboration, and community growth.
What’s Happening?
This year, Fedora has secured a dedicated track on Friday at SCaLE. We’ve curated a line-up that balances technical deep dives with essential community initiatives.
When: Friday, March 6, 2026
Where: Room 208, Pasadena Convention Center
Who: You! (And a bunch of friendly Fedorans)
The Schedule Highlights
We have a packed morning featuring five talks and a hands-on workshop:
Getting Started in Open Source and Fedora (Amy Marrich): Are you new to the world of open source? Or are you looking to make your first contribution? This session will provide a guide for beginners interested in contributing to open source projects. It will focus on the Fedora project. We’ll cover a variety of topics, like finding suitable projects, making your first pull request, and navigating community interactions. Attendees will leave with practical tips, resources, and the confidence to embark on their open source journey.
Fedora Docs Revamp Initiative (Shaun McCance): The Fedora Council recently approved an initiative to revamp the Fedora docs. The initiative aims to establish a support team to maintain a productive environment for writing docs. It will establish subteams with subject matter expertise to develop docs in specific areas of interest. We’ll describe some of the challenges the Fedora docs have faced, and present the progress so far in improving the docs. You’ll also learn how you can help Fedora have better docs.
A Brief Tour of the Age of Atomic (Laura Santamaria): Ever wished to try a number of different desktop experiences quickly in your homelab? Maybe it’s time to explore Fedora Atomic or Universal Blue! The tour starts with what makes these experiences special. It will then review the options including Silverblue, Cosmic, Bluefin and Bazzite (yes, the gaming OS). We’ll briefly get under the hood to explore bootc, the technology powering Atomic. Finally, we’ll explore how you can contribute to the future of Fedora Atomic.
Accelerating CentOS with Fedora (Davide Cavalca): This talk will explore how CentOS SIGs are able to leverage the work happening in Fedora to improve the quality and velocity of packages in CentOS Stream. We’ll cover how the CentOS Hyperscale SIG is able to deliver faster-moving updates for select packages, and how the CentOS Proposed Updates SIG integrates bugfixes and improves the contribution process to the distribution.
Agentic Workloads on Linux: Btrfs + Service Accounts Architecture (David Duncan): As AI agents become more prevalent in enterprise environments, Linux systems need architectural patterns that provide isolation, security, and efficient resource management. This session explores an approach, using BTRFS subvolumes combined with dedicated service accounts, to build secure, isolated environments for autonomous AI agents in enterprise deployment.
RPM Packaging Workshop (Carl George): While universal package formats like Flatpak, Snap, and AppImage have gained popularity for their cross-distro support, native system packages remain a cornerstone of Linux distributions. These native formats offer numerous benefits. Understanding them is essential for those who want to contribute to the Linux ecosystem at a deeper level. In this hands-on workshop, we’ll explore RPM, the native package format used by Fedora, CentOS, and RHEL. RPM is a powerful and flexible tool. It plays a vital role in the management and distribution of software for these operating systems.
Don’t forget to swing by the Fedora Booth in the Expo Hall! Our team will be there all weekend (March 6–8) with live demonstrations of Fedora Linux 43, GNOME 49 improvements, and plenty of fresh swag to go around.
Registration Details
To join us at the Hatch, you’ll need a SCaLE 23x pass.
Location: Pasadena Convention Center, 300 E Green St, Pasadena, CA.
Well, another Saturday, time for another bit of longer-form recapping
what has been going on in fedora infrastructure and other areas for me.
Infrastructure
Fedora 44 Beta freeze
We started the beta freeze in infrastructure. This is to make sure that
we don't cause any problems for the release building and distribution
pipeline. We require some acks for any changes that might
impact that pipeline until the day after the Beta is released.
I think this has served us fine over the years. Every once in a while
I wonder if we could just stop doing it as we are usually pretty good
about not breaking things day to day, but having the extra eyes on
changes and slowing down a bit is a good thing I think.
Forge migrations
We have been busy working on migrating things from pagure.io to
forge.fedoraproject.org. On Tuesday just before the freeze we finally
got our ansible repo moved over. I've really been looking forward
to this as the review interface in forgejo is a good deal nicer
than the pagure one. I've already used it to great effect.
We do still have a few more things to migrate, but overall it's
moving along nicely.
Last bits of rdu2-cc move
We finally finished off the last things (at least that I am aware of)
for things we moved last December from rdu2-cc to rdu3.
There was a very strange and difficult to figure out problem for
copr builders on ipv6 that I wasn't able to track down, but luckily
Pavel worked with networking and finally did so! It seems to have
been an odd caching bug in the switches. Hopefully it's now gone
once and for all.
There were some hardware issues to sort out: some bad network cards
that had to be replaced, a machine that didn't actually move when
it was supposed to, etc.
Anyhow, I hope all that work is finally done.
Signing work
Finally got back to deploying / testing the new signing path for
secure boot signing. I got it all deployed, just need to get
things tested now and hopefully we can switch over after the freeze.
This should hopefully allow us to sign aarch64 kernels for
secure boot as well as removing reliance on an old smart card
for signing.
If you read me from time to time, you know I have a fairly pragmatic attitude toward my tools. I don’t like changing for the sake of changing, but I also don’t hesitate to move when a solution stagnates too much or takes a direction I don’t like. Recently, I actually confirmed my choice to stay on […]
Today I completed an online HR course at my company on prejudice and harassment, covering topics such as ethnicity, age, gender, and disability. I found the content excellent. Engaging, well-structured, and truly educational. I learned a great deal. These days, this kind of training is required in most organizations. It made me reflect on how, not so long ago, humanity might have looked to the future appreciating that we are finally addressing these issues constructively and consciously. Even though we are still far from eradicating prejudice on this planet, I want to acknowledge and congratulate all of us for the steps we are taking. It is through the accumulation of small steps that we will go far.
NVidia on Linux has always been, and sometimes still is, a pain, especially on laptops with a dual-GPU setup. Although the situation is getting better, it still isn’t perfect.
Motivation
For a long time, I was switching between the integrated Intel GPU and the dedicated NVidia GPU manually, which wasn’t ideal and required a re-login every time. But the solution was right there!
Setup
The official drivers for NVidia GPUs allow offloading: your DE and every app you launch run on the integrated GPU by default, but you can export a few environment variables to run a specific process on the dedicated GPU. This means you need to have the official drivers installed on your system; please follow this guide for installation instructions.
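As a sketch of what offloading looks like in practice: the two environment variables below come from NVIDIA’s PRIME render offload documentation, while the guard and fallback messages are my own additions so the snippet is safe to run anywhere:

```shell
# Query the dedicated GPU's OpenGL renderer via PRIME render offload.
# Degrades gracefully when glxinfo or the NVidia driver is absent.
if command -v glxinfo >/dev/null 2>&1; then
  out=$(__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia \
        glxinfo 2>/dev/null | grep "OpenGL renderer" || true)
  out=${out:-"offload query failed (no NVidia driver?)"}
else
  out="glxinfo not installed; skipping"
fi
echo "$out"
```

The same variable prefix works for any command, e.g. launching a game or a renderer on the dedicated GPU while the desktop stays on the integrated one.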
The Fedora Project was fortunate enough to secure a booth at DevConf India in Pune. Akashdeep Dhar, Samyak Jain, Shounak Dey, and I attended DevConf India representing the Fedora Project. The booth was a success; we attracted a lot of newcomers and people in the early phase of their open source journey. Most of the visitors were college students early in their careers. Matthew Miller’s talk also drew a large audience and filled the room.
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora Project.
Week: 16 Feb – 20 Feb 2026
Fedora Infrastructure
This team is taking care of the day-to-day business regarding Fedora Infrastructure. It’s responsible for services running in Fedora infrastructure. Ticket tracker
[GSoC Project Idea 2026] Revamp Fedora Badges project with modern fullstack architecture and dedicated MCP support [Ticket][Followup]
[Infra] Added package and installed size to package metadata [Review][Lint]
Migration of pagure.io repositories to forge.fedoraproject.org continues (9 more repositories migrated)
Resolved authentication issues with WordPress instances (thanks to misc)
Fixed database connection issues on Dist-Git
Dep updates and CI fixes for our apps on GitHub
Worked on the port of bugzilla2fedmsg to Kafka (since the UMB deprecation), deployed it to staging, asked RHIT for firewall ports.
CentOS Infra including CentOS CI
This team is taking care of the day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure. It’s responsible for services running in CentOS Infrastructure and CentOS Stream. CentOS ticket tracker CentOS Stream ticket tracker
Release Engineering
This team is taking care of the day-to-day business regarding Fedora releases. It’s responsible for releases, the retirement process of packages, and package builds. Ticket tracker
Fedora 44 Beta Freeze is now in effect.
RISC-V
This is the summary of the work done regarding the RISC-V architecture in Fedora.
(Not a lot to report this week, besides the routine on-going work.)
Started a discussion with the RISC-V team about RHEL builders for Konflux. (This is not about general Konflux support, that’s out of scope)
Continued to investigate Fedora 44 build failures and all that entails — working with relevant upstream maintainers to get changes reviewed, merged, etc.
Sorted out a build-timeout issue with Copr upstream. (Copr is currently used by Jason Montleon to build some board-specific kernels.)
QE
This team is taking care of the quality of Fedora: maintaining CI, organizing test days, and keeping an eye on the overall quality of Fedora releases.
TestDays App was updated in production.
Anubis no longer breaks actions in Forge thanks to our debugging (and Infra fixing it, of course).
Blockerbugs meetings and the whole blocker review process started this week.
Ran test days: Grub OOM fix, GNOME 50
Forgejo
This team is working on the introduction of https://forge.fedoraproject.org to Fedora and the migration of repositories from pagure.io.
The Podman team and the Fedora Quality Assurance team are organizing a Test Week from Friday, February 27 through Friday, March 6, 2026. This is your chance to get an early look at the latest improvements coming to Podman and see how they perform on your machine.
What is Podman?
For those new to the tool, Podman is a daemonless, Linux-native engine for running, building, and sharing OCI containers. It offers a familiar command-line experience but runs containers safely without requiring a root daemon.
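A minimal illustration of that daemonless, rootless workflow — the image name is just an example, and the block skips itself when podman isn’t installed or the pull fails:

```shell
# A rootless "hello": no daemon, no root required.
if command -v podman >/dev/null 2>&1; then
  # Pull (if needed) and run a throwaway container; capture whatever it says.
  out=$(podman run --rm registry.fedoraproject.org/fedora:43 \
        echo "no daemon needed" 2>&1)
  out=${out:-"run produced no output"}
else
  out="podman not installed; skipping"
fi
echo "$out"
```

Because there is no central daemon, the container runs as a child of your own shell session under your own user ID.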
What’s Coming in Podman 5.8?
The upcoming release includes updates designed to make Podman faster and more robust. Here is what you can look forward to, and what you can try out during this Fedora Test Day.
A Modern Database Backend (SQLite)
Podman is upgrading its internal storage logic by transitioning to SQLite. This change modernizes how Podman handles data under the hood, aiming for better stability and long-term robustness.
Faster Parallel Pulls
This release brings optimizations to how Podman downloads image layers, specifically when pulling multiple images at the same time. For a deep dive into the engineering behind this, check out the developer blog post on Accelerating Parallel Layer Creation.
Experiment and Explore: Feel free to push the system a bit and try pulling several large images simultaneously to see if you notice the performance boost. Beyond that, please bring your own workflows. Don’t just follow the wiki instructions. Run the containers and commands you use daily. Your specific use cases are the best way to uncover edge cases that standard tests might miss.
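One hedged way to stress the parallel-pull path from a shell — the image tags are examples only, and the block degrades to a note when podman or the network is unavailable:

```shell
# Start several pulls in the background, then wait for all of them.
if command -v podman >/dev/null 2>&1; then
  for img in registry.fedoraproject.org/fedora:43 \
             registry.fedoraproject.org/fedora:42; do
    podman pull "$img" &          # each pull runs concurrently
  done
  wait                            # block until every pull finishes
  pulled=$(podman images --format '{{.Repository}}:{{.Tag}}' 2>/dev/null)
  pulled=${pulled:-"pulls did not complete (no network?)"}
else
  pulled="podman not installed; skipping"
fi
echo "$pulled"
```

Timing a run like this before and after upgrading to 5.8 is an easy way to see whether the layer-creation optimizations help on your hardware.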
Potential GSoC contributors may reach out with questions about our project ideas or GNOME internships in general. Please direct them to gsoc.gnome.org to learn more.
There is a new toy in the house. It is a miniature workstation from HP, built around AMD’s Ryzen AI Max+ PRO 395 chip. If you are interested in the specifications and other details, check the HP product page at https://www.hp.com/us-en/workstations/z2-mini-a.html. In the long run, this box will serve many purposes:
learning AI, but running as much as possible locally instead of utilizing cloud services
learning Kubernetes by building everything from scratch on multiple virtual machines
home server: running complex test environments on a single box (128 GB of RAM should be enough in most cases :-) )
photo editing using Capture One Pro
occasional gaming :-)
For now, I have finished unboxing and taken the first steps with Windows. It worked, though I made a mistake during setup and had to reinstall. I do not mind, since I do not like using pre-installed operating systems anyway. At least I know the machine works.
The whole packaging is smaller than my previous desktop computer
The computer itself is barely larger than a book
Keyboard, mouse, display port converter all in the box
On the chaos of my desk :-)
Right now I am hesitant to migrate any production applications or data to the new box. I have already clicked “use the whole disk” instead of creating a partition a few times, so I want to finalize the partitioning before using the box for anything beyond hardware testing. Better safe than sorry :-)
I plan to install a couple of Linux distributions. I mainly use openSUSE on the desktop, but I found instructions for Fedora to accelerate AI on this AMD chip. So, I’ll most likely install both. And, of course, I also plan to install FreeBSD 15 on the machine and see how it works both as a server and as a desktop.
I plan to post updates about my experiences in the coming weeks.
UDP log collection is a legacy feature that does not provide any security or reliability, but is still in wide use. You can improve its reliability using eBPF on Linux in recent syslog-ng versions. Support for eBPF was added to Debian packages while preparing for the 4.11.0 syslog-ng release.
Right now, packaging changes only affect the syslog-ng nightly Debian / Ubuntu packages and the syslog-ng nightly container image. You can learn more about how to use them in the syslog-ng README on GitHub at https://github.com/syslog-ng/syslog-ng/. Once the syslog-ng 4.11.0 release is available, the stable syslog-ng packages will include improved UDP support as well.
Want to learn the latest container tech? From February 27 to March 6, 2026, you can join the Podman 5.8 Test Day. It is the perfect time to explore new features and see how the future of Fedora is built.
What is new?
Faster Downloads: Try optimized “parallel pulls” to get your images in seconds.
Setup: Test the new “automatic database creation” for a smoother start.
Expert Skills: Learn how to use the latest container environments before they go mainstream.
Why join?
Your setup is unique. By running Podman 5.8 on your machine, you make sure the final version works perfectly for everyone. It is a great way to learn by doing and to see how top-tier open-source software is made.
I’ve used some form of DSri Seah’s Compact Calendar for over seven years. The calendar is a lovingly designed single-page view of the entire year, organized into Monday-through-Sunday weeks with no breaks between months.
The point of the format is simple: my normal calendar is great at telling me what I’m doing on Tuesday. What it’s terrible at is answering planning questions that are above the day level, such as:
If we take a vacation the last two weeks of July, will it overlap business travel?
Can we connect these two public holidays and get 14 days away for only 8 days of PTO?
Do we have any genuinely empty weeks left this year?
For a long time, my compact calendar was a spreadsheet. That worked until it didn’t.
The problem I actually needed to solve
The spreadsheet version served me well for years, but life got more complicated.
My kid is getting older, which means more activities to track: summer camps, school breaks, etc. My partner and I no longer work for the same company, so we don’t share the same corporate holidays, and as our roles have changed, so has the amount of travel we do. And, honestly, my spreadsheet has bespoke formulas that only I understand … on Thursdays when there is a full moon.
My partner knows how to use a calendar app. She really doesn’t want to learn a special spreadsheet for planning, and I don’t blame her.
The real friction screaming out that there had to be a better way was the double-entry work. If my kid has summer camp in July, I’d put it on the family calendar - and then manually mark those weeks on my compact calendar spreadsheet. Two sources of truth means one of them is eventually wrong.
So the job wasn’t “build a better calendar.” It was: keep the year-at-a-glance view, but make the calendar app the source of truth.
The shape of the solution
I decided to build a web version of the compact calendar that could read directly from standard ICS calendar feeds.
Put the summer camp on the shared calendar once. The compact calendar picks it up automatically.
And if this was going to be something my partner and I actually used together, it needed two things:
A simple setup flow (not “copy this spreadsheet and don’t touch column Q”)
A way to always be available, beyond “go find this Google Docs link”
What the tool does
The calendar renders a full year on a single page. Each row is one week, Monday through Sunday.
Parallel to the block of weeks running down the page is a column for displaying committed events and a second for displaying possible events.
Committed: events that are definitely happening - travel that’s booked, school terms, confirmed work trips.
Possible: things under consideration - a conference I submitted a talk to but haven’t heard back from yet, vacation options we’re weighing.
The tool uses color to signal status at a glance:
Blue background: first day of the month (anchors the continuous weeks)
Red text: public holidays (per selected country)
Green background: committed events
Yellow background: possible events
Green background with a yellow border: overlaps/conflicts that need attention
Here’s what the full-year view looks like with demo data loaded:
Inputs: URL, file, or demo
While there is demo data available in the system, the real value comes from loading your own data. You can choose between two kinds of sources:
A URL - a webcal:// or https:// link to a published calendar (iCloud, Google Calendar, etc.)
A file - a .ics file uploaded from your computer
We’re an Apple household so our calendars live in iCloud, but the tool doesn’t care about your calendar provider. Anything that produces a standard ICS feed works.
My practical workflow is two shared calendars in Apple Calendar:
one for committed travel and events. For me, this is actually my shared calendar that our family maintains.
one for possibilities we’re considering
Both are published as webcal URLs, and the compact calendar fetches them and renders the year view. Using my shared calendar works because the app ignores events that aren’t multi-day, all-day blocks - so dentist appointments don’t drown out the year view. You can optionally include single day all-day events if that helps you.
The setup controls are intentionally simple:
The tech (and the annoying part)
This is a vanilla JavaScript app built with Vite, hosted on Azure Static Web Apps. No framework - just DOM manipulation, a CSS file, and under 500 lines of main application code.
The interesting technical problem was CORS.
Calendar providers like iCloud don’t set CORS headers on their published feeds, which means a browser can’t fetch them directly. The solution is a small Azure Function that acts as a proxy:
the browser sends the calendar URL to the server
the server fetches the calendar data
the server returns it to the browser
The proxy doesn’t store or log anything. It’s a pass-through.
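For illustration only, a request with roughly that shape might look like the following. The endpoint path, host, and JSON field name below are invented for this sketch, not taken from the app’s source; the point is that the calendar URL travels in the POST body rather than the query string:

```shell
# Hypothetical proxy call: the calendar URL rides in the POST body,
# so it never appears in query-string request logs.
body='{"url": "webcal://p01-caldav.icloud.com/published/2/EXAMPLE"}'
resp=$(curl -s -X POST "https://example.com/api/fetch-calendar" \
            -H "Content-Type: application/json" \
            -d "$body" 2>/dev/null || echo "network unavailable")
echo "sent ${#body} bytes in the body; got ${#resp} bytes back"
```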
I built the app with an AI coding agent. I provided direction and made decisions, but I didn’t hand-write every line. For this kind of tool, I’m comfortable with that. It’s a static site that renders calendar data client-side, and the risk profile is low. Additionally, nothing in this code represents a new problem or a novelty. This is bog-standard code, and the agent handled the boilerplate well for this project.
Importantly, even though I could have written this code myself, I wouldn’t have. I probably would have gotten myself caught in a bit of analysis paralysis over frameworks. But more importantly, writing a lot of this code is just boring code to write. The AI agent has allowed me to solve my own problem, and that’s the part that matters to me. I didn’t have to suddenly become more disciplined about spreadsheets or get my family dragged onto a tool that really only speaks to me. Instead, I was able to change the shape of the problem and make it more solvable within the context of the humans involved.
Privacy and the honest trade-off
All your data stays in your browser. The app stores the URLs you’re loading, your selected country, and cached holiday data in local storage. This is purely functional and not for tracking.
Calendar URLs necessarily have to go through the server-side proxy because browsers won’t fetch them directly. The proxy is a stateless pass-through — I don’t persist calendar data in the function or in your browser. Calendar URLs are sent via POST request body rather than query parameters, which means they aren’t captured in Azure’s platform-level request logs. Error logging includes only the target hostname (e.g., “iCloud fetch failed”), never the full URL or authentication tokens. If your calendar URL contains authentication tokens (iCloud URLs do), understand that the proxy briefly sees them in transit.
Try it out
The calendar is live at cc.bexelbie.com. You can load the built-in demo data to explore without connecting your own calendars - select “Demo” from either input dropdown.
To control which devices LVM can work with, it was always possible to configure filtering in the devices section of the /etc/lvm/lvm.conf configuration file. But filtering devices this way was not very simple and could lead to problems when using paths like /dev/sda, which are not stable. Many users also didn’t know this possibility existed, and while using this type of filtering is possible for a single command with the --config option, it is not very user friendly. This all changed recently with the introduction of the new configuration file /etc/lvm/devices/system.devices and the corresponding lvmdevices command in LVM 2.03.12. A new option, --devices, was also added to the existing LVM commands for a quick way to limit which devices one specific command can use.
LVM Devices File
As was said above, there is a new /etc/lvm/devices/system.devices configuration file. When this file exists, it controls which devices LVM is allowed to scan. Instead of relying on matching the device path, the devices file uses stable identifiers like WWID, serial number or UUID.
A devices file on a simple system with a single physical volume on a partition would look like this:
# LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 187757 at Fri Feb 13 16:44:45 2026
# HASH=1524312511
PRODUCT_UUID=4d58d0c1-8b67-4fa6-a937-035d2bfbb220
VERSION=1.1.1
IDTYPE=devname IDNAME=/dev/sda2 DEVNAME=/dev/sda2 PVID=rYeMgwy0mO0THDagB6k8mZkoOSqAWfte PART=2
When the devices file is enabled, LVM will only scan and operate on devices listed in it. Any device not present in the file is invisible to LVM, even if it has a valid PV header.
This is the biggest change brought in with this feature. The old lvm.conf based filters were always optional and LVM always scanned all devices in the system, unless told otherwise. This could cause problems on systems with many disks, where LVM (especially during boot) could take a long time scanning devices that did not even “belong” to it.
By default, the LVM devices file is enabled with the latest versions of LVM and on systems without preexisting volume groups, creating new LVM setups with commands like pvcreate or vgcreate will automatically add the new physical volumes to the devices file. If desired, this feature can be disabled by setting use_devicesfile=0 in lvm.conf or by simply removing the existing devices file. On systems without the devices file, LVM will simply scan all devices in the system the same way it did before introduction of this configuration file.
Managing Devices with lvmdevices and vgimportdevices
On most newly installed systems with LVM, the devices file should already be present and populated, but you might want to either create it later on systems installed with an older version of LVM, or manage some devices manually. It is possible to modify the system.devices file manually, but a new command, lvmdevices, was added for simple management of the file.
To import all devices in an existing volume group, vgimportdevices <vgname> can be used; for all volume groups in the system, use vgimportdevices -a.
A single physical volume can be added to the file with lvmdevices --adddev and removed with lvmdevices --deldev.
To check all entries in the devices file, lvmdevices --check can be used and any issues found by the check command can be fixed with lvmdevices --update.
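The commands above can be combined into a short session. This is a cautious sketch: only the read-only operations actually run (and only with root and the lvm2 tools present), the modifying commands are left commented out, and /dev/sdb1 is a placeholder device path:

```shell
# Devices-file management session; only read-only operations execute.
if [ "$(id -u)" -eq 0 ] && command -v lvmdevices >/dev/null 2>&1; then
  lvmdevices || true           # list current entries in system.devices
  lvmdevices --check || true   # validate entries against actual devices
  # vgimportdevices -a             # import PVs of every visible VG
  # lvmdevices --adddev /dev/sdb1  # add one PV (placeholder path)
  # lvmdevices --deldev /dev/sdb1  # remove it again
  # lvmdevices --update            # repair issues reported by --check
  session="ran"
else
  session="skipped: needs root and LVM >= 2.03.12"
fi
echo "$session"
```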
Backups
In the sample devices file above, you might have noticed the VERSION field. This is the current version of the file. LVM automatically makes a backup of the file with every change and old versions of the file can be found in the /etc/lvm/devices/backup directory. So if you make some mistakes when changing the file with lvmdevices, you can simply restore to a previous version of the file.
Overriding the Devices File and Filtering with Commands
Together with the devices file feature, a new option --devices was added to all LVM commands. This option allows specifying devices which are visible to the command. This overrides the existing devices file so it can be used either to restrict the command to work only on a subset of devices specified in the devices file or even to allow it to run on devices not specified in the file at all.
This option is also very useful when dealing with multiple volume groups with the same name. This is a known limitation of LVM – two volume groups with the same name cannot coexist in one system and LVM will refuse to work without renaming one of them. This can be a problem when dealing with cloned disks or backups. With --devices, commands like vgs can be restricted to “see” only one of the volume groups.
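A hedged example of that override — the device path and the duplicate-VG situation are hypothetical, and the command only runs with root and the LVM tools installed:

```shell
# Restrict one vgs invocation to a single device (path is a placeholder).
if [ "$(id -u)" -eq 0 ] && command -v vgs >/dev/null 2>&1; then
  # Only /dev/sdb1 is visible to this one command, so a same-named VG
  # on a cloned disk cannot confuse it.
  vgs --devices /dev/sdb1 || true
  result="ran"
else
  result="skipped: needs root and lvm2"
fi
echo "$result"
```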
Issue: Missing Volume Group
As mentioned above, when installing a new system with LVM, for the newly created volume groups, the used devices will be added to the devices file. Fedora (and RHEL) installer, Anaconda, will also add all other volume groups present during installation to the devices file so these will also be visible in the installed system. The problems start when a device with a volume group is added to the system after installation. The volume group (and any logical volumes in it) is suddenly invisible. Even commands like vgs will simply ignore it, because its physical volumes are not listed in the devices file.
This can be a problem on dual boot systems with encryption. Because the second system’s volume group is “hidden” by the encryption layer, it is not visible during installation and not added to the devices file. When the user unlocks the LUKS device in their newly installed system, they can’t access their second system. Unfortunately in this situation, the only solution is to manually add the second system’s volume group with vgimportdevices as described above.
Conclusion
The LVM devices file provides a cleaner and more reliable way to control which devices LVM uses, replacing the old lvm.conf based filtering with stable device identifiers and simple management through the lvmdevices command. Overall, for most users the devices file should work transparently without any manual configuration needed.
Mrhbaan, Fedora community! I am happy to share that as of 10 February 2026, Fedora is now available in Syria. Last week, the Fedora Infrastructure Team lifted the IP range block on IP addresses in Syria. This action restores download access to Fedora Linux deliverables, such as ISOs. It also restores access from Syria to Fedora Linux RPM repositories, the Fedora Account System, and Fedora build systems. Users can now access the various applications and services that make up the Fedora Project. This change follows a recent update to the Fedora Export Control Policy. Today, anyone connecting to the public Internet from Syria should once again be able to access Fedora.
This article explains why this is happening now. It also covers the work behind the scenes to make this change happen.
Why Syria, why now?
You might wonder: what happened? Why is this happening now? I cannot answer everything in this post. However, the story begins in December 2024 with the fall of the Assad regime in Syria. A new government took control of the country. This began a new era of foreign policy in Syrian international relations.
This may seem like a small change. Yet, it is significant for Syrians. Some U.S. Commerce Department regulations remain in place. However, the U.S. Department of the Treasury’s policy change now allows open source software availability in Syria. The Fedora Project updated its stance to welcome Syrians back into the Fedora community. This matches actions taken by other major platforms for open source software, such as Microsoft’s GitHub.
Syria & Fedora, behind the scenes
Opening the firewall to Syria took seconds. However, months of conversations and hidden work occurred behind the scenes to make this happen. The story begins with a ticket. Zaid Ballour (@devzaid) opened Ticket #541 to the Fedora Council on 1 September 2025. This escalated the issue to the Fedora Council. It prompted a closer look at the changing political situation in Syria.
Jef Spaleta and I dug deeper into the issue. We wanted to understand the overall context. The United States repealed the 2019 Caesar Act sanctions in December 2025. This indicated that the Fedora Export Control Policy might be outdated.
During this time, Jef and I spoke with legal experts at Red Hat and IBM. We reviewed the situation in Syria. This review process took time. We had to ensure compliance with all United States federal laws and sanctions. The situation for Fedora differs from other open source communities. Much of our development happens within infrastructure that we control. Additionally, Linux serves as digital infrastructure. This context differs from a random open source library on GitHub.
However, the path forward became clear after the repeal of the 2019 Caesar Act. After several months, we received approval. Fedora is accessible to Syrians once again.
We wanted to share this exciting announcement now. It aligns with our commitment to the Fedora Project vision:
“The Fedora Project envisions a world where everyone benefits from free and open source software built by inclusive, welcoming, and open-minded communities.”
We look forward to welcoming Syrians back into the Fedora community and the wider open source community at large. Mrhbaan!
ℹ️ This new major version requires PHP ≥ 8.4 and is not backward compatible with previous versions, so the package is designed to be installed beside versions 8, 9, 10, 11, and 12.
Installation:
dnf --enablerepo=remi install phpunit13
Notice: This tool is an essential component of PHP QA in Fedora. This version should be available soon in the Fedora ≥ 43 official repository (19 new packages).
Another weekly recap of happenings around Fedora for me.
Strange long httpd reload times on proxy11
I spent a fair bit of time looking at one of our proxies.
We have them all do a reload (aka 'graceful restart') every
hour when we update a ticketkey on them. For the vast majority of them,
that's fine and works as expected. However, proxy11 decided to start
taking a while (like 12-15 seconds) to reload, causing our monitoring
to alert that it was down... then back up.
In the end, it seemed the problem was somehow related to some old
tls certificates that were present, but not used anywhere. All I can
think of is that it's doing some kind of parsing of all certs and
somehow those old ones cause it undue processing time. I removed
those old certs and reload times went way back down again.
I'm tempted to try and figure out what it's doing exactly here, but
I already spent a fair bit of time on it and it's working again now,
so I guess I will just shrug and move on.
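For anyone chasing a similar issue, the symptom can be measured roughly like this (a sketch; the paths assume a stock httpd layout):

```shell
# Time the reload itself; this is what the monitoring saw taking 12-15 seconds
time systemctl reload httpd

# Count how many certificates httpd has to parse on each reload
grep -rh 'SSLCertificateFile' /etc/httpd/conf.d/ | wc -l
```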
Anubis and download servers
A while back I had to hurriedly deploy anubis in front of our download servers.
This was due to the scrapers deciding to just download every rpm / iso from
every fedora release since the dawn of time at a massive concurrency.
This was saturating one of our 10G links completely, and making another
somewhat full. So, I deployed anubis and it dropped things back to 'normal'
again.
Fast forward to this last week, and my rush in deploying anubis came back to
bite me. We have a CloudFront distribution that uses our download servers as
its 'origin'. Then we point all AWS network blocks to use that for any
Fedora instances in AWS. This is a win for us, as everything for them
is cached on the AWS side, saving bandwidth, and a win for AWS users, as
that traffic is 'local' to them, so it's faster and doesn't cause them
to be billed for ingress either.
Last week, anubis started blocking CloudFront, so users in AWS would get an
anubis challenge page instead of the actual content they were expecting.
But why did this just happen now? Well, as near as I could determine,
someone/scrapers were hitting the CloudFront endpoints and crawling our
download server (fine, no problem there), but then they hit a directory
that they handled poorly.
The directory was used/last updated about 11 years ago, with a readme file
explaining that the content had moved and was no longer there. Great. However,
it also had the previous subdirectories as links to '.' (i.e., the current directory).
Since scrapers don't use any of the 20 years of crawling code, and instead
just brute force things, this resulted in a bunch of requests like:
GET /foo/
GET /foo/foo/
GET /foo/foo/foo/
and so on. These are all really small (just a directory listing), so that meant
it could make requests really really fast. So, after some point anubis
started challenging those CloudFront connections and boom.
So, the problem with the hurried deployment I had made there was that
the policy file I had deployed was not actually being used.
I had allowed CloudFront, but it didn't seem to help any, and it took
me far too long to figure out that anubis was starting up, printing
one error about not being able to read the policy file, and just running
with the default configuration. ;( It turned out to be a podman/selinux
interaction and is now fixed.
I also removed those . links and set that directory tree to just 403
all requests to it.
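Blanket-403ing a directory tree like that can be done with a small httpd config stanza (the path here is hypothetical, not the actual directory involved):

```apache
# Deny all requests under the retired tree
<Directory "/srv/pub/archive/old-tree">
    Require all denied
</Directory>
```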
Anubis and forge
Also this week, folks were reporting problems with our new forgejo forge.
Anubis was doing challenges when people were trying to submit comments and
it was messing them up.
In the end here, I just needed to adjust the config to allow POSTs through.
At least right now scrapers aren't doing any POSTs, and just allowing those
seems to fix the issues people were having.
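In the Anubis bot policy file, that adjustment could look roughly like this (a sketch assuming the expression-based rule syntax; the rule name is made up, and the exact schema depends on the Anubis version, so check its policy documentation):

```yaml
bots:
  # Let form submissions through without a challenge; scrapers
  # currently only issue GETs, so this is safe for now.
  - name: allow-posts
    action: ALLOW
    expression: method == "POST"
```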
Some more scrapers
Friday we had them hitting release-monitoring.org. This time it was
what I am calling a 'type 0' scraper. It was all coming from one cloud IP
and I could just block them.
This morning a bit ago, we had a group hit/find the 'search' button
on koji.fedoraproject.org, taking it offline. I was able to block the
endpoint for a few hours and they went away, but no telling if they will
be back. These were the 'type 2' kind (a botnet using users' IPs/browsers
from hundreds of thousands of different IPs).
I am sad that the end game here sounds like there won't be much of an
open internet anymore. I.e., in self defense, sites will all have
to require registration of some kind before working.
I can only hope business models change before it comes to that.
FOSDEM is one of my favorite events because I get to catch up with friends from across the FOSS communities and meet so many amazing contributors who use and support open source. It’s inspiring to see so many people excited about building software, hardware, and everything in between together. That energy motivates me in my own open source work.
This year, I was also glad to attend two FOSDEM fringe events, CHAOSSCon-EU and CentOS Connect, where I got to sit on panels.
A few minutes ago I was answering an email in Thunderbird, and I realized one thing that might have been there for years. The date was in the wrong format! (Wrong as in for me, of course).
I use English (US) for my desktop environment, but I change the format of several things because I use the metric system, and I need the Euro sign and normal dates. Sorry, but month, day, and year is a weird format.
The “normal” thing would be to use my country format. But if I select a format from Spain, I get dates in Spanish and in a format that I also hate:
Yes, we have many different languages in Spain.
What I want is ISO 8601 and English. But I don’t want to modify each field manually; that’s too much work. The weird trick is to use Denmark (English). I am not kidding. And I am not alone. At all.
Weird, huh?
Why, you may ask? Look at this beauty. It’s just perfect.
Thank you, Denmark
Anyway. My problem is with Thunderbird. It looks like it doesn’t support having a language and a format from different regions. Thankfully they documented it here.
So now, I have:
intl.date_time.pattern_override.date_short set to yyyy-MM-dd
intl.date_time.pattern_override.time_short set to HH:mm
I guess I might need more stuff, but at least for now I don’t see a weird date when I am answering emails.
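Incidentally, the reason the Denmark (English) trick works system-wide is that glibc’s en_DK locale defines its short date format as ISO 8601. A quick check from the shell (assuming Linux with GNU date; it falls back to plain ISO formatting if the en_DK locale is not generated on the system):

```shell
# glibc's en_DK locale uses %Y-%m-%d as its short date format (%x),
# which is exactly ISO 8601. Fall back to date's +%F (ISO 8601
# directly) if the locale is not available.
if locale -a 2>/dev/null | grep -qi '^en_DK'; then
    LC_TIME=en_DK.UTF-8 date -d 2026-02-14 +%x
else
    date -d 2026-02-14 +%F
fi
```

Either branch prints the date as 2026-02-14.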
P.S.: Talking about weird things I like to configure… My keyboards are ANSI US QWERTY. But the layout I use is English (intl., with AltGr dead keys). So I can type Spanish letters using the right Alt and a key (e.g. AltGr + n gives me ñ).
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora Project.
Week: 09 – 13 February 2026
Fedora Infrastructure
This team is taking care of day to day business regarding Fedora Infrastructure. It’s responsible for services running in Fedora infrastructure. Ticket tracker
Ported bugzilla2fedmsg to use Red Hat’s Kafka servers, since UMB will be decommissioned. The code with Kafka support is currently in staging but requires firewall ports to be opened (ticket 13133)
[Tahrir] Add PUT endpoint for updating the USER relation and integrate it with the frontend [Review][Merged]
[Tahrir] List profile invitations when the user has logged in [Pull request]
CentOS Infra including CentOS CI
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure. It’s responsible for services running in CentOS Infrastructure and CentOS Stream. CentOS ticket tracker CentOS Stream ticket tracker
Fedora Release Engineering
This team is taking care of day to day business regarding Fedora releases. It’s responsible for releases, the retirement process of packages, and package builds. Ticket tracker
Fedora 44 Mass Branching was successfully completed last week!
Beta Freeze is tentatively scheduled to begin next week, Tue 2026-02-17.
RISC-V
This is the summary of the work done regarding the RISC-V architecture in Fedora.
Fedora RISC-V “unified kernel” work in progress (Jason Montleon): targeted for F44. Right now the kernels are being built in Copr. They will move to Koji once the F44 buildroot is populated.
Work by Fabian Arrotin: RISC-V builders in the CentOS Build System (CBS) can now build from official sources (git+https from CentOS Stream).
QE
This team is taking care of quality of Fedora. Maintaining CI, organizing test days and keeping an eye on overall quality of Fedora releases.
Coordinated with the releng team on branching, as it involves openQA, critical path, and the gating policies. Things went much smoother this time, but there were still lots of lessons learned, and SOP improvements were drafted and suggested.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/116149442549416772