
Fedora People

Uniqlo T-Shirt Bash Script Easter Egg

Posted by Avi Alkalay on 2026-03-03 21:18:35 UTC

At the Uniqlo flagship store in Ginza, Tokyo, there was this T-shirt with an encoded shell script.

Well, I had to decode it and see the result.

I took a photo with my iPhone and used its built-in OCR, which caused a lot of confusion between 0 (zero), O (capital o) and 8, mixed up 1 (one) and l (lowercase L), and rendered many characters as visually similar glyphs from higher Unicode ranges, which are invalid in Base64. It took me some time to fix it all. The final corrected text is this:
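Before trusting a transcription like that, a quick sanity check helps: Base64 only uses a small alphabet, so any stray OCR glyph is easy to flag. A minimal sketch (the payload here is a stand-in, not the T-shirt text):

```shell
# Sample payload standing in for the OCR'd text
ocr_text='SGVsbG8sIHdvcmxkIQ=='

# Print any character outside the Base64 alphabet (A-Z a-z 0-9 + / =);
# no output means the transcription is at least structurally valid.
printf '%s' "$ocr_text" | tr -d '[:space:]' | grep -o '[^A-Za-z0-9+/=]' | sort -u

# A round trip through base64 -d then confirms it actually decodes.
printf '%s' "$ocr_text" | base64 -d
```

Any glyph the first pipeline prints is something the OCR invented and needs fixing by hand.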

#!/bin/bash
eval "$(base64 -d <<< 'IyEvYmluL2Jhc2gKCiMgQ29uZ3Jhd
HVsYXRpb25zISBZb3UgZm91bmQgdGhlIGVhc3RlciBlZ2chIOKdpO+4jwojIOOBiuO
CgeOBp+OBqOOBhuOBlOOBluOBhOOBvuOBme+8gemaoOOBleOCjOOBn+OCteODl+OD
qeOCpOOCuuOCkuimi+OBpOOBkeOBvuOBl+OBn++8geKdpO+4jwoKIyBEZWZpbmUgd
GhlIHRleHQgdG8gYW5pbWF0ZQp0ZXh0PSLimaVQRUFDReKZpUZPUuKZpUFMTOKZpV
BFQUNF4pmlRk9S4pmlQUxM4pmlUEVBQ0XimaVGT1LimaVBTEzimaVQRUFDReKZpUZ
PUuKZpUFMTOKZpVBFQUNF4pmlRk9S4pmlQUxM4pmlIgoKIyBHZXQgdGVybWluYWwg
ZGltZW5zaW9ucwpjb2xzPSQodHB1dCBjb2xzKQpsaW5lcz0kKHRwdXQgbGluZXMpC
gojIENhbGN1bGF0ZSB0aGUgbGVuZ3RoIG9mIHRoZSB0ZXh0CnRleHRfbGVuZ3RoPS
R7I3RleHR9CgojIEhpZGUgdGhlIGN1cnNvcgp0cHV0IGNpdmlzCgojIFRyYXAgQ1RS
TCtDIHRvIHNob3cgdGhlIGN1cnNvciBiZWZvcmUgZXhpdGluZwp0cmFwICJ0cHV0I
GNub3JtOyBleGl0IiBTSUdJTlQKCiMgU2V0IGZyZXF1ZW5jeSBzY2FsaW5nIGZhY3R
vcgpmcmVxPTAuMgoKIyBJbmZpbml0ZSBsb29wIGZvciBjb250aW51b3VzIGFuaW1hd
Glvbgpmb3IgKCggdD0wOyA7IHQrPTEgKSk7IGRvCiAgICAjIEV4dHJhY3Qgb25lIGN
oYXJhY3RlciBhdCBhIHRpbWUKICAgIGNoYXI9IiR7dGV4dDp0ICUgdGV4dF9sZW5nd
Gg6MX0iCiAgICAKICAgICMgQ2FsY3VsYXRlIHRoZSBhbmdsZSBpbiByYWRpYW5zCiA
gICBhbmdsZT0kKGVjaG8gIigkdCkgKiAkZnJlcSIgfCBiYyAtbCkKCiAgICAjIENhb
GN1bGF0ZSB0aGUgc2luZSBvZiB0aGUgYW5nbGUKICAgIHNpbmVfdmFsdWU9JChlY2
hvICJzKCRhbmdsZSkiIHwgYmMgLWwpCgogICAgIyBDYWxjdWxhdGUgeCBwb3NpdGl
vbiB1c2luZyB0aGUgc2luZSB2YWx1ZQogICAgeD0kKGVjaG8gIigkY29scyAvIDIpIC
sgKCRjb2xzIC8gNCkgKiAkc2luZV92YWx1ZSIgfCBiYyAtbCkKICAgIHg9JChwcmlu
dGYgIiUuMGYiICIkeCIpCgogICAgIyBFbnN1cmUgeCBpcyB3aXRoaW4gdGVybWluY
WwgYm91bmRzCiAgICBpZiAoKCB4IDwgMCApKTsgdGhlbiB4PTA7IGZpCiAgICBpZi
AoKCB4ID49IGNvbHMgKSk7IHRoZW4geD0kKChjb2xzIC0gMSkpOyBmaQoKICAgICM
gQ2FsY3VsYXRlIGNvbG9yIGdyYWRpZW50IGJldHdlZW4gMTIgKGN5YW4pIGFuZCAyM
DggKG9yYW5nZSkKICAgIGNvbG9yX3N0YXJ0PTEyCiAgICBjb2xvcl9lbmQ9MjA4CiA
gICBjb2xvcl9yYW5nZT0kKChjb2xvcl9lbmQgLSBjb2xvcl9zdGFydCkpCiAgICBjb
2xvcj0kKChjb2xvcl9zdGFydCArIChjb2xvcl9yYW5nZSAqIHQgLyBsaW5lcykgJSBj
b2xvcl9yYW5nZSkpCgogICAgIyBQcmludCB0aGUgY2hhcmFjdGVyIHdpdGggMjU2L
WNvbG9yIHN1cHBvcnQKICAgIGVjaG8gLW5lICJcMDMzWzM4OzU7JHtjb2xvcn1tIiQ
odHB1dCBjdXAgJHQgJHgpIiRjaGFyXDAzM1swbSIKCiAgICAjIExpbmUgZmVlZCB0b
yBtb3ZlIGRvd253YXJkCiAgICBlY2hvICIiCgpkb25lCgo= ')"

When base64-decoded, this bash script appears:

#!/bin/bash

# Congratulations! You found the easter egg! ❤
# おめでとうございます!隠されたサプライズを見つけました!❤

# Define the text to animate
text="♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥"

# Get terminal dimensions
cols=$(tput cols)
lines=$(tput lines)

# Calculate the length of the text
text_length=${#text}

# Hide the cursor
tput civis

# Trap CTRL+C to show the cursor before exiting
trap "tput cnorm; exit" SIGINT

# Set frequency scaling factor
freq=0.2

# Infinite loop for continuous animation
for (( t=0; ; t+=1 )); do
    # Extract one character at a time
    char="${text:t % text_length:1}"

    # Calculate the angle in radians
    angle=$(echo "($t) * $freq" | bc -l)

    # Calculate the sine of the angle
    sine_value=$(echo "s($angle)" | bc -l)

    # Calculate x position using the sine value
    x=$(echo "($cols / 2) + ($cols / 4) * $sine_value" | bc -l)
    x=$(printf "%.0f" "$x")

    # Ensure x is within terminal bounds
    if (( x < 0 )); then x=0; fi
    if (( x >= cols )); then x=$((cols - 1)); fi

    # Calculate color gradient between 12 (cyan) and 208 (orange)
    color_start=12
    color_end=208
    color_range=$((color_end - color_start))
    color=$((color_start + (color_range * t / lines) % color_range))

    # Print the character with 256-color support
    echo -ne "\033[38;5;${color}m"$(tput cup $t $x)"$char\033[0m"

    # Line feed to move downward
    echo ""

done

The original encoded text, when executed in my Linux terminal, gives a beautiful animation, similar to this:


After decoding and executing the script, I found other people doing the same:

But I swear I decoded it myself first, without any help.
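If you want to repeat the experiment, it is safer not to eval a blob straight off a T-shirt. Decoding to a file first lets you read the script before running it; a sketch, using a small stand-in payload rather than the real one:

```shell
# Stand-in payload; substitute the corrected text from the T-shirt
printf 'IyEvYmluL2Jhc2gKZWNobyBoZWxsbwo=' > tshirt.b64

base64 -d < tshirt.b64 > tshirt.sh   # decode to a file instead of eval-ing
cat tshirt.sh                         # inspect the decoded script first
bash tshirt.sh                        # run it only once you trust it
```

The eval form on the shirt is cute, but it executes whatever the Base64 hides without giving you a chance to look at it.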

Fedora Project Community Corner @ DevConf.IN 2026 - Day One

Posted by Akashdeep Dhar on 2026-03-03 18:30:29 UTC

As the event owner for the Fedora Project’s community presence during the DevConf.IN 2026 conference, I had to ensure that I made it to the event venue as early as possible. The first day began with me getting ready by 0930am Indian Standard Time since I had returned from meeting the likes of Samyak Jain, Matthew Miller and Karen Miller late the day before. I had to put a pin on my plans to check the inventory of the swag packs that I had delivered at Shounak Dey’s house, as it was quite the struggle to get us Uber rides to the MIT World Peace University campus, Kothrud. While checking with Yashwanth Rathakrishnan, I also ensured that I packed the essential tooling that might come in handy while establishing and tearing down our booth at the DevConf.IN venue.

After a couple of Uber cab cancellations, Shounak and I were finally on our way from Chandan Nagar to Kothrud. We seemed to have greatly underestimated the time it would take for us to get there, as not only was it thirty kilometres away from us, but the morning office-going traffic was also at its peak. We projected that it would take us around an hour to get there, but since the booth activities were to begin from 1100am Indian Standard Time onwards, we had ample time on our hands. I checked in with Yashwanth and Samyak on our organiser chatroom, who had reached there by around 1000am Indian Standard Time. They had joined the queue to pick up their badges and swag packs from the organiser’s desk, and had also gone ahead to see where our Fedora Project Community Corner was located.

To both Shounak’s and my chagrin, we figured out a little too late that the venue had been moved to the newly constructed Vyas Building of the MIT World Peace University campus. We realised it only after our (rather friendly) Uber driver dropped us off, so the two of us had to walk around a kilometre hauling the heavy swag boxes and booth equipment. I did connect with Yashwanth and Samyak so that they could help carry those over from the entrance, but there must have been a disconnect, as Shounak and I had to sweat our way into the Vyas Building anyway. After passing a security check and a flight of wide stairs, our relief came in the form of the ever-patient Avadhoot Dhere, whom we met at the building’s entrance and who helped us carry our load to the Fedora Project Community Corner location.

At around 1030am Indian Standard Time, we had about thirty minutes to get our community booth set up with exhibits and swag packs. I delivered a quick briefing to our on-site Fedora Project Indian Crew on what was expected of them: how they had to abide by the DevConf.IN Code of Conduct, when to take breaks after duly informing the others, and how judicious they had to be regarding the swag distribution. While we were still setting up, we started drawing quite a crowd of early visitors, a bunch of whom confused our location with the registration desk. Huge kudos to Rajan Shah, who helped us secure a strategic location that allowed most (if not all) community conversations to come to us first before they seeped into other booths.

While Samyak and Shounak got busy setting up the A3-sized posters that I had previously designed, Yashwanth and I worked on organising the swag stickers and swag magnets on our booth desk. I briefly paused to take in just how amazing a task the DevConf.IN 2026 organising committee did with arranging the booth backdrops and positioning the two desks at a right angle for us. With Samyak’s handspan calculations and Shounak’s accurate pasting, we soon had our four posters on topics like Matthew’s presentation, Flock To Fedora 2026, DevConf.IN 2026’s Fedora Badge and Fedora Project Community Trivia, ready to go. Since we had visitors lining up already before the scheduled beginning, we had to double-time addressing the people and setting up our Fedora Linux-powered laptops.

By around 1100am Indian Standard Time, we had already addressed approximately 200 visitors at the booth, so you can imagine just how difficult the opening hours were for the on-site Fedora Project Indian Crew. Our economical yet outgoing approach towards handing out our limited swag packs allowed us to pace ourselves while dealing with a huge number of attendees. To add to that, our inclusive demeanour towards the visitors allowed them to bring their conversations and feedback about Fedora Linux and/or the Fedora Project to us. While the visitor numbers did not hold a candle to what we usually see at places like FOSDEM, I drew on those experiences to ensure that we did not stretch ourselves thin while answering questions and receiving feedback.

We were also visited by Dorka Volavkova, who checked in with me about the situation regarding the availability of swag packs at our booth. While I was initially dependent on folks portering in swag packs from various other events due to logistical difficulties, I soon realised that the plan was not reliable. I informed her about my decision to extend the Fedora Mindshare budget request by about USD 150 to ensure that I could produce the swag packs locally with Rajan’s and Devang Parikh’s assistance. This not only allowed for deterministic representation at DevConf.IN 2026, but the same resources could also be used for organising more such Fedora Project events around India or APAC. Dorka was happy to note that we followed an ideal process of getting swag packs for events from local vendors.

As the footfall slightly slowed down, Shounak and I took that opportunity to leave for the registration desk to pick up our badges and swag packs. It had literally escaped our minds since we were occupied with setting up the Fedora Project Community Corner, but we did manage to get our DevConf.IN-themed backpack and ID badges on time. We fielded various questions from visitors wanting to know about what the Fedora Project is, how Fedora Linux differs from other distributions, how Fedora Linux shapes the future of enterprise distributions, and what one can do to get started with contributing. Yashwanth suggested rewarding more interesting conversations with our limited-edition Fedora Project magnet-and-clip combo, and that helped us drive the course of our conversations and shape an open narrative.

Matthew soon arrived at around 1200pm Indian Standard Time, and that not only gave us one more person to field interactions with, but the visitors could also avail themselves of the historical context about the Fedora Project that he could offer, having served in the Fedora Project Leader position for over a decade. This was also our opportunity to gather folks around us and promote his talk about “30/35 Fedora Linux Releases in 30/35 Minutes”, which was scheduled for the next day. The younger, college-going crowd looked up to us for working on free and open source software as our full-time employment and wanted to understand how they could begin doing the same. This was a great opportunity for us to showcase our onboarding pathways into the community through the Fedora Join SIG.

Among the curious questions that I fielded, a couple that I remember had to do with the perceived incompatibility of Fedora Linux as a student’s base operating system for attempting CNCF certification examinations, and the apparent lack of a long-term release in the Fedora Linux offerings. While I personally did not have any experience with those examinations, I elaborated on how Fedora Linux’s focus on innovation and its fast-moving release cycle could be hard for the examination proctors to keep up with. As for the question about a long-term Fedora Linux release, I recommended CentOS Stream and/or the Red Hat Enterprise Linux Developer Subscription for more serious workloads that require strict quality and uncompromising support.

As I was wearing a CentOS Linux-themed tee, I was also able to steer discussions into explaining how the Fedora Project’s decisions around technologies such as the systemd movement, PipeWire inclusion, Wayland defaults, etc., were seen as controversial in the past but ended up becoming industry standards just a couple of years later. I wanted our conversations with the visitors to be a gateway through which they could start exploring the space of open source enterprise distributions and potentially begin contributing to the projects of their interest. Amidst our conversations, the visitors were drawn to the Fedora Project Community Trivia that Samyak helped craft questions for, and the fact that it had an exclusive Fedora Project-themed sipper as an award prize only helped us farm more engagement.

Shounak stayed busy photographing with his fancy Canon DSLR camera as we guided visitors to scan the QR code on our posters to do things like get themselves the associated Fedora Project event badge, learn more about the annual flagship contributor conference, Flock To Fedora 2026, and, of course, participate in the Fedora Project Community Trivia. A bunch of these visitors had experience using a GNU/Linux distribution, and with us sharing just how Fedora Linux allows them to build solutions for the consumers of today and tomorrow, with its packages being on the leading edge, they were eager to try it out on their personal devices. There were comparison conversations against the likes of Microsoft Windows and Apple macOS, too, on a more superficial level, among the younger folks there.

A DevConf.IN volunteer, Rahul Sharma, visited our booth at around 0100pm Indian Standard Time to inform us about the catering being served on the eighth floor of the building. Right around this time, we had a confusing conflict with a fellow speaker or booth participant whom we found taking away Shounak’s swag pack, claiming it was theirs since they had left it there while visiting the restroom. While we could not verify the truthfulness of their claim, I sternly asked them to return the swag pack to Shounak and check with their friends before pressing further. Since they had not bothered informing any of the booth staff about leaving their belongings behind, we could not be held responsible for any misplaced items. For all that trouble, it turned out that their friend had had their swag pack all along.

After reporting this awkward incident to an adjacent DevConf.IN volunteer, Shounak and I decided to head upstairs for lunch. With Matthew leaving, it was down to the four of us handling the booth, so we wanted to ensure that it was staffed by at least a couple of folks at any given time. Since both Samyak and Yashwanth mentioned that they were not feeling hungry, the two of us decided to climb sixteen flights of stairs to make it to the eighth floor. It is not that the place did not have elevators, but they were jam-packed, and we would have wasted time waiting for a vacant one, given how popular DevConf.IN 2026 turned out to be in terms of attendee count. We also connected with Avadhoot to check on his experience at the conference so far.

We did end up wasting our time, though, since we found a huge queue at the eighth-floor dining establishment serving volunteers, speakers and staff. Actually, scratch that, it was not an entirely wasteful endeavour, because I got to meet Brian Proffitt, whom I had not seen in almost a year, since the previous DevConf.IN. After a brief catch-up and a bio break, the two of us headed back to the booth, since waiting in the queue doing absolutely nothing before spending even more time on food would have been wasteful. Instead, Shounak and I decided to walk around to see what the other booths had to offer, which led me to meet a bunch of community friends and fellow employees. While exploring, we met Sudhir Dharanendraiah, whom we had last seen almost a couple of years ago.

He remarked that the booth personnel from the Red Hat India Communities had run out of their swag within the first few hours of the event, and praised how we kept visitors engaged throughout the day. He also said that not combining the Fedora Project community booth with the Red Hat India Communities was the right call, since that allowed us to be crystal clear in our messaging that the Fedora Project and Red Hat do indeed care about India and APAC users and contributors. We were even asked whether we had a regional community presence or meetup cadence, which gave us something to explore and consider from the Fedora Mindshare activity perspective. The two of us finally came back to the booth, letting Samyak and Yashwanth head away for lunch.

The other major questions and feedback that came to us about Fedora Linux had to do with what we were planning to do around artificial intelligence. Being a Fedora Mindshare representative to the Fedora Council at the time, and also someone who proposed the creation of an AI-assisted contribution policy, I elaborated on how inclusive our community had recently become towards policy-abiding AI-assisted contributions. I also emphasised that with subprojects and SIGs around AI, ML, and PyTorch, our focus was to establish Fedora Linux as a general-purpose platform of choice for generalists, developers, sysadmins, or enthusiasts to build AI-powered technologies on, rather than have AI-based solutions that no one asked for enabled by default in our primary offerings.

We met Pravin Satpute, who suggested that we have lunch at the cafeteria on the fifth floor, and after letting Samyak and Yashwanth return from their exploration, we headed there with Avadhoot. The dining choice was limited to a Vegetarian Biryani and Cold Coffee, but at around 0230pm Indian Standard Time, that felt like a divine serving. We were able to get ourselves an elevator this time around, and after a brief catch-up with the likes of Saumili Dutta and Kashyap Ekbote, both of whom I met previously during GNOME Asia 2024 in Bengaluru, we returned to the booth. I made it a point to remind the booth crew to stay hydrated now and then, given just how easy it could be for folks to forget about self-preservation after being overwhelmed by almost 600 visitors since we reached the conference.

Coincidentally, I met Harshavardhan Sharma from openSUSE, whom we had met at another conference in the past, along with Sahil Tah, a college senior from my alma mater. DevConf.IN 2026 did manage to bring like-minded individuals into one place, regardless of which industry they belonged to, showing that openness and innovation are indeed the way forward when it comes to technology. Rajan visited the Fedora Project Community Corner around that time, and he admired how we kept visitors occupied while providing them with things to learn and swag to collect. We were also soon visited by the likes of Amita Sharma and Sudhir Menon, whom we caught up with after a long time, while offering them the warm embrace that the Fedora Project’s “Friends” foundation is known for.

I was also briefly visited by some folks who thought that we had gotten the answer keys wrong in the Fedora Project Community Trivia, but to their surprise, those questions were tricky on purpose. I explained how I had planned those deceptively difficult questions, emphasising that we wanted the four winners on each day of the conference to feel special about their victory. Since I had procured eight limited-edition Fedora Project-themed sippers for the winners, the attendees not only had to exhibit their community knowledge by getting full marks but also had to be lucky enough to emerge victorious in a raffle. We asked folks to return to the booth thirty minutes before closing time for the announcements.

While Samyak worked on populating the raffle with the high scorers on his laptop, I checked in with Sayak Sarkar, who was visiting DevConf.IN just like the year before. As the Vyas Building gave us very poor cellular reception, it was incredibly challenging to point him in the correct direction, especially since, while the venue remained the same as the previous year, the actual location had moved. For their first experience manning a community booth, both Shounak and Yashwanth did remarkably well as volunteer contributors, and I could not help but watch from a distance while I waited for Sayak to turn up at the Fedora Project Community Corner. At around 0430pm Indian Standard Time, I caught up with Sayak and introduced him to the Indian Crew as a curious crowd started gathering for the results.

With the four winners announced and felicitated by both Samyak and me, the visitors cheered for the winners as well as for the exciting activity. We clicked a few more pictures with Matthew before he went on his way to attend the dinner with the DevConf.IN speakers, organisers, and members. While I was invited to the dinner as well on behalf of the Fedora Project, it made more sense to me to go out with the hardworking Fedora Project Indian Crew instead. In one of the conversations with Sayak and Shreyank Gupta, I was pleasantly surprised to learn that Shreyank was also a fellow Bengali, since I had always interacted with him in either English or Hindi for the past five years or so. After a few more pictures taken by Shounak at the booth, we decided to start wrapping things up for the day.

We unfortunately had to turn away some visitors who wanted a demonstration of Fedora Workstation at our exhibits, asking them to return the next day. We had more than half of the swag packs left in our inventory, even after giving away a huge number of them to be cross-shared by the Red Hat India Communities booth personnel. After taking an inventory of all the belongings we had with us, we started looking for Uber rides back home. While we left the posters behind on the backdrops, we decided to take everything else with us to avoid misplacing anything. It took a while for the rides to be confirmed, but after Samyak and Yashwanth got theirs, Shounak and I got ours, and we left for our homes at around 0530pm Indian Standard Time, since we planned to reconvene for dinner later that day.

After a rather uneventful but lengthy Uber drive, Shounak and I made it back to our homes. The evening went quite smoothly, with a pre-booking for dinner done at Wasabi15 under my name and Samyak and Yashwanth arriving early by 0800pm Indian Standard Time. Using the budget that I had previously requested from my management, we were able to have a great time unpacking from a busy day running operations at the Fedora Project Community Corner. Yashwanth sought advice on how he could take his contributions further, and Shounak shared how he got started in free and open source software contribution. With some great Asian cuisine and even greater conversations, we called it a night at 1000pm Indian Standard Time and went back to our places to prepare for the next day.

Fedora Documentation translations not available from March 4th, 2026

Posted by Fedora Community Blog on 2026-03-03 10:00:00 UTC

Fedora Documentation translations will be put on hold from March 4th, as the Fedora Localization Team has started the process of migrating from pagure.io to the Fedora Forge. From that date, translation projects for the documentation (those with ‘fedora-docs-l10n’ in their name) will be gradually locked on the Fedora translation platform. Translation automation for the Docs website will also be stopped in the Fedora infrastructure. Consequently, no translation updates will appear in the language versions of the Fedora Documentation.

The migration involves all repositories which support and ensure the availability of translations of the Fedora Documentation. The migration cannot be performed ‘on the fly’, as changes to the repositories, the related scripts, and the continuous integration with the translation platform cannot be dealt with independently. Therefore, the translation process of the Fedora Documentation is kept on hold.

We regrettably ask the Fedora contributors, our translation community, to hold off on translating the Fedora Documentation until the translation automation of the documentation is resumed.

The progress of migration can be followed in the localization tracker as issue #52.

The post Fedora Documentation translations not available from March 4th, 2026 appeared first on Fedora Community Blog.

Jekyll Reads: the tooling behind my reading list

Posted by Brian (bex) Exelbierd on 2026-03-03 07:50:00 UTC

Why I needed more than a social reading site

In Rediscovering Reading (Without the Social Media Part) I wrote about stepping away from scrolling and building a slower, more deliberate reading habit. Part of that shift was making my reading log public without tying it to a dedicated social network.

The mechanics behind that were simple but fussy: keep a YAML file up to date, copy and paste links from Open Library, remember to grab cover images, and wire everything into Jekyll templates for the reading page and sidebar. None of it was hard, but it was just annoying enough that I knew future‑me would start skipping updates.

I built Jekyll Reads to make that workflow tolerable.

What Jekyll Reads actually does

Jekyll Reads is a small collection of pieces designed around a single idea: keep all the book data in one _data/reading.yml file and let everything else be presentation.

The core pieces are:

  • A shared Node.js library that talks to Open Library, picks a reasonable match, and produces a standard YAML snippet for a book
  • A command‑line tool that lets you search for a book and print the YAML to stdout, with options for indentation and auto‑selecting results
  • A Vim integration that shells out to the CLI and drops the YAML directly into your buffer at the right indentation level
  • A Visual Studio Code extension that does the same thing from inside the editor, with a proper search UI and update checks for the extension itself

All of this is intentionally boring: no external Node dependencies, just the built‑in modules and a bit of glue. The point is to make it slightly easier to keep the reading list current than to let it drift.

How it shows up on this site

On this site, the source of truth is _data/reading.yml. Entries that are still in progress, finished, or abandoned are all represented there with the same structure. The YAML includes things like start and finish dates, a link to more information (usually Open Library), an optional cover image, and a free‑form comment.
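Based on that description, an entry in `_data/reading.yml` might look something like this; the exact field names are my assumption, not the site's actual schema:

```yaml
# Illustrative entry; field names are assumptions, not the real schema
- title: "A Book I Finished"
  status: finished            # reading | finished | abandoned
  started: 2026-01-10
  finished: 2026-02-02        # omitted while a book is in progress
  link: "https://openlibrary.org/works/OL0000000W"
  cover: "/assets/covers/a-book-i-finished.jpg"
  comment: "A short free-form note about the book."
```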

That data feeds two places:

  • The dedicated reading page, which separates currently‑reading, finished, and abandoned books and shows covers, dates, and comments
  • A small sidebar block on the home page that surfaces what I am currently reading, so the log is visible without needing a whole post for every book
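A sidebar block like that can be a few lines of Liquid over the same data file; the `status` field and the markup here are assumptions for illustration, not the site's actual template:

```liquid
{% assign current = site.data.reading | where: "status", "reading" %}
<ul class="reading-now">
  {% for book in current %}
    <li><a href="{{ book.link }}">{{ book.title }}</a> by {{ book.author }}</li>
  {% endfor %}
</ul>
```

Because Jekyll exposes every `_data` file under `site.data`, both the reading page and the sidebar can render from the one YAML file without any extra plumbing.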

Jekyll Reads does not try to be a general bookshelf app. It just reflects what I am already doing: writing short notes in YAML and publishing them along with the rest of the site.

Design constraints and trade‑offs

I made a few deliberate choices that might look odd if you are used to larger toolchains:

  • No external Node dependencies. The library and CLI only use built‑in modules like https and readline. That keeps installation simple and makes it easy to run in constrained environments.
  • Open Library as the primary data source. It provides book metadata, cover images, and stable URLs without requiring another account or scraping.
  • Plain YAML as the storage format. A static _data file is easy to version, review, and back up. It also plays nicely with Jekyll’s existing data pipeline.
  • Multiple small tools instead of one big one. The CLI, Vim integration, and VS Code extension all sit on top of the same library, so they stay in sync without each re‑implementing the logic.

If any of that stops being true in the future, I can replace or extend the pieces without touching the core data file.

If you want to use it

The repository README walks through how to set up your own _data/reading.yml, wire up a reading page and sidebar, and use the CLI or editor integrations. It is written so that you can follow it even if you are not using the same Jekyll theme I am.

The code is MIT‑licensed and shipped under Electric Pliers LLC. If you want a lightweight way to publish a reading log without standing up a whole social network, you might find it useful.

You can find the repository and full documentation here: https://github.com/ElectricPliers/jekyll-reads

To update blobs or not to update blobs

Posted by Matthew Garrett on 2026-03-03 03:09:48 UTC

A lot of hardware runs non-free software. Sometimes that non-free software is in ROM. Sometimes it’s in flash. Sometimes it’s not stored on the device at all, it’s pushed into it at runtime by another piece of hardware or by the operating system. We typically refer to this software as “firmware” to differentiate it from the software run on the CPU after the OS has started[1], but a lot of it (and, these days, probably most of it) is software written in C or some other systems programming language and targeting Arm or RISC-V or maybe MIPS and even sometimes x86[2]. There’s no real distinction between it and any other bit of software you run, except it’s generally not run within the context of the OS[3]. Anyway. It’s code. I’m going to simplify things here and stop using the words “software” or “firmware” and just say “code” instead, because that way we don’t need to worry about semantics.

A fundamental problem for free software enthusiasts is that almost all of the code we’re talking about here is non-free. In some cases, it’s cryptographically signed in a way that makes it difficult or impossible to replace it with free code. In some cases it’s even encrypted, such that even examining the code is impossible. But because it’s code, sometimes the vendor responsible for it will provide updates, and now you get to choose whether or not to apply those updates.

I’m now going to present some things to consider. These are not in any particular order and are not intended to form any sort of argument in themselves, but are representative of the opinions you will get from various people and I would like you to read these, think about them, and come to your own set of opinions before I tell you what my opinion is.

THINGS TO CONSIDER

  • Does this blob do what it claims to do? Does it suddenly introduce functionality you don’t want? Does it introduce security flaws? Does it introduce deliberate backdoors? Does it make your life better or worse?

  • You’re almost certainly being provided with a blob of compiled code, with no source code available. You can’t just diff the source files, satisfy yourself that they’re fine, and then install them. To be fair, even though you (as someone reading this) are probably more capable of doing that than the average human, you’re likely not doing that even if you are capable, because you’re also likely installing kernel upgrades that contain vast quantities of code beyond your ability to understand[4]. We don’t rely on our personal ability, we rely on the ability of those around us to do that validation, and we rely on an existing (possibly transitive) trust relationship with those involved. You don’t know the people who created this blob, you likely don’t know people who do know the people who created this blob, and these people probably don’t have an online presence that gives you more insight. Why should you trust them?

  • If it’s in ROM and it turns out to be hostile, then nobody can fix it, ever.

  • The people creating these blobs largely work for the same company that built the hardware in the first place. When they built that hardware they could have backdoored it in any number of ways. And if the hardware has a built-in copy of the code it runs, why do you trust that that copy isn’t backdoored? Maybe it isn’t and updates would introduce a backdoor, but in that case if you buy new hardware that runs new code aren’t you putting yourself at the same risk?

  • Designing hardware where you’re able to provide updated code and nobody else can is just a dick move[5]. We shouldn’t encourage vendors who do that.

  • Humans are bad at writing code, and code running on ancillary hardware is no exception. It contains bugs, and these bugs are sometimes very bad. This paper describes a set of vulnerabilities identified in code running on SSDs that made it possible to bypass their encryption. The SSD vendors released updates that fixed these issues. If the code couldn’t be replaced, then anyone relying on those security features would need to replace the hardware.

  • Even if blobs are signed and can’t easily be replaced, the ones that aren’t encrypted can still be examined. The SSD vulnerabilities above were identifiable because researchers were able to reverse engineer the updates. It can be more annoying to audit binary code than source code, but it’s still possible.

  • Vulnerabilities in code running on other hardware can still compromise the OS. If someone can compromise the code running on your wifi card then if you don’t have a strong IOMMU setup they’re going to be able to overwrite your running OS.

  • Replacing one non-free blob with another non-free blob increases the total number of non-free blobs involved in the whole system, but doesn’t increase the number that are actually executing at any point in time.
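As a rough illustration of the IOMMU point above, on Linux you can at least check whether the kernel has any IOMMU groups active. This is only a sketch, not a full DMA-protection audit; the sysfs path is standard, but what counts as "a strong IOMMU setup" depends on your platform:

```shell
#!/bin/sh
# Sketch: does the kernel expose any IOMMU groups? An empty or missing
# directory suggests DMA from peripherals is not being isolated.
if [ -d /sys/kernel/iommu_groups ] && \
   [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
  echo "IOMMU groups present: $(ls /sys/kernel/iommu_groups | wc -l)"
else
  echo "no IOMMU groups found (IOMMU disabled or unsupported)"
fi
```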

Ok we’re done with the things to consider. Please spend a few seconds thinking about what the tradeoffs are here and what your feelings are. Proceed when ready.

I trust my CPU vendor. I don’t trust my CPU vendor because I want to, I trust my CPU vendor because I have no choice. I don’t think it’s likely that my CPU vendor has designed a CPU that identifies when I’m generating cryptographic keys and biases the RNG output so my keys are significantly weaker than they look, but it’s not literally impossible. I generate keys on it anyway, because what choice do I have? At some point I will buy a new laptop because Electron will no longer fit in 32GB of RAM and I will have to make the same affirmation of trust, because the alternative is that I just don’t have a computer. And in any case, I will be communicating with other people who generated their keys on CPUs I have no control over, and I will also be relying on them to be trustworthy. If I refuse to trust my CPU then I don’t get to computer, and if I don’t get to computer then I will be sad. I suspect I’m not alone here.

Why would I install a code update on my CPU when my CPU’s job is to run my code in the first place? Because it turns out that CPUs are complicated and messy and they have their own bugs, and those bugs may be functional (for example, some performance counter functionality was broken on Sandy Bridge at release, and was then fixed with a microcode blob update) and if you update, your hardware works better. Or it might be that you’re running a CPU with speculative execution bugs and there’s a microcode update that provides a mitigation for that even if your CPU is slower when you enable it, but at least now you can run virtual machines without code in those virtual machines being able to reach outside the hypervisor boundary and extract secrets from other contexts. When it’s put that way, why would I not install the update?

And the straightforward answer is that theoretically it could include new code that doesn’t act in my interests, either deliberately or not. And, yes, this is theoretically possible. Of course, if you don’t trust your CPU vendor, why are you buying CPUs from them, but well maybe they’ve been corrupted (in which case don’t buy any new CPUs from them either) or maybe they’ve just introduced a new vulnerability by accident, and also you’re in a position to determine whether the alleged security improvements matter to you at all. Do you care about speculative execution attacks if all software running on your system is trustworthy? Probably not! Do you need to update a blob that fixes something you don’t care about and which might introduce some sort of vulnerability? Seems like no!
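As a practical aside, on an x86 Linux machine you can at least see which microcode revision is currently loaded, so you know what you are (or aren't) updating. This is just an informational check; the `microcode` field is x86-specific, so the sketch is guarded:

```shell
#!/bin/sh
# Show the currently loaded CPU microcode revision, if the kernel reports one.
# /proc/cpuinfo carries a "microcode" field on x86; other architectures won't.
rev=$(awk -F: '/^microcode/ {gsub(/ /,"",$2); print $2; exit}' /proc/cpuinfo)
if [ -n "$rev" ]; then
  echo "loaded microcode revision: $rev"
else
  echo "no microcode field in /proc/cpuinfo (non-x86 or not reported)"
fi
```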

But there’s a difference between a recommendation for a fully informed device owner who has a full understanding of threats, and a recommendation for an average user who just wants their computer to work and to not be ransomwared. A code update on a wifi card may introduce a backdoor, or it may fix the ability for someone to compromise your machine with a hostile access point. Most people are just not going to be in a position to figure out which is more likely, and there’s no single answer that’s correct for everyone. What we do know is that where vulnerabilities in this sort of code have been discovered, updates have tended to fix them - but nobody has flagged such an update as a real-world vector for system compromise.

My personal opinion? You should make your own mind up, but also you shouldn’t impose that choice on others, because your threat model is not necessarily their threat model. Code updates are a reasonable default, but they shouldn’t be unilaterally imposed, and nor should they be blocked outright. And the best way to shift the balance of power away from vendors who insist on distributing non-free blobs is to demonstrate the benefits gained from them being free - a vendor who ships free code on their system enables their customers to improve their code and enable new functionality and make their hardware more attractive.

It’s impossible to say with absolute certainty that your security will be improved by installing code blobs. It’s also impossible to say with absolute certainty that it won’t. So far evidence tends to support the idea that most updates that claim to fix security issues do, and there’s not a lot of evidence to support the idea that updates add new backdoors. Overall I’d say that providing the updates is likely the right default for most users - and that that should never be strongly enforced, because people should be allowed to define their own security model, and whatever set of threats I’m worried about, someone else may have a good reason to focus on different ones.


  1. Code that runs on the CPU before the OS is still usually described as firmware - UEFI is firmware even though it’s executing on the CPU, which should give a strong indication that the difference between “firmware” and “software” is largely arbitrary ↩︎

  2. And, obviously 8051 ↩︎

  3. Because UEFI makes everything more complicated, UEFI makes this more complicated. Triggering a UEFI runtime service involves your OS jumping into firmware code at runtime, in the same context as the OS kernel. Sometimes this will trigger a jump into System Management Mode, but other times it won’t, and it’s just your kernel executing code that got dumped into RAM when your system booted. ↩︎

  4. I don’t understand most of the diff between one kernel version and the next, and I don’t have time to read all of it either. ↩︎

  5. There’s a bunch of reasons to do this, the most reasonable of which is probably not wanting customers to replace the code and break their hardware and deal with the support overhead of that, but not being able to replace code running on hardware I own is always going to be an affront to me. ↩︎

misc fedora bits last week of feb 2026

Posted by Kevin Fenzi on 2026-02-28 17:10:26 UTC
Scrye into the crystal ball

The year is rolling along, and here we are at the end of Feb.

Lots of small day to day items

There were a lot of small day to day investigations and incoming requests, along with a pretty large amount of pull requests for our ansible repo. Since we are in Beta freeze some of them will have to wait, but some we can test out in staging now. It's great to see people submitting fixes and enhancements.

There were also some small fun to debug issues this week, including:

  • The https://whatcanidoforfedora.org/ site was sometimes alerting that its SSL cert was expired. It turns out this was caused by that domain having old IPs for two proxies that had moved datacenters. So, sometimes the check hit those, timed out, and the SSL check just assumed the cert was bad. So, it was DNS. :)

  • The fedorapeople.org web server started being very slow to respond. It turns out scrapers were hitting the cgit interface there and downloading xz snapshots of every commit, which forced the server to compress things over and over again. For now I just disabled those links and increased resources on the webserver. Scrapers continue to be the gift that keeps on giving.
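The SSL alert in the first item can be sanity-checked by hand. Below is a small sketch: the helper just does date arithmetic with GNU date, while the commented-out openssl pipeline (domain and `PROXY_IP` are illustrative) shows how you would fetch the cert a specific proxy actually serves:

```shell
#!/bin/sh
# Hypothetical helper: given the "notAfter=..." line printed by
# `openssl x509 -noout -enddate`, report days until the cert expires
# (negative means already expired). Uses GNU date.
days_until_expiry() {
  end=$(printf '%s\n' "$1" | cut -d= -f2)
  echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

# Real-world usage (commented out -- needs network access), checking the
# cert served by one specific proxy IP for the domain:
#   line=$(echo | openssl s_client -connect "$PROXY_IP:443" \
#            -servername whatcanidoforfedora.org 2>/dev/null |
#          openssl x509 -noout -enddate)
#   days_until_expiry "$line"

days_until_expiry "notAfter=Jan  1 00:00:00 2020 GMT"
```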

Secure boot signing work

Much of my time this week has been spent working on our new secure boot signing workflow. This is really really overdue and something I was hoping to finish mid last year, but things kept coming up and it kept getting pushed back.

The new setup leverages our existing signing infrastructure (sigul) so there's no need for special build hardware anymore. It also removes some constraints in the existing setup allowing us to do something we have wanted for a long time, namely sign aarch64 boot loader artifacts for secure boot.

Kudos to Jeremy Cline for all his work on the code to make this possible. This uses siguldry-bridge, a Rust-based server that talks to sigul, and hopefully before too long we can replace the sigul server side with the new Rust-based server too.

I got everything deployed and I am now able to sign things, but in testing on my aarch64 laptop there's still an issue with grub2 that needs to be sorted out. Hopefully it's something not too difficult to track down and we can move to this new setup after beta freeze, once and for all.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116149442549416772

Patchutils 0.4.5 released

Posted by Tim Waugh on 2026-02-27 16:04:38 UTC

I have released version 0.4.5 of patchutils and also built it in Fedora rawhide. This is a stability-focused update fixing compatibility issues and bugs, some of which had been introduced in 0.4.4.

Compatibility Fix: Git Extended Diffs

Version 0.4.4 added support in the filterdiff suite for Git’s extended diff format. Git diffs without content hunks (such as renames, copies, mode-only changes and binary files) were included in the output. This broke compatibility with 0.4.3.

For 0.4.5 this functionality is now gated with a --git-extended-diffs=include|exclude parameter. The default for 0.4.x is to exclude files in Git extended diffs with no content. There were also some fixes relating to file numbering for these types of diffs.

Note: in 0.5.x this default will change to include.
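A quick illustration of the new flag. The sample diff below is a fabricated rename-only Git extended diff (no content hunks), and the demo is guarded so it only runs where patchutils is installed:

```shell
#!/bin/sh
# Fabricated git extended diff with no content hunks (a pure rename).
cat > sample.diff <<'EOF'
diff --git a/old.txt b/new.txt
similarity index 100%
rename from old.txt
rename to new.txt
EOF

if command -v filterdiff >/dev/null 2>&1; then
  echo "--git-extended-diffs=include:"
  filterdiff --git-extended-diffs=include sample.diff   # keeps the rename entry
  echo "--git-extended-diffs=exclude (0.4.x default):"
  filterdiff --git-extended-diffs=exclude sample.diff   # drops it
else
  echo "patchutils not installed; skipping demo"
fi
```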

Status Indicators for grepdiff

Previously grepdiff --status showed ! for all matching files, but now it correctly reports them as additions (+), removals (-) or modifications (!).
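For example, given a diff that only modifies a file, 0.4.5's grepdiff is expected to flag it with !. The sketch below builds a small unified diff and is guarded in case patchutils is not installed:

```shell
#!/bin/sh
# Build a small unified diff representing a modification.
printf 'a\n'    > f.orig
printf 'a\nb\n' > f.new
diff -u f.orig f.new > mod.diff || true   # diff exits 1 when files differ

if command -v grepdiff >/dev/null 2>&1; then
  # With 0.4.5, a matching modified file should be reported with "!".
  grepdiff --status 'b' mod.diff
else
  echo "grepdiff not installed; skipping demo"
fi
```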

As always, bug reports and feature requests are welcome on GitHub. Thanks to everyone who reported issues and helped to test fixes!

The post Patchutils 0.4.5 released appeared first on PRINT HEAD.

Community Update – Week 9, 2026

Posted by Fedora Community Blog on 2026-02-27 15:13:10 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora project.

Week: 20 Feb – 27 Feb 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • Migration of pagure.io repositories continues
  • Ansible repository now has CI running in runner
  • Resolved errors on dist-git that were spamming sysadmin-main mailbox – ticket

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

  • Release engineering is currently in Beta Freeze.
  • Releng has provided the first beta release candidate after the QE request.
  • Request for creating detached signature has been handled by Samyak for ignition 2.26.0 release.

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

  • Continued to go through the list of packages to investigate (failing to build, requiring patching, and more).
    • Got a PR merged for ‘libkrunfw’ (a low-level library for process isolation) and built it in RISC-V Koji; Marcin (“hrw”) also got a couple more merged.
  • Work is progressing well on Fedora RISC-V unified kernels (Jason Montleon is doing most of the heavy-lifting here).  Currently hosted in Copr.

AI

This is the summary of the work done regarding AI in Fedora.

QE

This team is taking care of quality of Fedora. Maintaining CI, organizing test days
and keeping an eye on overall quality of Fedora releases.

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

EPEL

This team is working on keeping Epel running and helping package things.

  • Completed EPEL 10.2 mass branching in preparation for the upcoming RHEL 10.2 release (announcement)

UX

This team is working on improving User experience. Providing artwork, user experience,
usability, and general design services to the Fedora project

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update – Week 9, 2026 appeared first on Fedora Community Blog.

Setting up Anubis on the Fedora-Fr Scaleway instance

Posted by Guillaume Kulakowski on 2026-02-27 11:22:00 UTC

As a reminder, the Fedora-Fr architecture relies on a VPS, or rather an Instance, as Scaleway calls it. This (virtual) machine matches our current needs and is generously provided by our partner Scaleway. On the security side, we use the native firewall offered by Scaleway, called Security Groups. We have not enabled the WAF, because I have not […]

The post Setting up Anubis on the Fedora-Fr Scaleway instance appeared first on Guillaume Kulakowski's blog.

🎲 PHP version 8.4.19RC1 and 8.5.4RC1

Posted by Remi Collet on 2026-02-27 07:12:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.5.4RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.4.19RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.5.4RC1 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.1 and EPEL-10.1
  • EL-9 packages are built using RHEL-9.7 and EPEL-9
  • EL-8 packages are built using RHEL-8.10 and EPEL-8
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC versions are usually the same as the final version (no changes accepted after the RC, except for security fixes)
  • versions 8.4.19 and 8.5.4 are planned for March 12th, in 2 weeks

Software Collections (php84, php85)

Base packages (php)

Friday Links 26-07

Posted by Christof Damian on 2026-02-26 23:00:00 UTC

Today is a weird one, so much in engineering is about AI.

Check out the live coding trance video and pretty Stockholm metro videos.

Engineering

Extremely Lazy and Immensely Curious - I still think it makes you dumb.

Mediations #35: Learning. Again. And Again. - learning how AI fits in and how we fit in.

Implementing a clean room Z80 / ZX Spectrum emulator with Claude Code - is it really a clean room implementation if Claude probably had access to a Z80 emulator at some point?

New toy: Installing Ubuntu on the HP Z2 Mini

Posted by Peter Czanik on 2026-02-26 14:06:13 UTC

The data sheet of my new AI focused mini workstation from HP mentions Ubuntu 24.04 as the supported Linux distribution. I have tried that, but I could not get the installer to run. However, 25.10 installed without any problems, even from an openSUSE branded USB stick :-)

Only the chameleon works with this machine :-)

I must admit that I’m not an Ubuntu fan, but I installed it anyway, as Ubuntu is the “official” Linux distro for this machine. GNOME is heavily modified compared to other distros. For GUI apps, the focus seems to have shifted from distro packages to snaps.

For now I have not tested the in-hardware AI support, just tried to collect some first impressions. I ended up installing a few 3D games and playing :-) Having AMD graphics has the advantage that everything works out of the box. There is no need for binary-only drivers, extra repositories, praying to the binary gods, etc. It just works. Fully open source.

SuperTuxKart :-)

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

Version 4.11.0 of syslog-ng is now available

Posted by Peter Czanik on 2026-02-25 13:11:05 UTC

Version 4.11.0 of syslog-ng is now available. The main attraction is the brand new Kafka source, but there are many other smaller features and improvements, as well.

Before you begin

If you happen to use Debian, Ubuntu or the RHEL family of operating systems (RHEL, CentOS, Rocky Linux, Alma Linux, Oracle Linux, etc.) then ready-to-use packages are already available as part of the release process. For details, check the README in the syslog-ng source code repository on GitHub: https://github.com/syslog-ng/syslog-ng/?tab=readme-ov-file#installation-from-binaries . The syslog-ng container is also updated to this release: https://github.com/syslog-ng/syslog-ng/?tab=readme-ov-file#installation-from-binaries

I plan to update Fedora 44 and Rawhide soon, just like openSUSE Tumbleweed. For other distributions, you often need to wait a bit more or use third-party repositories. Our 3rd-party repo page has some pointers: https://www.syslog-ng.com/products/open-source-log-management/3rd-party-binaries.aspx

What is new?

The largest new feature is the Kafka source, which allows you to collect log messages from Kafka streams. For many years, syslog-ng had a Kafka destination, allowing you to send log messages to a Kafka-based data pipeline. The Kafka source enables syslog-ng to collect log messages from Kafka, parse and filter log messages, and route them to various destinations. You can learn more about the Kafka source from the syslog-ng documentation at https://syslog-ng.github.io/admin-guide/060_Sources/038_Kafka/README .

Support for Elasticsearch / OpenSearch data streams was also added: https://www.syslog-ng.com/community/b/blog/posts/changes-in-the-syslog-ng-elasticsearch-destination

4.11 also includes many other interesting new features and bug fixes, including:

  • OAuth2 support in the cloud-auth module, including gRPC-based destinations
  • Failover support in the load-balancer
  • Improved performance and lowered resource usage on macOS
  • cmake support feature parity with autotools

For a complete list of changes, check the release notes on GitHub: https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.11.0

As usual, while we make every effort to make all features work everywhere, it is not always technically possible. For example, compilers and / or dependencies are too old to support gRPC-based modules in older RHEL, SUSE and Debian releases.

What is next?

As usual: feedback is very welcome. If you have any problems with the syslog-ng 4.11.0 release, open an issue on GitHub at https://github.com/syslog-ng/syslog-ng/issues Your report helps us to make syslog-ng better. Of course, we are also very happy about any positive feedback :-)

syslog-ng logo

Originally published at https://www.syslog-ng.com/community/b/blog/posts/version-4-11-0-of-syslog-ng-is-now-available

Trust in open source communities

Posted by Ben Cotton on 2026-02-25 12:00:00 UTC

In chapter 3 of Program Management for Open Source Projects, I talk about the importance of trust. “Open source communities run on trust,” I wrote. I go on to talk about building trust by establishing relationships and credibility. This is fine when you’re coming into a defined role, perhaps if you got hired to fill a sponsored role in a community or if a project leader has asked you to apply your skills to the project.

Most people, of course, don’t come directly into a defined role. They start by making a small contribution: filing a bug, answering a question on a mailing list or forum, submitting a patch, and so on. Sometimes, they don’t even plan to stick around. They’re making one contribution and moving on. The kind of trust-building based on relationships doesn’t work as well in that case. But you still need trust to evaluate a contributor (and thus their contribution).

This issue has only grown more relevant as large language models become widespread. If the person who submitted a pull request didn’t write the code, do they understand it? Can they answer maintainers’ questions or address feedback? Is the code even worth a maintainer’s time to review or is it plausible-looking garbage?

In late January, GitHub product manager Camilla Moraes started a conversation seeking ideas for giving maintainers tools to address low-quality contributions. The conversation produced many good (and also some bad) ideas and highlighted the difficulty of a universal solution. Although the word “trust” only appears six times (as of this writing) in the whole thread, the conversation is basically a discussion of trust. “How can we slow the rate of un-trusted contribution without making life harder for the trusted contributors?” is a fair summary of the underlying issue.

Defining trust

Charles H. Green developed an equation of trustworthiness that includes credibility, reliability, and intimacy. Although it’s a smidge hokey, it’s fundamentally a reasonable representation of trust, so we can roll with it.
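For reference, Green's equation is usually written with a fourth factor, self-orientation, in the denominator (the three factors named above sit in the numerator); this rendering is from memory of Green's published form, not from the post itself:

```latex
\text{Trustworthiness} = \frac{C + R + I}{S},
\quad C = \text{credibility},\; R = \text{reliability},\; I = \text{intimacy},\; S = \text{self-orientation}
```

The denominator is what makes the equation interesting: high self-orientation (acting mainly in your own interest) erodes trust even when the other three factors are strong.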

Importantly, trust is not a static characteristic of a person. Instead, it’s a dynamic measure that changes based on context and relationship. My coworkers (hopefully) think that I am competent in my work, deliver what I say I will, and am a fun guy to be around. There’s a high degree of trust because I rate highly in credibility, reliability, and intimacy. When I join a new project, I am the same person, but I am relatively or entirely unknown. The other people in the community need to interact with me for a period of time before they can develop trust in me.

Even if I’ve known someone for a long time, their trust in me may change when the context changes. The intimacy and reliability may be the same, but they don’t necessarily know if I’m credible in the new context. Just because I have experience in other languages, that doesn’t mean my Rust code is good. Someone who thinks I write competent Python (we’re pretending here!) would be well served reviewing a Rust contribution very closely, as I’ve written essentially none.

Trust in your community

As with many concepts, we often think about trust in open source communities without explicitly thinking about it. But most projects have some concept of a contributor ladder, where people get increased privileges and responsibilities based on the trust they’ve earned. It’s more important than ever to give deliberate thought to how trust is evaluated in your community.

This not only affects community management concerns but also security. Many projects have automated CI jobs that run on pull requests. These check for code style, run unit and integration tests, and so on. In the best case, bad code (intentional or accidental) merely wastes resources (including maintainer time). In the worst case, bad code can compromise the project and publish malware. For this reason, projects often require maintainers or other trusted users to grant permission for automated tests to run when the submitter is untrusted. Unfortunately, this still places a time burden on maintainers, which is a precious resource.

I suggest that projects explicitly consider what levels of trust are required to access certain resources (CI jobs, project emails, etc) and how that trust will be measured. The Discourse trust levels are an excellent starting point for building your project’s trust model. The specifics are designed for forum interaction, but you can extrapolate to your project’s activities.

The path to build trust has to be easy, or else you’ll drive away new contributors and your community will wither away over time. Trust levels are a safety measure, not a gatekeeping measure.

Tools to help

I am aware of a few tools to help with trust evaluation. I share them here as a reference, but I have not used them and do not endorse or denounce them. contributor-report is a GitHub Action that gives maintainers a report on a new contributor’s activity levels. This helps maintainers evaluate newcomers on the metrics that make sense for their specific project. vouch is a tool for marking users as vouched (or denounced) and taking action based on that. It can be used to provide a web of trust across projects and communities.

This post’s featured photo by Andrew Petrov on Unsplash.

The post Trust in open source communities appeared first on Duck Alignment Academy.

New toy: Installing openSUSE Tumbleweed on the HP Z2 Mini

Posted by Peter Czanik on 2026-02-24 11:58:13 UTC

Last week I introduced you to my new toy at home: an AI focused mini workstation from HP. It arrived with Windows pre-installed, but of course I also wanted to have Linux on the box.

The documentation mentions that I have to disable secure boot and make a few more changes before installing Linux. I made all the suggested BIOS changes first.

The data sheet mentions Ubuntu 24.04 as the supported Linux distribution. I tried that, but I could not get the installer to run. Along the way I realized that USB boot support is very picky on this box. My old USB sticks, which work perfectly in my laptop and old desktop, do not work at all. Also, changing the USB stick requires turning the machine off and on; a simple reboot is not enough. Finally I found a USB-C stick, and that almost worked with Ubuntu 24.04: it booted, but the installer crashed.

The USB sticks I tried

As I have been a S.u.S.E. / openSUSE user for the past 30 years, I did not mind this failure much. I downloaded the openSUSE Tumbleweed installer, and it worked like a charm. Best of all, unlike openSUSE Leap 16.0, Tumbleweed still has the good old YaST installer I have used for decades. Installation was quick, easy and rock solid.

Surprise arrived when I rebooted the machine: Windows was not available in the boot menu. As it turned out, Tumbleweed used a new flavor of GRUB2 by default, grub2-bls, but that does not seem to boot other operating systems. There is no supported way to switch back to grub2-efi, so I reinstalled openSUSE. Luckily it’s an easy job, and I did not have any data on the machine yet. So, it was just a few mouse clicks.

openSUSE is my daily driver, so I did not spend much time exploring the system. It seems to work just fine. Installing a few games and checking the in-hardware AI support will come once I have finished installing all operating systems on the machine. Next to Windows I plan to install openSUSE, Fedora and Ubuntu on the Linux side, and FreeBSD as well.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

📝 Install PHP 8.5 on Fedora, RHEL, CentOS Stream, Alma, Rocky or other clone

Posted by Remi Collet on 2026-02-24 11:16:00 UTC

Here is a quick howto to upgrade the default PHP version provided on Fedora, RHEL, CentOS, AlmaLinux, Rocky Linux or other clones to the latest version, 8.5.

You can also follow the Wizard instructions.

 

Architectures:

 

The repository is available for x86_64 (Intel/AMD) and aarch64 (ARM).

 

Repositories configuration:

 

On Fedora, the standard repositories are enough; on Enterprise Linux (RHEL, CentOS) the Extra Packages for Enterprise Linux (EPEL) and CodeReady Builder (CRB) repositories must be configured.

Fedora 44

dnf install https://rpms.remirepo.net/fedora/remi-release-44.rpm

Fedora 43

dnf install https://rpms.remirepo.net/fedora/remi-release-43.rpm

Fedora 42

dnf install https://rpms.remirepo.net/fedora/remi-release-42.rpm

RHEL version 10.1

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-10.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-10.rpm
subscription-manager repos --enable codeready-builder-for-rhel-10-x86_64-rpms

RHEL version 9.7

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-9.rpm
subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms

RHEL version 8.10

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm
subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms

Alma, CentOS Stream, Rocky version 10

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-10.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-10.rpm
crb install

Alma, CentOS Stream, Rocky version 9

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-9.rpm
crb install

Alma, Rocky version 8

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm
crb install

 

PHP module usage

 

With Fedora and EL, you can simply use the remi-8.5 stream of the php module.

With Fedora (dnf5 has partial module support)

dnf module reset php
dnf module enable php:remi-8.5
dnf install php-cli php-fpm php-mbstring php-xml

Other distributions (dnf4)

dnf module switch-to php:remi-8.5/common

 

PHP upgrade

 

By choice, the packages have the same name as in the distribution, so a simple update is enough:

dnf update

That's all :)

$ php -v
PHP 8.5.3 (cli) (built: Feb 10 2026 18:25:51) (NTS gcc x86_64)
Copyright (c) The PHP Group
Built by Remi's RPM repository  #StandWithUkraine
Zend Engine v4.5.3, Copyright (c) Zend Technologies
    with Zend OPcache v8.5.3, Copyright (c), by Zend Technologies

 

Known issues

The upgrade can fail (by design) when some installed extensions are not yet compatible with PHP 8.5.

See the compatibility tracking list: PECL extensions RPM status

If these extensions are not mandatory, you can remove them before the upgrade; otherwise, you must be patient.

Warning: some extensions are still under development, but it seems useful to provide them so that more people can upgrade and give feedback to the authors.

 

More information

If you prefer to install PHP 8.5 beside the default PHP version, this can be achieved using the php85 prefixed packages, see the PHP 8.5 as Software Collection post.

You can also try the configuration wizard.

This is also documented as the community way to install PHP 8.5 on the official PHP web site.

The packages available in the repository were used as sources for Fedora 44.

By providing a full-featured PHP stack, with about 150 available extensions, 11 PHP versions, as base and SCL packages, for Fedora and Enterprise Linux, and with 300,000 downloads per day, the remi repository has become over the last 21 years a reference for PHP users on RPM-based distributions, maintained by an active contributor to the projects (Fedora, PHP, PECL...).

See also:

📝 Redis version 8.4

Posted by Remi Collet on 2025-11-04 12:48:00 UTC

RPMs of Redis version 8.4 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

1. Installation

Packages are available in the redis:remi-8.4 module stream.

1.1. Using dnf4 on Enterprise Linux

# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to redis:remi-8.4/common

1.2. Using dnf5 on Fedora

# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset  redis
# dnf module enable redis:remi-8.4
# dnf install redis --allowerasing

You may have to remove the valkey-compat-redis compatibility package.

2. Modules

Some optional modules are also available:

These packages are weak dependencies of Redis, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).

The modules are automatically loaded after installation and service (re)start.

The modules are not available for Enterprise Linux 8.

3. Future

Valkey also provides a similar set of modules, requiring some packaging changes already proposed for the Fedora official repository.

Redis may be proposed for unretirement and be back in the Fedora official repository, by me if I find enough motivation and energy, or by someone else.

I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, the module packages are very far from the Packaging Guidelines, so obviously not ready for a review.

Loadouts For Genshin Impact v0.1.14 Released

Posted by Akashdeep Dhar on 2026-02-23 18:30:40 UTC

Hello travelers!

Loadouts for Genshin Impact v0.1.14 is OUT NOW, adding support for the recently released artifacts Aubade of the Morningstar and Moon and A Day Carved From Rising Winds, the recently released characters Columbina, Zibai, and Illuga, and the recently released weapons Nocturne's Curtain Call and Lightbearing Moonshard from Genshin Impact Luna IV (v6.3 Phase 2). Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.

Resources

Installation

Besides being available as a package on PyPI and as an archived binary built with PyInstaller, Loadouts for Genshin Impact is now available as an installable package on Fedora Linux. Travelers using Fedora Linux 42 and above can install the package on their operating system by executing the following command.

$ sudo dnf install gi-loadouts --assumeyes --setopt=install_weak_deps=False

Installation command for Fedora Linux

Changelog

  • Automated dependency updates for GI Loadouts by @renovate[bot] in #486
  • Add the recently added character Columbina to the GI Loadouts roster by @sdglitched in #494
  • Add the recently added weapon Nocturne's Curtain Call to the GI Loadouts roster by @sdglitched in #495
  • Add the recently added artifact set A Day Carved From Rising Winds to the GI Loadouts roster by @sdglitched in #497
  • Add the recently added artifact set Aubade of Morningstar and Moon to the GI Loadouts roster by @sdglitched in #496
  • Update dependency pillow to v12.1.1 [SECURITY] by @renovate[bot] in #502
  • Add the recently added character Zibai to the GI Loadouts roster by @sdglitched in #501
  • Add the recently added character Illuga to the GI Loadouts roster by @sdglitched in #503
  • Add the recently added weapon Lightbearing Moonshard to the GI Loadouts roster by @sdglitched in #504
  • Remove the conditional substat calculation of weapon Harbinger of Dawn by @sdglitched in #505
  • Stage the release v0.1.13 for Genshin Impact Luna IV (v6.3 Phase 2) by @sdglitched in #506
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #500

Artifacts

Two artifacts have debuted in this version release.

Aubade of the Morningstar and Moon

  • Bonus for Two Piece Equipment
    Increases Elemental Mastery by 80.
  • Bonus for Four Piece Equipment
    When the equipping character is off-field, Lunar Reaction DMG is increased by 20%. When the party's Moonsign Level is at least Ascendant Gleam, Lunar Reaction DMG will be further increased by 40%. This effect will disappear after the equipping character is active for 3s.

A Day Carved From Rising Winds

  • Bonus for Two Piece Equipment
    ATK +18%.
  • Bonus for Four Piece Equipment
    After a Normal Attack, Charged Attack, Elemental Skill or Elemental Burst hits an opponent, gain the Blessing of Pastoral Winds effect for 6s: ATK is increased by 25%. If the equipping character has completed Witch's Homework, Blessing of Pastoral Winds will be upgraded to Resolve of Pastoral Winds, which also increases the CRIT Rate of the equipping character by an additional 20%. This effect can be triggered even when the character is off-field.

Characters

Three characters have debuted in this version release.

Columbina

Columbina is a catalyst-wielding Hydro character of five-star quality.

Zibai

Zibai is a sword-wielding Geo character of five-star quality.

Illuga

Illuga is a polearm-wielding Geo character of five-star quality.

Weapons

Two weapons have debuted in this version release.

Nocturne's Curtain Call

Ballad of the Crossroads - Scales on Crit DMG.


Lightbearing Moonshard

Legacy of Lang-Gan - Scales on Crit DMG.


Appeal

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.

Disclaimer

With an extensive suite of over 1550 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.

The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.

All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.

Phone a Friend: Multi-Model Subagents for VS Code Copilot Chat

Posted by Brian (bex) Exelbierd on 2026-02-23 10:10:00 UTC

I wanted a way to stay inside Visual Studio Code, use Copilot Chat as the “orchestrator,” and still mix and match models for different parts of the work. Plan a change with one of the slower, more capable models, but let a smaller, faster model handle mechanical refactors. Edit a blog post with one model, but hand Jekyll plumbing or JSON/YAML munging to another. The friction was that the built-in Copilot Chat extension only lets subagents run on the same model as the parent conversation, while the Copilot CLI happily lets you pick any available model per run. Phone a Friend bolts that flexibility onto Copilot Chat, so I can keep the full VS Code experience - including gutter diffs - while dispatching subtasks to whatever model is best for the job.

The Problem

When you use GitHub Copilot Chat in VS Code, every subagent it spawns runs on the same model as the parent conversation. If you’re on Claude Opus 4.6, all subagents are Claude Opus 4.6. Sometimes you want a different model for a subtask - a faster one for simple work, or a different vendor for a second opinion.

GitHub Copilot CLI supports --model to pick any available model, but using it directly doesn’t help - changes made by the CLI don’t produce VS Code’s gutter indicators (the green/red diff decorations in the editor margin). You get the work done but lose the visual feedback that makes code review comfortable.

Phone a Friend is an MCP server that solves both problems. It dispatches work to Copilot CLI with the model of choice, captures a unified diff of the changes, and returns it to the calling agent - which applies it through VS Code’s edit tools. Gutter indicators show up as the changes were made natively.

How It Works

  1. Copilot Chat calls the phone_a_friend MCP tool with a prompt, model name, and working directory
  2. The MCP server creates an isolated git worktree from HEAD
  3. It launches Copilot CLI in non-interactive mode in that worktree with the requested model
  4. The subagent does its work and writes its response to a “message-in-a-bottle” file
  5. The MCP server reads the response, captures a git diff, and cleans up the worktree
  6. The MCP server then returns the response text and unified diff to the calling agent
  7. The calling agent applies the diff using VS Code’s edit tools - gutter indicators appear

The “message in a bottle” pattern is worth explaining. Copilot CLI’s stdout mixes the agent’s response with progress output and is unreliable to parse. Rather than fighting noisy output, the tool instructs the subagent to write its final response to a file. The server reads the file. Clean separation.
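The flow above can be sketched in shell. This is an illustrative sketch only: the real server is written in TypeScript, and the file names, the stand-in commands, and the throwaway repository below are my assumptions, not the project's actual code.

```shell
set -e
# Throwaway repo so the sketch is self-contained; the real tool
# operates on your actual repository.
repo=$(mktemp -d); cd "$repo"
git init -q
echo "old line" > file.txt
git add file.txt
git -c user.email=a@b.c -c user.name=demo commit -q -m init

# 1. Create an isolated worktree from HEAD.
worktree="$repo/.subagent-worktree"
git worktree add --detach -q "$worktree" HEAD

# 2. The subagent CLI would run non-interactively inside the worktree,
#    instructed to write its final answer to a "bottle" file. Stand-in:
echo "new line" > "$worktree/file.txt"
echo "done: replaced old line" > "$worktree/response.md"

# 3. Read the bottle and capture a unified diff of the changes.
#    The untracked response.md stays out of the diff by construction.
response=$(cat "$worktree/response.md")
diff_out=$(git -C "$worktree" diff)

# 4. Clean up the worktree; hand response text + diff back to the caller.
git worktree remove --force "$worktree"
printf '%s\n' "$response"
```

Because the worktree is created from HEAD and removed afterwards, the user's working tree is never touched, which is exactly the isolation property the post describes.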

Safety

Worktree isolation means your working tree is never modified directly. Push protection blocks git push at the tool level. Worktrees are cleaned up after every invocation, even on errors.

Setup

You install Phone a Friend like any other MCP server in VS Code: add the @bexelbie/phone-a-friend npm package through the MCP: Add Server... command, or point VS Code at it via your MCP configuration. The GitHub README details the exact JSON and prerequisites (Node.js, Copilot CLI, Git).
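For illustration only, an entry in VS Code's MCP configuration could look roughly like the following. The README is the authoritative source; the JSON below is an assumption based on VS Code's generic MCP server configuration format, not copied from the project.

```json
{
  "servers": {
    "phone-a-friend": {
      "command": "npx",
      "args": ["-y", "@bexelbie/phone-a-friend"]
    }
  }
}
```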

Usage

Once configured, you stay in Copilot Chat and describe the outcome you want; the calling agent decides when to route a subtask through Phone a Friend. The tool surface includes discovery hints, so natural phrasing like “get a second opinion from another model” is usually enough to trigger it. Any model that Copilot CLI exposes is available.

Known Limitations

A few trade-offs worth knowing:

  • Context cost. The unified diff lands in the calling agent’s context window. Large diffs eat context. I’ve got an issue open exploring ideas for improving this.
  • Message-in-a-bottle compliance. Most models follow the instruction to write their final response into the message-in-a-bottle file, but some may occasionally ignore it. When that happens, the calling agent still gets the diff of any file changes but not the response text.

Availability

The project is on GitHub under MIT license, and published on npm as @bexelbie/phone-a-friend. Written in TypeScript.

What Changed For Me

Since integrating this into my Copilot setup, the biggest shift is that I no longer have to choose between “the model I want to think with” and “the model I want to do the work” and I eliminated a bunch of copy/paste from manually emulating this. I keep the main conversation with a larger, more capable model for planning and review, and routinely:

  • send quick, mechanical refactors to a smaller, faster model
  • hand Jekyll front matter, Liquid, and config tweaks to a model that’s better at markup and templating
  • ask a different vendor’s model for a second opinion on changes or ideas, especially where that model may be better at the task

Because everything still lands back in the same VS Code buffer with normal gutter diffs, it feels like one coherent tool instead of a handful of loosely-connected ones.

The project also had an unexpected dynamic in the development process. Building an MCP server that mimics a capability already available to the model created a strange feedback loop. I could collaborate on the implementation with Opus, and then turn around and interview it as a subject matter expert on how it uses that very same capability. It was a weird feeling to use the model as both a partner in writing the code and a primary source for understanding the user requirements.

Join Us for Fedora Hatch at SCaLE 23x!

Posted by Fedora Magazine on 2026-02-23 08:00:00 UTC
A graphic featuring a Linux penguin in front of a gear, a blue Fedora logo, and the text "SCaLE 23X" against a dark gray background.

Fedora is heading back to sunny Southern California! As we gear up for SCaLE 23x, we are thrilled to announce a special edition of Fedora Hatch. This is taking place on Friday, March 6 as an embedded track at SCaLE.

Whether you’re a long-time contributor, a curious user, or someone looking to make your very first pull request, Fedora Hatch is designed for you. This is our way of bringing the experience of Flock (our annual contributor conference) to a local level. It focuses on connection, collaboration, and community growth.

What’s Happening?

This year, Fedora has secured a dedicated track on Friday at SCaLE. We’ve curated a line-up that balances technical deep dives with essential community initiatives.

When: Friday, March 6, 2026

Where: Room 208, Pasadena Convention Center

Who: You! (And a bunch of friendly Fedorans)

The Schedule Highlights

We have a packed morning featuring five talks and a hands-on workshop:

  • Getting Started in Open Source and Fedora (Amy Marrich): Are you new to the world of open source? Or are you looking to make your first contribution? This session will provide a guide for beginners interested in contributing to open source projects. It will focus on the Fedora project. We’ll cover a variety of topics, like finding suitable projects, making your first pull request, and navigating community interactions. Attendees will leave with practical tips, resources, and the confidence to embark on their open source journey.
  • Fedora Docs Revamp Initiative (Shaun McCance): The Fedora Council recently approved an initiative to revamp the Fedora docs. The initiative aims to establish a support team to maintain a productive environment for writing docs. It will establish subteams with subject matter expertise to develop docs in specific areas of interest. We’ll describe some of the challenges the Fedora docs have faced, and present the progress so far in improving the docs. You’ll also learn how you can help Fedora have better docs.
  • A Brief Tour of the Age of Atomic (Laura Santamaria): Ever wished to try a number of different desktop experiences quickly in your homelab? Maybe it’s time to explore Fedora Atomic or Universal Blue! The tour starts with what makes these experiences special. It will then review the options including Silverblue, Cosmic, Bluefin and Bazzite (yes, the gaming OS). We’ll briefly get under the hood to explore bootc, the technology powering Atomic. Finally, we’ll explore how you can contribute to the future of Fedora Atomic.
  • Accelerating CentOS with Fedora (Davide Cavalca): This talk will explore how CentOS SIGs are able to leverage the work happening in Fedora to improve the quality and velocity of packages in CentOS Stream. We’ll cover how the CentOS Hyperscale SIG is able to deliver faster-moving updates for select packages, and how the CentOS Proposed Updates SIG integrates bugfixes and improves the contribution process to the distribution.
  • Agentic Workloads on Linux: Btrfs + Service Accounts Architecture (David Duncan): As AI agents become more prevalent in enterprise environments, Linux systems need architectural patterns that provide isolation, security, and efficient resource management. This session explores an approach, using BTRFS subvolumes combined with dedicated service accounts, to build secure, isolated environments for autonomous AI agents in enterprise deployment.
  • RPM Packaging Workshop (Carl George): While universal package formats like Flatpak, Snap, and AppImage have gained popularity for their cross-distro support, native system packages remain a cornerstone of Linux distributions. These native formats offer numerous benefits. Understanding them is essential for those who want to contribute to the Linux ecosystem at a deeper level. In this hands-on workshop, we’ll explore RPM, the native package format used by Fedora, CentOS, and RHEL. RPM is a powerful and flexible tool. It plays a vital role in the management and distribution of software for these operating systems.

Don’t forget to swing by the Fedora Booth in the Expo Hall! Our team will be there all weekend (March 6–8) with live demonstrations of Fedora Linux 43, GNOME 49 improvements, and plenty of fresh swag to go around.

Registration Details

To join us at the Hatch, you’ll need a SCaLE 23x pass.

  • Location: Pasadena Convention Center, 300 E Green St, Pasadena, CA.
  • Tickets: Available at the official SCaLE website.

We can’t wait to see you there. Let’s make SCaLE 23x the best one yet!

New badge: SCaLE 23x Attendee !

Posted by Fedora Badges on 2026-02-23 06:17:14 UTC
SCaLE 23x Attendee: You dropped by the Fedora booth at SCaLE 23x!

misc fedora bits 3rd week of feb 2026

Posted by Kevin Fenzi on 2026-02-21 19:27:52 UTC
Scrye into the crystal ball

Well, another saturday, time for another bit of longer-form recapping of what has been going on in fedora infrastructure and other areas for me.

Infrastructure Fedora 44 Beta freeze

We started the beta freeze in infrastructure. This is to make sure that we don't cause any problems for the release building and distribution pipeline. We require some acks for any changes that might impact that pipeline until the day after the Beta is released.

I think this has served us fine over the years. Every once in a while I wonder if we could just stop doing it as we are usually pretty good about not breaking things day to day, but having the extra eyes on changes and slowing down a bit is a good thing I think.

Forge migrations

We have been busy working on migrating things from pagure.io to forge.fedoraproject.org. On tuesday just before the freeze we finally got our ansible repo moved over. I've really been looking forward to this as the review interface in forgejo is a good deal nicer than the pagure one. I've already used it to great effect.

We do still have a few more things to migrate, but overall it's moving along nicely.

Last bits of rdu2-cc move

We finally finished off the last items (at least that I am aware of) for the things we moved last december from rdu2-cc to rdu3.

There was a very strange and difficult to figure out problem for copr builders on ipv6 that I wasn't able to track down, but luckily Pavel worked with networking and finally did so! It seems to have been an odd caching bug in the switches. Hopefully it's now gone once and for all.

There were some hardware issues to sort out: some bad network cards that had to be replaced, a machine that didn't actually move when it was supposed to, etc.

Anyhow, I hope all that work is finally done.

signing work

Finally got back to deploying / testing the new signing path for secure boot signing. I got it all deployed, just need to get things tested now and hopefully we can switch over after the freeze.

This should hopefully allow us to sign aarch64 kernels for secure boot as well as removing reliance on an old smart card for signing.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116110354434738317

Why I switched from Portainer to Arcane to manage the containers on the NAS

Posted by Guillaume Kulakowski on 2026-02-21 09:37:36 UTC

If you read me at all, you know I take a fairly pragmatic attitude toward my tools. I don't like changing for the sake of changing, but I also don't hesitate to move when a solution stagnates too much or takes a direction I don't like. Recently, I actually confirmed my choice to stay on […]

The article Why I switched from Portainer to Arcane to manage the containers on the NAS appeared first on Guillaume Kulakowski's blog.

Erradicando Preconceito

Posted by Avi Alkalay on 2026-02-20 16:25:50 UTC

🇧🇷 Hoje fiz um curso online do RH da empresa, sobre preconceito e assédio, e abrange etnia, idade, sexo e deficiência. Eu achei ótimo o conteúdo, muito lúdico e bem estruturado, aprendi um monte de coisas. Hoje em dia todas as empresas exigem esses treinamentos. E fiquei pensando que a História da Humanidade, até bem pouco tempo atrás, ficaria orgulhosa em se olhar no futuro finalmente tratando esse assunto para o bem. Então, mesmo que estejamos ainda longe de erradicar preconceito neste Planeta, quero reconhecer e parabenizar todos nós pelos passos que estamos dando. Com a soma dos pequenos passos chegaremos longe.

🇬🇧 Today I completed an online HR course at my company on prejudice and harassment, covering topics such as ethnicity, age, gender, and disability. I found the content excellent. Engaging, well-structured, and truly educational. I learned a great deal. These days, this kind of training is required in most organizations. It made me reflect on how, not so long ago, humanity might have looked to the future appreciating that we are finally addressing these issues constructively and consciously. Even though we are still far from eradicating prejudice on this planet, I want to acknowledge and congratulate all of us for the steps we are taking. It is through the accumulation of small steps that we will go far.

Publicado também no meu LinkedIn.

Offloading processes to NVidia GPU on Fedora

Posted by Vít Smolík on 2026-02-20 12:22:46 UTC

NVidia on Linux always was, and sometimes still is, a pain, especially on laptops with a dual-GPU setup. Although the situation is getting better, it still isn’t perfect.

Motivation

For a long time, I switched between the integrated Intel GPU and the dedicated NVidia GPU manually, which wasn’t ideal and required a re-login every time. But the solution was right there!

Setup

The official NVidia drivers support render offloading: your DE and every app you launch run on the integrated GPU by default, but you can export a few environment variables to run a specific process on the dedicated GPU. This means you need the official drivers installed on your system; please follow this guide for installation instructions.
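As a minimal sketch: the environment variables below are the ones NVidia documents for PRIME render offload, but the `nvrun` wrapper name and the example programs are my own convention, not part of the driver.

```shell
# Wrapper that launches a single program on the dedicated NVidia GPU via
# PRIME render offload; everything else keeps running on the integrated
# GPU. The nvrun name is just a local convention.
nvrun() {
    __NV_PRIME_RENDER_OFFLOAD=1 \
    __GLX_VENDOR_LIBRARY_NAME=nvidia \
    __VK_LAYER_NV_optimus=NVIDIA_only \
    "$@"
}

# Examples (require the proprietary NVidia driver to be installed):
#   nvrun glxgears                          # OpenGL test
#   nvrun steam                             # run a whole app on the dGPU
#   nvrun glxinfo | grep "OpenGL renderer"  # should report the NVidia GPU
```

Dropping the function into your shell profile makes per-process offloading a one-word prefix instead of a re-login.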

DevConf India 2026

Posted by Yashwanth Rathakrishnan on 2026-02-20 11:24:05 UTC
The Fedora Project was fortunate enough to secure a booth at DevConf India in Pune. Akashdeep Dhar, Samyak Jain, Shounak Dey, and I attended DevConf India representing the Fedora Project. The booth was a success; we attracted many newcomers and people in the early phase of their open source journey. Most of the visitors were college students or early in their careers. Matthew Miller’s talk also drew a large audience and filled the room.

Community Update – Week 8 2026

Posted by Fedora Community Blog on 2026-02-20 10:00:00 UTC

This report is created by the CLE Team, whose members work across various Fedora groups, for example Infrastructure, Release Engineering, and Quality. The team also drives some initiatives inside the Fedora Project.

Week: 16 Feb – 20 Feb 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • [GSoC Project Idea 2026] Revamp Fedora Badges project with modern fullstack architecture and dedicated MCP support [Ticket] [Followup]
  • [Infra] Added package and installed size to package metadata [Review] [Lint]
  • [Infra] Improve vagrant setup instructions and add container-based setup [Followup A] [Followup B]
  • Migration of pagure.io repositories to forge.fedoraproject.org continues (9 more repositories migrated)
  • Resolved authentication issues with wordpress instances (thanks to misc)
  • Fixed database connection issues on Dist-Git
  • Dependency updates and CI fixes for our apps on GitHub
  • Worked on the port of bugzilla2fedmsg to Kafka (since the UMB deprecation), deployed it to staging, asked RHIT for firewall ports.

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

  • Fedora 44 Beta Freeze is now in effect.

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

  • (Not a lot to report this week, besides the routine on-going work.)
  • Started a discussion with the RISC-V team about RHEL builders for Konflux. (This is not about general Konflux support; that’s out of scope.)
  • Continued to investigate Fedora 44 build failures and all that entails — working with relevant upstream maintainers to get changes reviewed, merged, etc.
  • Sorted out a build-timeout issue with Copr upstream. (Jason Montleon currently uses Copr to build some board-specific kernels.)

QE

This team is taking care of quality of Fedora. Maintaining CI, organizing test days
and keeping an eye on overall quality of Fedora releases.

  • TestDays App was updated in production.
  • Anubis no longer breaks actions in Forge thanks to our debugging (and Infra fixing it, of course).
  • Blockerbugs meetings and the whole blocker review process started this week.
  • Ran test days: Grub OOM fix, GNOME 50

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

UX

This team is working on improving User experience. Providing artwork, user experience,
usability, and general design services to the Fedora project

  • CLE logo complete! 
  • F45 wallpaper mindmap session took place. Mindmap created can be found here.
  • Continuing Forgejo migration.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update – Week 8 2026 appeared first on Fedora Community Blog.

Podman Test Days: Try the New Backend & Parallel Pulls

Posted by Fedora Magazine on 2026-02-20 08:00:00 UTC

The Podman team and the Fedora Quality Assurance team are organizing a Test Week from Friday, February 27 through Friday, March 6, 2026. This is your chance to get an early look at the latest improvements coming to Podman and see how they perform on your machine.

What is Podman?

For those new to the tool, Podman is a daemonless, Linux-native engine for running, building, and sharing OCI containers. It offers a familiar command-line experience but runs containers safely without requiring a root daemon.

What’s Coming in Podman 5.8?

The upcoming release includes updates designed to make Podman faster and more robust. Here is what you can look forward to, and what you can try out during this Fedora Test Day.

A Modern Database Backend (SQLite)

Podman is upgrading its internal storage logic by transitioning to SQLite. This change modernizes how Podman handles data under the hood, aiming for better stability and long-term robustness.

Faster Parallel Pulls

This release brings optimizations to how Podman downloads image layers, specifically when pulling multiple images at the same time. For a deep dive into the engineering behind this, check out the developer blog post on Accelerating Parallel Layer Creation.

Experiment and Explore: Feel free to push the system a bit and try pulling several large images simultaneously to see if you notice the performance boost. Beyond that, please bring your own workflows. Don’t just follow the wiki instructions. Run the containers and commands you use daily. Your specific use cases are the best way to uncover edge cases that standard tests might miss.
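One simple way to exercise the parallel-pull path is to start several pulls at once and let them race. A small helper along these lines works; the function name and the image list are just examples, not anything Podman ships.

```shell
# Pull several images concurrently and wait for all of them to finish;
# with the 5.8 optimizations, concurrently pulled layers should be
# handled more efficiently.
pull_all() {
    for img in "$@"; do
        podman pull "$img" &
    done
    wait
}

# Example run against some common images:
#   pull_all docker.io/library/alpine docker.io/library/busybox \
#            docker.io/library/nginx
```

Timing the example with `time` before and after upgrading is an easy way to see whether the parallel-pull improvements help on your machine.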

What do I need to do?

  • Make sure you have a Fedora Account (FAS).
  • Download test materials in advance where applicable, which may include some large files.
  • Follow the steps on the wiki test page one by one.
  • Send us your results through the app.

Details on how to test and report results are available at the Wiki Test Day site for Podman 5.8 test day:
https://fedoraproject.org/wiki/Test_Day:2026-02-27_Podman_5.8

Test Week runs from Friday, February 27 through Friday, March 6, 2026

Thank you for taking part in the testing of Fedora Linux 44!

GNOME is participating in Google Summer of Code 2026!

Posted by Felipe Borges on 2026-02-20 06:58:52 UTC

Potential GSoC contributors may reach out with questions about our project ideas or GNOME internships in general. Please direct them to gsoc.gnome.org to learn more.

You can find our proposed project ideas at gsoc.gnome.org/2026.

Project proposal submissions are open from March 16th to 31st.

New toy in the house for AI, gaming, Linux, Windows and FreeBSD

Posted by Peter Czanik on 2026-02-19 11:56:32 UTC

There is a new toy in the house. It is a miniature workstation from HP, built around AMD’s Ryzen AI Max+ PRO 395 chip. If you are interested in the specifications and other details, check the HP product page at https://www.hp.com/us-en/workstations/z2-mini-a.html. In the long run, this box will serve many purposes:

  • learning AI, but running as much as possible locally instead of utilizing cloud services
  • learning Kubernetes by building everything from scratch on multiple virtual machines
  • home server: running complex test environments on a single box (128 GB of RAM should be enough in most cases :-) )
  • photo editing using Capture One Pro
  • occasional gaming :-)

For now, I have finished unboxing and taken the first steps with Windows. It worked, though I made a mistake during setup and had to reinstall. I do not mind, since I do not like using pre-installed operating systems anyway. At least I know the machine works.

The whole packaging is smaller than my previous desktop computer

The computer itself is barely larger than a book

Keyboard, mouse, display port converter all in the box

On the chaos of my desk :-)

Right now I am hesitant to migrate any production applications or data to the new box. I have already clicked “use the whole disk” instead of creating a partition a few times, so I want to finalize the partitioning before using the box for anything beyond hardware testing. Better safe than sorry :-)

I plan to install a couple of Linux distributions. I mainly use openSUSE on the desktop, but I found instructions for Fedora to accelerate AI on this AMD chip. So, I’ll most likely install both. And, of course, I also plan to install FreeBSD 15 on the machine and see how it works both as a server and as a desktop.

I plan to post updates about my experiences in the coming weeks.

UDP reliability improved in syslog-ng Debian packaging

Posted by Peter Czanik on 2026-02-19 10:19:37 UTC

UDP log collection is a legacy feature that does not provide any security or reliability, but is still in wide use. You can improve its reliability using eBPF on Linux in recent syslog-ng versions. Support for eBPF was added to Debian packages while preparing for the 4.11.0 syslog-ng release.

You can learn more about eBPF support in syslog-ng from the documentation or reading my blog at https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-4-2-extra-udp-performance
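As a rough sketch, an eBPF-accelerated UDP source looks something like the fragment below. This is based on the option names shown in the syslog-ng eBPF blog posts; verify the exact syntax against the documentation for your installed version:

```
# Sketch only: distribute a single high-traffic UDP stream across
# multiple reuseport sockets using the eBPF-based load balancer.
source s_udp {
    udp(
        port(514)
        so-reuseport(yes)
        ebpf(reuseport(sockets(4)))
    );
};
```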

Right now, packaging changes only affect the syslog-ng nightly Debian / Ubuntu packages and the syslog-ng nightly container image. You can learn more about how to use them in the syslog-ng README on GitHub at https://github.com/syslog-ng/syslog-ng/ Once the syslog-ng 4.11.0 release is available, using the stable syslog-ng packages will include improved UDP support as well.

Are you interested in improving TCP performance for a single or few high traffic connections? You are looking for the parallelize() option: https://www.syslog-ng.com/community/b/blog/posts/accelerating-single-tcp-connections-in-syslog-ng-parallelize The good news is that the required changes are now available in ivykis upstream, so this feature is not limited to our builds anymore.

syslog-ng logo

Originally published at https://www.syslog-ng.com/community/b/blog/posts/udp-reliability-improved-in-syslog-ng-debian-packaging

Master Podman 5.8: Join Fedora Test Week

Posted by Fedora Community Blog on 2026-02-19 10:00:00 UTC

Want to learn the latest container tech? From February 27 to March 6, 2026, you can join the Podman 5.8 Test Day. It is the perfect time to explore new features and see how the future of Fedora is built.

What is new?

  • Faster Downloads: Try optimized “parallel pulls” to get your images in seconds.
  • Setup: Test the new “automatic database creation” for a smoother start.
  • Expert Skills: Learn how to use the latest container environments before they go mainstream.

Why join?

Your setup is unique. By running Podman 5.8 on your machine, you make sure the final version works perfectly for everyone. It is a great way to learn by doing and to see how top-tier open-source software is made.

Start here

We have prepared easy-to-follow steps for you here: https://fedoraproject.org/wiki/Test_Day:2026-02-27_Podman_5.8

The post Master Podman 5.8: Join Fedora Test Week appeared first on Fedora Community Blog.

Replacing my compact calendar spreadsheet with an ICS-powered web app

Posted by Brian (bex) Exelbierd on 2026-02-18 13:30:00 UTC

I’ve used some form of DSri Seah’s Compact Calendar for over seven years. The calendar is a lovingly designed single-page view of the entire year, organized into Monday-through-Sunday weeks with no breaks between months.

The point of the format is simple: my normal calendar is great at telling me what I’m doing on Tuesday. What it’s terrible at is answering planning questions that are above the day level, such as:

  • If we take a vacation the last two weeks of July, will it overlap business travel?
  • Can we connect these two public holidays and get 14 days away for only 8 days of PTO?
  • Do we have any genuinely empty weeks left this year?

For a long time, my compact calendar was a spreadsheet. That worked until it didn’t.

The problem I actually needed to solve

The spreadsheet version served me well for years, but life got more complicated.

My kid is getting older, which means more activities to track: summer camps, school breaks, etc. My partner and I no longer work for the same company, so we don’t share the same corporate holidays, and as our roles have changed, so has the amount of travel we do. And, honestly, my spreadsheet has bespoke formulas that only I understand … on Thursdays when there is a full moon.

My partner knows how to use a calendar app. She really doesn’t want to learn a special spreadsheet for planning, and I don’t blame her.

The real friction screaming out that there had to be a better way was the double-entry work. If my kid has summer camp in July, I’d put it on the family calendar - and then manually mark those weeks on my compact calendar spreadsheet. Two sources of truth means one of them is eventually wrong.

So the job wasn’t “build a better calendar.” It was: keep the year-at-a-glance view, but make the calendar app the source of truth.

The shape of the solution

I decided to build a web version of the compact calendar that could read directly from standard ICS calendar feeds.

Put the summer camp on the shared calendar once. The compact calendar picks it up automatically.

And if this was going to be something my partner and I actually used together, it needed two things:

  • A simple setup flow (not “copy this spreadsheet and don’t touch column Q”)
  • A way to always be available, beyond “go find this Google Docs link”

What the tool does

The calendar renders a full year on a single page. Each row is one week, Monday through Sunday.
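The core of that layout is just date arithmetic. Here is a minimal sketch of how the continuous Monday-through-Sunday rows can be computed (a hypothetical helper for illustration, not the app's actual code, which is JavaScript):

```python
from datetime import date, timedelta

def week_rows(year: int):
    """Return every week touching `year` as (monday, sunday) pairs,
    forming continuous Monday-through-Sunday rows with no breaks
    between months."""
    # Back up from Jan 1 to the Monday that starts its week.
    jan1 = date(year, 1, 1)
    monday = jan1 - timedelta(days=jan1.weekday())
    last = date(year, 12, 31)
    rows = []
    while monday <= last:
        rows.append((monday, monday + timedelta(days=6)))
        monday += timedelta(days=7)
    return rows
```

A year rendered this way has 52 or 53 rows depending on where the year boundaries fall.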

Parallel to the block of weeks running down the page is a column for displaying committed events and a second for displaying possible events.

  • Committed: events that are definitely happening - travel that’s booked, school terms, confirmed work trips.
  • Possible: things under consideration - a conference I submitted a talk to but haven’t heard back from yet, vacation options we’re weighing.

The tool uses color to signal status at a glance:

  • Blue background: first day of the month (anchors the continuous weeks)
  • Red text: public holidays (per selected country)
  • Green background: committed events
  • Yellow background: possible events
  • Green background with a yellow border: overlaps/conflicts that need attention

Here’s what the full-year view looks like with demo data loaded:

A full-year compact calendar view with one row per week (Monday through Sunday), with committed events shown in green, possible events in yellow, public holidays in red, and overlaps highlighted with a yellow border.

Inputs: URL, file, or demo

While there is demo data available in the system, the real value comes from loading your own data. You can choose two different kinds of sources:

  • A URL - a webcal:// or https:// link to a published calendar (iCloud, Google Calendar, etc.)
  • A file - a .ics file uploaded from your computer

We’re an Apple household so our calendars live in iCloud, but the tool doesn’t care about your calendar provider. Anything that produces a standard ICS feed works.

My practical workflow is two shared calendars in Apple Calendar:

  • one for committed travel and events. For me, this is actually my shared calendar that our family maintains.
  • one for possibilities we’re considering

Both are published as webcal URLs, and the compact calendar fetches them and renders the year view. Using my shared calendar works because the app ignores events that aren’t multi-day, all-day blocks - so dentist appointments don’t drown out the year view. You can optionally include single-day all-day events if that helps you.
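That filtering rule can be sketched in a few lines. This is a toy illustration of the idea, not the app's code (real ICS feeds need line unfolding, time zones, and recurrence handling, so use a proper parser in practice):

```python
from datetime import datetime

def all_day_blocks(ics_text, min_days=2):
    """Return (summary, start, end_exclusive) for all-day events that
    span at least `min_days` days. In ICS, all-day events use
    VALUE=DATE properties and DTEND is exclusive. Toy parser only."""
    events, cur = [], None
    for line in ics_text.splitlines():
        line = line.strip()
        if line == "BEGIN:VEVENT":
            cur = {}
        elif line == "END:VEVENT" and cur is not None:
            # Timed events never get "start"/"end" here, so they drop out.
            if "start" in cur and "end" in cur:
                if (cur["end"] - cur["start"]).days >= min_days:
                    events.append((cur.get("summary", ""), cur["start"], cur["end"]))
            cur = None
        elif cur is not None:
            if line.startswith("DTSTART;VALUE=DATE:"):
                cur["start"] = datetime.strptime(line.split(":", 1)[1], "%Y%m%d").date()
            elif line.startswith("DTEND;VALUE=DATE:"):
                cur["end"] = datetime.strptime(line.split(":", 1)[1], "%Y%m%d").date()
            elif line.startswith("SUMMARY:"):
                cur["summary"] = line.split(":", 1)[1]
    return events
```

Dropping `min_days` to 1 would include single-day all-day events as well.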

The setup controls are intentionally simple:

Configuration controls showing a country dropdown (for public holidays) and two inputs for selecting the committed and possible calendar sources.

The tech (and the annoying part)

This is a vanilla JavaScript app built with Vite, hosted on Azure Static Web Apps. No framework - just DOM manipulation, a CSS file, and under 500 lines of main application code.

The interesting technical problem was CORS.

Calendar providers like iCloud don’t set CORS headers on their published feeds, which means a browser can’t fetch them directly. The solution is a small Azure Function that acts as a proxy:

  • the browser sends the calendar URL to the server
  • the server fetches the calendar data
  • the server returns it to the browser

The proxy doesn’t store or log anything. It’s a pass-through.
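A proxy like this typically also validates and normalizes the URL before fetching it. Here is a hypothetical sketch of that input-handling step (the function name and allow-list are my own, not the app's actual code):

```python
from urllib.parse import urlparse

# Assumed allow-list: only published calendar feed schemes.
ALLOWED_SCHEMES = {"https", "webcal"}

def normalize_calendar_url(url: str) -> str:
    """Reject anything that isn't a calendar feed URL, and rewrite
    webcal:// to https:// (webcal is just https under another label)."""
    parsed = urlparse(url.strip())
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.netloc:
        raise ValueError("only webcal:// and https:// calendar URLs are allowed")
    if parsed.scheme == "webcal":
        parsed = parsed._replace(scheme="https")
    return parsed.geturl()
```

Validating the scheme server-side matters because a naive pass-through proxy would otherwise fetch arbitrary URLs on behalf of the browser.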

I built the app with an AI coding agent. I provided direction and made decisions, but I didn’t hand-write every line. For this kind of tool, I’m comfortable with that. It’s a static site that renders calendar data client-side, and the risk profile is low. Additionally, nothing in this code represents a new problem or a novelty. This is bog-standard code, and the agent handled the boilerplate well for this project.

Importantly, even though I could have written this code myself, I wouldn’t have. I probably would have gotten myself caught in a bit of analysis paralysis over frameworks. But more importantly, writing a lot of this code is just boring code to write. The AI agent has allowed me to solve my own problem, and that’s the part that matters to me. I didn’t have to suddenly become more disciplined about spreadsheets or get my family dragged onto a tool that really only speaks to me. Instead, I was able to change the shape of the problem and make it more solvable within the context of the humans involved.

Privacy and the honest trade-off

All your data stays in your browser. The app stores the URLs you’re loading, your selected country, and cached holiday data in local storage. This is purely functional and not for tracking.

Calendar URLs necessarily have to go through the server-side proxy because browsers won’t fetch them directly. The proxy is a stateless pass-through — I don’t persist calendar data in the function or in your browser. Calendar URLs are sent via POST request body rather than query parameters, which means they aren’t captured in Azure’s platform-level request logs. Error logging includes only the target hostname (e.g., “iCloud fetch failed”), never the full URL or authentication tokens. If your calendar URL contains authentication tokens (iCloud URLs do), understand that the proxy briefly sees them in transit.

Try it out

The calendar is live at cc.bexelbie.com. You can load the built-in demo data to explore without connecting your own calendars - select “Demo” from either input dropdown.

The source is on GitHub at bexelbie/online-compact-calendar. If you have ideas or find bugs, open an issue.

On first visit, there’s a banner that points you at settings:

A first-run welcome banner that tells the user to use the gear icon to configure the app.

What’s next

I’m going to live with it for a while before adding features. The spreadsheet served me for seven years with almost no changes.

Filtering Devices with LVM Devices File

Posted by Vojtěch Trefný on 2026-02-18 07:13:00 UTC

To control which devices LVM can work with, it was always possible to configure filtering in the devices section of the /etc/lvm/lvm.conf configuration file. But filtering devices this way was not very simple and could lead to problems when using paths like /dev/sda which are not stable. Many users also didn’t know this possibility exists and while using this type of filtering is possible for a single command with the --config option, it is not very user friendly. This all changed recently with the introduction of the new configuration file /etc/lvm/devices/system.devices and the corresponding lvmdevices command in LVM 2.03.12. A new option --devices was also added to the existing LVM commands for a quick way to limit which devices one specific command can use.

LVM Devices File

As was said above, there is a new /etc/lvm/devices/system.devices configuration file. When this file exists, it controls which devices LVM is allowed to scan. Instead of relying on matching the device path, the devices file uses stable identifiers like WWID, serial number or UUID.

A devices file on a simple system with a single physical volume on a partition would look like this:

# LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 187757 at Fri Feb 13 16:44:45 2026
# HASH=1524312511
PRODUCT_UUID=4d58d0c1-8b67-4fa6-a937-035d2bfbb220
VERSION=1.1.1
IDTYPE=devname IDNAME=/dev/sda2 DEVNAME=/dev/sda2 PVID=rYeMgwy0mO0THDagB6k8mZkoOSqAWfte PART=2

When the devices file is enabled, LVM will only scan and operate on devices listed in it. Any device not present in the file is invisible to LVM, even if it has a valid PV header.

This is the biggest change brought in with this feature. The old lvm.conf based filters were always optional and LVM always scanned all devices in the system, unless told otherwise. This could cause problems on systems with many disks, where LVM (especially during boot) could take a long time scanning devices that did not even “belong” to it.

By default, the LVM devices file is enabled with the latest versions of LVM and on systems without preexisting volume groups, creating new LVM setups with commands like pvcreate or vgcreate will automatically add the new physical volumes to the devices file. If desired, this feature can be disabled by setting use_devicesfile=0 in lvm.conf or by simply removing the existing devices file. On systems without the devices file, LVM will simply scan all devices in the system the same way it did before introduction of this configuration file.

Managing Devices with lvmdevices and vgimportdevices

On most newly installed systems with LVM, the devices file should be already present and populated, but you might want to either create it later on systems installed with an older version of LVM, or manage some devices manually. It is possible to modify the system.devices manually, but a new command lvmdevices was added for simple management of the file.

To simply import all devices in an existing volume group, vgimportdevices <vgname> can be used and for all volume groups in the system, vgimportdevices -a can be used.

A single physical volume can be added to the file with lvmdevices --adddev and removed with lvmdevices --deldev.

To check all entries in the devices file, lvmdevices --check can be used and any issues found by the check command can be fixed with lvmdevices --update.
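Put together, a typical maintenance session might look like this (an illustrative sketch; the device names are examples and the commands must be run as root):

```
vgimportdevices -a              # import the PVs of every existing VG
lvmdevices --adddev /dev/sdb1   # make a new PV visible to LVM
lvmdevices --deldev /dev/sdb1   # hide it again
lvmdevices --check              # look for stale or mismatched entries
lvmdevices --update             # fix issues found by the check
```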

Backups

In the sample devices file above, you might have noticed the VERSION field. This is the current version of the file. LVM automatically makes a backup of the file with every change and old versions of the file can be found in the /etc/lvm/devices/backup directory. So if you make some mistakes when changing the file with lvmdevices, you can simply restore to a previous version of the file.

Overriding the Devices File and Filtering with Commands

Together with the devices file feature, a new option --devices was added to all LVM commands. This option allows specifying devices which are visible to the command. This overrides the existing devices file so it can be used either to restrict the command to work only on a subset of devices specified in the devices file or even to allow it to run on devices not specified in the file at all.

This option is also very useful when dealing with multiple volume groups with the same name. This is a known limitation of LVM – two volume groups with the same name cannot coexist in one system and LVM will refuse to work without renaming one of them. This can be a problem when dealing with cloned disks or backups. With --devices, commands like vgs can be restricted to “see” only one of the volume groups.
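For example, to inspect only the copy of a volume group that lives on a cloned disk, without renaming anything (device names here are hypothetical):

```
# Show only the VG whose PV is on the cloned disk
vgs --devices /dev/sdc1

# The same option works on other commands, e.g. activating
# a logical volume from the original disk only
lvchange -ay --devices /dev/sda2 vg00/root
```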

Issue: Missing Volume Group

As mentioned above, when installing a new system with LVM, for the newly created volume groups, the used devices will be added to the devices file. Fedora (and RHEL) installer, Anaconda, will also add all other volume groups present during installation to the devices file so these will also be visible in the installed system. The problems start when a device with a volume group is added to the system after installation. The volume group (and any logical volumes in it) is suddenly invisible. Even commands like vgs will simply ignore it, because its physical volumes are not listed in the devices file.

This can be a problem on dual boot systems with encryption. Because the second system’s volume group is “hidden” by the encryption layer, it is not visible during installation and not added to the devices file. When the user unlocks the LUKS device in their newly installed system, they can’t access their second system. Unfortunately in this situation, the only solution is to manually add the second system’s volume group with vgimportdevices as described above.

Conclusion

The LVM devices file provides a cleaner and more reliable way to control which devices LVM uses, replacing the old lvm.conf based filtering with stable device identifiers and simple management through the lvmdevices command. Overall, for most users the devices file should work transparently without any manual configuration needed.

Mrhbaan Syria! Fedora now available in Syria

Posted by Fedora Magazine on 2026-02-17 08:00:00 UTC
A dark grey banner featuring the Syrian Independence flag alongside the text "Now available in Syria", "Fedora", and the Syrian Arabic phrase "في داركم" below it. The background has a subtle triangular pattern.

Mrhbaan, Fedora community! 👋 I am happy to share that as of 10 February 2026, Fedora is now available in Syria. Last week, the Fedora Infrastructure Team lifted the IP range block on IP addresses in Syria. This action restores download access to Fedora Linux deliverables, such as ISOs. It also restores access from Syria to Fedora Linux RPM repositories, the Fedora Account System, and Fedora build systems. Users can now access the various applications and services that make up the Fedora Project. This change follows a recent update to the Fedora Export Control Policy. Today, anyone connecting to the public Internet from Syria should once again be able to access Fedora.

This article explains why this is happening now. It also covers the work behind the scenes to make this change happen.

Why Syria, why now?

You might wonder: what happened? Why is this happening now? I cannot answer everything in this post. However, the story begins in December 2024 with the fall of the Assad regime in Syria. A new government took control of the country. This began a new era of foreign policy in Syrian international relations.

Fast-forward to 18 December 2025. The United States signed the National Defense Authorization Act for Fiscal Year 2026 into law. This law repealed the 2019 Caesar Act sanctions. This action removed Syria from the list of OFAC embargoed countries. The U.S. Department of the Treasury maintains this list.

This may seem like a small change. Yet, it is significant for Syrians. Some U.S. Commerce Department regulations remain in place. However, the U.S. Department of the Treasury’s policy change now allows open source software availability in Syria. The Fedora Project updated its stance to welcome Syrians back into the Fedora community. This matches actions taken by other major platforms for open source software, such as Microsoft’s GitHub.

Syria & Fedora, behind the scenes

Opening the firewall to Syria took seconds. However, months of conversations and hidden work occurred behind the scenes to make this happen. The story begins with a ticket. Zaid Ballour (@devzaid) opened Ticket #541 to the Fedora Council on 1 September 2025. This escalated the issue to the Fedora Council. It prompted a closer look at the changing political situation in Syria.

Jef Spaleta and I dug deeper into the issue. We wanted to understand the overall context. The United States repealed the 2019 Caesar Act sanctions in December 2025. This indicated that the Fedora Export Policy Control might be outdated.

During this time, Jef and I spoke with legal experts at Red Hat and IBM. We reviewed the situation in Syria. This review process took time. We had to ensure compliance with all United States federal laws and sanctions. The situation for Fedora differs from other open source communities. Much of our development happens within infrastructure that we control. Additionally, Linux serves as digital infrastructure. This context differs from a random open source library on GitHub.

However, the path forward became clear after the repeal of the 2019 Caesar Act. After several months, we received approval. Fedora is accessible to Syrians once again.

Opening the door to Syria

Some folks may have noticed the Fedora Infrastructure ticket last week. It requested the removal of the firewall block. We also submitted a Fedora Legal Docs Merge Request to change the Fedora Export Control Policy.

We wanted to share this exciting announcement now. It aligns with our commitment to the Fedora Project vision:

“The Fedora Project envisions a world where everyone benefits from free and open source software built by inclusive, welcoming, and open-minded communities.“

We look forward to welcoming Syrians back into the Fedora community and the wider open source community at large. Mrhbaan!

💎 PHPUnit 13

Posted by Remi Collet on 2026-02-06 07:59:00 UTC

RPMs of PHPUnit version 13 are available in the remi repository for Fedora ≥ 42 and Enterprise Linux (CentOS, RHEL, Alma, Rocky...).

Documentation :

ℹ️ This new major version requires PHP ≥ 8.4 and is not backward compatible with previous versions, so the package is designed to be installed beside versions 8, 9, 10, 11, and 12.

Installation:

dnf --enablerepo=remi install phpunit13

Notice: This tool is an essential component of PHP QA in Fedora. This version should be available soon in the Fedora ≥ 43 official repository (19 new packages).

misc fedora bits 2nd week of feb 2026

Posted by Kevin Fenzi on 2026-02-14 18:20:52 UTC
Scrye into the crystal ball

Another weekly recap of happenings around fedora for me.

Strange long httpd reload times on proxy11

I spent a fair bit of time looking at one of our proxies. We have them all do a reload (aka a 'graceful restart') every hour when we update a ticketkey on them. For the vast majority of them, that's fine and works as expected. However, proxy11 decided to start taking a while (like 12-15 seconds) to reload, causing our monitoring to alert that it was down... then back up.

In the end, it seemed the problem was somehow related to some old tls certificates that were present, but not used anywhere. All I can think of is that it's doing some kind of parsing of all certs and somehow those old ones cause it undue processing time. I removed those old certs and reload times went way back down again.

I'm tempted to try and figure out what it's doing exactly here, but I already spent a fair bit of time on it and it's working again now, so I guess I will just shrug and move on.

Anubis and download servers

A while back I had to hurriedly deploy anubis in front of our download servers. This was due to the scrapers deciding to just download every rpm / iso from every fedora release since the dawn of time at massive concurrency. This was saturating one of our 10G links completely, and making another somewhat full. So, I deployed anubis and it dropped things back to 'normal' again.

Fast forward to this last week, and my rush in deploying anubis came back to bite me. We have a cloudfront distribution that uses our download servers as its 'origin'. Then we point all aws network blocks to use that for any fedora instances in aws. This is a win for us as then everything for them is cached on the aws side, saving bandwidth, and a win for aws users as that traffic is 'local' to them, so it's faster and doesn't cause them to be billed for ingress either.

Last week, anubis started blocking CloudFront, so users in aws would get an anubis challenge page instead of the actual content they were expecting. But why did this just happen now? Well, as near as I could determine, someone/scrapers were hitting the CloudFront endpoints and crawling our download server (fine, no problem there), but then they hit a directory that they handled poorly.

The directory was used/last updated about 11 years ago, with a readme file explaining that the content had moved and was no longer there. Great. However, it also had the previous subdirectories as links to '.' (ie, the current directory). Since scrapers don't use any of the 20 years of crawling code, and instead just brute force things, this resulted in a bunch of requests like:

GET /foo/
GET /foo/foo/
GET /foo/foo/foo/

and so on. These are all really small (just a directory listing), so that meant they could make requests really, really fast. So, at some point anubis started challenging those CloudFront connections and boom.

So, the problem with the hurried deployment I had made there was that the policy file I had deployed was not actually being used. I had allowed CloudFront, but it didn't seem to help any, and it took me far too long to figure out that anubis was starting up, printing one error about not being able to read the policy file, and just running with the default configuration. ;( It turned out to be a podman/selinux interaction and is now fixed.

I also removed those . links and set that directory tree to just 403 all requests to it.

Anubis and forge

Also this week, folks were reporting problems with our new forgejo forge. Anubis was doing challenges when people were trying to submit comments and it was messing them up.

In the end here, I just needed to adjust the config to allow POSTs through. At least right now scrapers aren't doing any POSTs, and just allowing those seems to fix the issues people were having.

Some more scrapers

Friday we had them hitting release-monitoring.org. This time it was what I am calling a 'type 0' scraper: it was all coming from one cloud IP and I could just block it.

A bit ago this morning, we had a group hit/find the 'search' button on koji.fedoraproject.org, taking it offline. I was able to block the endpoint for a few hours and they went away, but there's no telling if they will be back. These were the 'type 2' kind (a botnet using users' IPs/browsers from hundreds of thousands of different IPs).

I am sad that the end game here sounds like there's not going to be much of an open internet anymore. ie, for self defense, sites will all have to require registration of some kind before working. I can only hope business models change before it comes to that.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116070476999694239

FOSDEM 2026

Posted by Robert Wright on 2026-02-14 08:00:00 UTC

FOSDEM is one of my favorite events because I get to catch up with friends from across the FOSS communities and meet so many amazing contributors who use and support open source. It’s inspiring to see so many people excited about building software, hardware, and everything in between together. That energy motivates me in my own open source work.

This year, I was also glad to attend two FOSDEM fringe events, CHAOSSCon-EU and CentOS Connect, where I got to sit on panels.

Normal dates in Thunderbird

Posted by Alejandro Sáez Morollón on 2025-11-05 08:15:30 UTC

A few minutes ago I was answering an email in Thunderbird, and I realized one thing that might have been there for years. The date was in the wrong format! (Wrong as in for me, of course).

I use English (US) for my desktop environment, but I change the format of several things because I use the metric system, and I need the Euro sign and normal dates. Sorry, but month, day, and year is a weird format.

The “normal” thing would be to use my country format. But if I select a format from Spain, I get dates in Spanish and in a format that I also hate:

What I want is ISO8601 and English. But I don’t want to modify each field manually. Too much. The weird trick is to use Denmark (English). I am not kidding. And I am not alone. At all.

Why, you may ask? Look at this beauty. It’s just perfect.

Anyway. My problem is with Thunderbird. It looks like it doesn’t support having a language and a format from different regions. Thankfully they documented it here.

So now, I have:

  • intl.date_time.pattern_override.date_short set to yyyy-MM-dd

  • intl.date_time.pattern_override.time_short set to HH:mm

I guess I might need more stuff, but at least for now I don’t see a weird date when I am answering emails.


P.S.: Talking about weird things I like to configure… My keyboards are ANSI US QWERTY. But the layout I use is English (intl., with AltGr dead keys). So I can type Spanish letters using the right alt and a key (e.g.: AltGr + n gives me ñ).

The syslog-ng Insider 2026-02: stats-exporter; blank filter; Kafka source

Posted by Peter Czanik on 2026-02-13 10:34:09 UTC

The February syslog-ng newsletter is now on-line:

  • The syslog-ng stats-exporter() now has all functionality of syslog-ng-ctl
  • Using the blank() filter of syslog-ng
  • How to test the syslog-ng Kafka source by building the package yourself?

It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2026-02-stats-exporter-blank-filter-kafka-source

syslog-ng logo

Community Update – Week 07 2026

Posted by Fedora Community Blog on 2026-02-13 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora Project.

Week: 09 – 13 February 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • Ported bugzilla2fedmsg to use Red Hat’s Kafka servers, since UMB will be decommissioned. The code with Kafka support is currently in staging but requires firewall ports to be opened (ticket 13133)
  • Resolved outage of stg.pagure.io caused by anubis (ticket 13128)
  • Few more repositories migrated to forge.fedoraproject.org from pagure.io
  • Resolved push issues on src.stg.fedoraproject.org, caused by wrong permissions (ticket 13096)
  • [Tahrir] Add PUT endpoint for updating the USER relation and integrate it with the frontend [Review] [Merged]
  • [Tahrir] List profile invitations when the user has logged in [Pull request]

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

  • Fedora 44 Mass Branching was successfully completed last week! 🎉
  • Beta Freeze is tentatively scheduled to begin next week, Tue 2026-02-17.

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

  • Fedora RISC-V “unified kernel” work in progress (Jason Montleon): targeted for F44.  Right now they’re being built in Copr.  Will move to Koji once the F44 buildroot is populated.
  • Work by Fabian Arrotin: RISC-V builders in the CentOS Build System (CBS) can now build from official sources (git+https from CentOS Stream)

QE

This team takes care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

  • Coordinated with the releng team on branching, as it involves openQA, the critical path, and the gating policies – things went much smoother this time, but there were still lots of lessons learned, and SOP improvements were drafted and suggested
  • Work is ongoing to port the blockerbugs tool to Forgejo; it is nearly complete but may be delayed as jgroman will be on PTO
  • Multiple enhancements to testdays-web, see merged PRs
  • Updated openQA packages deployed to production
  • Another heavy week of openQA catching bugs, mostly plasma-setup and GNOME 50, details here

Forgejo

This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.

EPEL

This team is working on keeping EPEL running and helping package things.

  • EPEL 11 early planning at CentOS Connect, FOSDEM, and CentOS Docs Day

UX

This team is working on improving the user experience, providing artwork, usability,
and general design services to the Fedora project.

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update – Week 07 2026 appeared first on Fedora Community Blog.

⚙️ PHP version 8.4.18 and 8.5.3

Posted by Remi Collet on 2026-02-13 05:42:00 UTC

RPMs of PHP version 8.5.3 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.18 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84
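After switching a module stream or installing a Software Collection, it is worth confirming which PHP binary is actually active. The sketch below is a minimal check, assuming the `php` command from the module-based install described above; the guard makes it safe to run on a machine where PHP is not (yet) installed.

```shell
#!/bin/sh
# Report the active PHP version after a module switch.
# Prints a fallback message when no php command is on PATH.
if command -v php >/dev/null 2>&1; then
    php --version | head -n 1
else
    echo "php not installed"
fi

# For a parallel Software Collections install, the binary is namespaced
# (php84/php85), e.g.:
#   php85 --version
#   scl enable php85 -- php --version
```

If the first line does not report the version you enabled, re-run the `dnf module` commands and check for a conflicting stream with `dnf module list php`.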

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.26 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

 

⚙️ PHP version 8.3.30, 8.4.17, and 8.5.2

Posted by Remi Collet on 2026-01-16 05:55:00 UTC

RPMs of PHP version 8.5.2 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.17 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.30 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for version 8.2.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.3/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.3
dnf update

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.9 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

Friday Links 26-06

Posted by Christof Damian on 2026-02-12 23:00:00 UTC

I enjoyed the interview with the OpenClaw creator, even if I hate about half of his opinions. The interview with the baker is also great.

Leadership

Three Bad Managers - three of many.

Engineering

Michael Stapelberg: PSA: Did you know that it’s unsafe to put code diffs into your commit messages? - this applies only to patches, not to merging PRs

Evolving Git for the next decade [LWN] - the sections on security and large files are interesting.