
Fedora People

Community Day - FOSSAsia 2026

Posted by Akashdeep Dhar on 2026-04-19 18:30:18 UTC

FOSSAsia 2026 prioritized the friendly community interaction aspect of a free and open source software event by frontloading it as a freely accessible community day on 8th March 2026. This was surely a great example to learn from and implement at our flagship annual community conference, Flock To Fedora. I started the day by waking up as early as 0700am Indochina Time and giving Samyak Jain a wake-up call. As the timezone offset from Indian Standard Time was ninety minutes, I had to wait for some time before I could connect back home with my family members. Thankfully, the breakfast provided at Lumen Bangkok Udomsuk Station was a lot more flavourful and distinctive compared to what I was habituated to having at hotels in Europe. While there was a lot on offer for my dietary preferences, Samyak had a tough time with the mostly non-vegetarian selection of meals and had to request a custom vegetarian one. Once we were through with breakfast, we started off for the FOSSAsia 2026 conference venue, True Digital Park West, at around 0930am Indochina Time.

Community Day - FOSSAsia 2026
Checkpoint A

As the venue was barely a kilometre away from our stay, we were able to make it to the event on foot. While the summer season had not yet arrived in Thailand, it was still extremely humid, and we wanted to head indoors as soon as possible. Skipping the first couple of early sessions allowed us to get enough respite after a long previous day, and it also meant that we could last longer that day. After passing through some floors of the True Digital Park campus and connecting with the folks from the FOSSAsia Community Cycling Trip in the Telegram chat, we were finally able to make it to the right location. Following Mishari Muqbil's advice, we took the long escalator that skipped a couple of floors entirely and got us right in front of the event reception. The one thing that took me by surprise in Thailand was that ground floors were counted as the first floor, with no concept of a zero level. It was a pretty interesting observation that we had to mentally train ourselves to get used to if we did not want to get lost while visiting most (if not all) multi-storeyed buildings in Bangkok.

Walking through the premises, which astonishingly consisted of both student laboratories and shopping stores in equal measure, we entered the hall section where FOSSAsia 2026 was organized. Surely, the volunteers could have done better at pointing folks to the correct place, but this feeling was quickly overridden when I ran into a bunch of long-acquainted and recent community friends. Getting to meet these people, mostly from APAC, at the event felt like a victory lap to celebrate all the connections that I had painstakingly built over the last five years or so. Trying not to get swept away in all the interesting conversations, we were guided by a nearby volunteer to get ourselves the badges, wristbands, and swag kits from the event reception. I was able to get those for myself as I was also participating in the conference as a speaker, but Samyak had to wait until the next day to obtain his badge. While we had the option to join one of the workshops running at the time, we decided to hang out at the hallway track a little longer to discuss open source strategies with the community.

After a brief catch-up with Ananthu CV from the Debian Project, whom I was meeting again after GNOME Asia 2024 and FOSDEM 2025, Samyak and I made it into the Eventyay Developer Workshop room at around 1045am Indochina Time. Although we initially thought that we were late, Norbert Preining, Srivastav Auswin, and Mario Behling were just getting started with a round of introductions for the folks present there. Since I was representing the Fedora Infrastructure in this workshop, I presented the case of how we used a similar event service platform, Pretalx, for our use case and how I was interested in learning about all the various things Eventyay had to offer. Amidst the feature run-through and tooling integrations, the workshop organizers took an interest in how the Fedora Project used Pretalx for running events like CentOS Connect and Flock To Fedora. Despite the lack of proper seating in the room, I also got to meet Deepesh Nair briefly before agreeing to connect with Mario later for deeper discussions on how we could collaborate further in the future.

Hong Phuc Dang entered sometime later with an announcement for refreshments and some more seats for the workshop hall. This allowed the hall to fill up a whole lot more as we headed back to the hallway track to meet the likes of Aqsa Aqeel from DigitalOcean and Leon Nunes, who had just begun his open source contribution journey. It was interesting to notice how the Fedora Project was the only RPM-based distribution on the scene there apart from the representatives of our reliable downstream, Red Hat Enterprise Linux. The distribution aspect of the event felt mostly occupied by the folks from Canonical and Debian, thus giving us the chance to capture some relevance. I briefly met up with Anuvrat Parashar and Shivani Parashar before having a conversation with Hong, who was surprised to note that the Fedora Project did not have a community booth. It turned out that there was another Call for Proposals form, opened closer to the event date, apart from the one for presentations and workshops, that I, as the event owner from the Fedora Project, was not even aware of.

Community Day - FOSSAsia 2026
Checkpoint B

This was also hinted at by Mishari in one of our conversations the day before, about the community booths being made available for free while the commercial booths were charged a reasonable fee. In our discussions, Hong assured community support, and I discussed strategic collaboration before we took some quick photographs to end the conversation. While not being able to have a community booth during FOSSAsia 2026 felt like a missed chance, I wanted to make the most of my participation as a community prospector for the Fedora Project's future participation in the event. We did not really have concrete plans on which workshops we would want to attend that day and were making our agenda on the fly. Making it back to the hallway track at around 1100am Indochina Time, I met up with the likes of Rajan Shah, Shivraj Patil, Veerkumar Patil, and Gaurav Kamathe from Red Hat, who were just arriving at the conference venue. It was hilarious to catch up with them here barely a couple of weeks after meeting them at the DevConf.IN 2026 conference in Pune, India.

At around 1130am Indochina Time, Samyak and I headed back into the rooms to attend presentations delivered not just in English but in a variety of languages. It was interesting to see just how open the FOSSAsia conference was to folks wanting to present their work in their own language; paired with technology that live-transcribes it into a more widely known language, this could significantly lower the barrier of entry for these conferences. Most sessions, if not all, were bite-sized portions of fifteen minutes, and depending on the speaker, the time management aspect of the presentation could be difficult. After exchanging the coffee coupon for a Dunkin-branded cold coffee and the meal coupon for a packed lunch, we headed out to chat with one of Samyak's friends from the Debian Project. We were able to make it back for Wendy Ha's talk on her experiences as an APAC Kubernetes community contributor, followed by Punsiri Boonyakiat's talk on balancing motherhood responsibilities with open source, which became a key highlight for International Women's Day.

In the second half of the community day, the host did absolutely terrific work keeping the audience entertained while the volunteer team looked into the technical problems that had crept in. Her fantastic oration, delivered with a deadpan expression, did not allow our spirits to sour when the fixes took a little while longer. Contrary to my expectation, the community day was not as crowded as I thought it would be given its free-of-charge participation, but we made the best of whoever was present on the ground. We also ran into Aaditya Singh from the GNOME Foundation and Mishari, who were just arriving at the conference venue in the afternoon, both of whom we were quite pleased to catch up with. After a quick conversation with Tamhant about the technology industry, I met up with Mario again to discuss how the Fedora Project evolved from a fragile ticket-based system to a robust dedicated event platform. It helped me understand how Eventyay's developer preview service could be accessed without having to establish the project environment locally.

As the Fedora Infrastructure's CommuniShift project allowed for getting these deployments up and running in no time, I suggested the same to the Eventyay developers, as they could find it handy for demonstrating the platform to the Fedora Project community. After running into Hong again, Samyak and I talked about how we could further push the adoption of the community day using interactive activities for folks hanging out on the hallway track. This would allow for greater participation on the community day, not just by those who have their talks planned but also by visitors. At around 0120pm Indochina Time, we made it to Peter Zaitsev's talk on the Percona Project's attempts to make the MySQL community stronger. Apart from the obligatory jabs hurled at Oracle, some tricky questions were also addressed around the choice between MariaDB and MySQL if PerconaDB was not an option. We went on to attend Michael Meskes' talk on open source business models, which felt quite at home for us working with the Fedora Project as part of our responsibilities.

Community Day - FOSSAsia 2026
Checkpoint C

The "free" in a free and open source software project always comes with an asterisk-marked hidden condition about the price being paid by someone else. The price is also usually not in the form of monetary payment, and that is what ends up hurting the longevity of these communities the most. After an insightful conversation on onboarding and retaining contributors, we headed out at around 0200pm Indochina Time to get another round of caffeine. We had discussions around the state of the warring world and how multiple cancelled flights through the Middle East affected selected speakers at FOSSAsia 2026. Aaditya welcomed us to station Fedora Project folks at the GNOME Foundation community booth, since there would be enough space for both. As Samyak had stickers saved up from the Fedora Project community representation at DevConf.IN 2026, we could use those here. Unfortunately, we could not meet Asmit Malakannawar, who had planned to tag along with Aaditya, as his flights through the Middle East were also cancelled, and there were no provisions made for remote talks.

Samyak and I headed back into the presentation rooms to attend Louis Yoong's talk on Building AI-Powered Interactive Maps with Open Data at around 0300pm Indochina Time, before attending a series of lightning talks from the Women In Tech category. After attending Velia Dang's informative talk on the reviewer's point of view when using Eventyay and obtaining lessons to apply to Flock To Fedora, we decided to depart from the conference after the closing notes. While there were some activities planned at Lotus' Eatery for the Pre-Event Community Gathering, we wanted to use the remaining time of the day to explore what Bangkok had to offer. We were able to make it back to Lumen Bangkok Udomsuk Station by 0445pm Indochina Time before leaving again for the BTS SkyTrain after around forty-five minutes. With a rigid agenda to go with, we planned to visit Masaru, an anime collectible franchise store located near BTS Phra Khanong, barely four stops away from BTS Udomsuk. Ten minutes of train ride and five minutes of walking got us where we wanted to be.

We spent longer there than we would care to admit, as Samyak got busy finding figurines while I hunted for some Genshin Impact branded collectibles. Surprisingly enough, I was able to get change for the 1000 Thai Baht note that I was carrying while we were booking BTS SkyTrain tickets from BTS Phra Khanong to BTS Siam once we were done with our purchases at Masaru. It was going to be a comparatively longer ride of around eight stops, followed by a ten-minute walk to the world-famous MBK Center. At around 0800pm Indochina Time, we made it to BTS Siam station, but we kept getting sidetracked by the sprawling independent weekend market that we wanted to explore. With Samyak and me splitting up again to find our pickings, I could not help but purchase some gifts for my family, including Hawaiian shirts, tourist hats, ornamental pendants, and much more. I was glad to have my wits about me while driving a hard bargain for all these purchases, because that would allow me to eventually purchase some more things in the end.

The humidity was making it tough, but we were finally able to push through to the indoors of the MBK Center by around 0845pm Indochina Time. Coincidentally, this is where we ended up finding one of the best independent handicraft stores, where Samyak and I got ourselves a wide assortment of personalized keychains and magnets for our friends and families. As our order was a massive one, with six for myself and six for him, we decided to depart after placing it to see if we could make it to the anime café in time. The sprawling layout of the MBK Center only made it difficult for us to find our way - apart from, of course, getting sidetracked into purchasing Thai incense and local handicrafts. Splitting from Samyak again, who got busy with his purchases, I was able to make it to the place. To my utter disappointment, the shop had already closed at 0830pm Indochina Time, and I had arrived almost an hour late. I backtracked to where I last saw him, and we decided to head back to the handicraft store, as we had to retrieve our orders right before operations wrapped up for the day.

Community Day - FOSSAsia 2026
Checkpoint D

The enormous mall could be a treasure trove for those who know what they are looking for, but for the two of us exploring, it felt like an overwhelming labyrinth of similar-looking pathways and deeply confusing corridors. We were finally able to make it to the handicraft store by around 0945pm Indochina Time, right as the last handicraft was being finalized. Since we had already cleared the bill, we headed back to BTS Siam while clicking pictures and discussing plans, only to end up boarding a rather crowded weekend BTS SkyTrain. By around 1045pm Indochina Time, we made it back to BTS Udomsuk, but after exploring the surrounding places to see where we could dine, we decided to order some food from the convenience of our hotel rooms using the Grab application. I freshened up after requesting the hotel reception to hold the order, and I was glad to have ordered a carton of mineral water too. After a quick bite for dinner, I decided to call it a day with multiple precautionary alarms set to ensure that I woke up early the next day for the group photo.

misc fedora bits mid april 2026

Posted by Kevin Fenzi on 2026-04-18 16:10:38 UTC
Scrye into the crystal ball

Another frozen week before the Fedora 44 release, just a few notable things:

openssl4

Openssl4 landed in rawhide, caused some issues, and was then pulled back out by FESCo. We definitely do need to move to it for Fedora 45, but hopefully we can land it in a way that doesn't break as many things as it did this last time.

Folks are working on it and I expect we will see it soon.

builder news

We had an aarch64 builder virthost fail to reboot with memory errors a few weeks ago. We finally got someone onsite to pull and reseat all its memory, and that seems to have done the trick. We are back to the full set of aarch64 builders again. Of course, we had enough that I doubt anyone actually noticed that some were down.

I also brought up 3 more big x86_64 builders. They should be added after the freeze/sometime soon. It's nice to have extra capacity there even though we aren't hurting for x86_64 builders.

Bots found the wiki

Yesterday our wiki was up and down in the morning. It seems scrapers not only found the wiki, but also found that they could query time ranges for changes in Special:RecentChanges.

We put in some blocking and then added a bunch more CPU on the backend, and everything seems to be back to 'normal' now.
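For illustration only, a proxy-level block for this kind of scraper traffic might look something like the Apache mod_rewrite sketch below. This is not the actual Fedora configuration; the parameter names are assumptions based on MediaWiki's usual Special:RecentChanges URL scheme, where parameters like `from` and `offset` carry timestamps.

```apache
# Hypothetical sketch (not the real Fedora config): refuse expensive
# Special:RecentChanges time-range queries at the proxy, while leaving
# normal wiki browsing alone.
<LocationMatch "Special:RecentChanges">
    RewriteEngine On
    # Scrapers iterate over explicit timestamp parameters to walk the
    # whole change history; ordinary visitors rarely pass them by hand.
    RewriteCond %{QUERY_STRING} (^|&)(from|offset)=\d{8,}
    RewriteRule . - [F]
</LocationMatch>
```

The idea is that the expensive query pattern, not the page itself, is what gets blocked; a real deployment would likely pair this with rate limiting by source network or user agent.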

Until the next time...

vacation!

I will be out on a family vacation next week. Our plane leaves super stupid early on Tuesday morning, and I will be packing and such on Monday. So, please don't ping me: file tickets or ask others to take care of any Fedora issues you might have.

Hopefully when I am back we will be go for Fedora 44 release!

comments? additions? reactions?

As always, comment on the fediverse: https://fosstodon.org/@nirik/116426624580451263

Arrival Day - FOSSAsia 2026

Posted by Akashdeep Dhar on 2026-04-17 18:30:12 UTC
Arrival Day - FOSSAsia 2026

As my flight to Bangkok was scheduled to depart at around 0700am Indian Standard Time, I had to wake up as early as 0230am Indian Standard Time on 7th March 2026. The packing had already been taken care of, so all that remained was ensuring that I got myself an Uber ride to the Pune International Airport in time. Thankfully, unlike my experience from DevConf.IN 2026, I was able to get one pretty quickly, and at around 0400am Indian Standard Time, I reached the airport. The check-in process went smoothly, since I was not carrying much luggage on my Air India Express flight to begin with. With my physical boarding pass in hand, I headed upstairs to wait for the immigration booths to open for the day. After forty-five minutes of waiting, the gates finally opened up, and I made it to the security check after smooth processing. The fact that I had my Thailand Digital Arrival Card registration done in advance helped me get through to the designated gate 1A without much hassle. I ended up with a lot of time left on my hands, so I decided to connect with the Egencia service about the troubled accommodation booking while I waited for boarding to begin.

Moving away from two noisy groups of travellers - one of senior citizens and one of rowdy men - I got myself a place to sit as I rang up the Egencia customer care helpline. Since 0530am Indian Standard Time was still a little early for their working hours, it took me a while to connect with a human representative. A helpful attendant attempted to connect with the Lumen Bangkok Udomsuk Station hotel employees, but that did not work out. I decided to board the flight anyway at around 0645am Indian Standard Time and leave the concerns about the troubled accommodation booking for after I reached Bangkok. There was not much that I could have done at the time to help the situation, and besides the issues that we had with Egencia regarding the flight confirmation, this worry would most likely have soured my entire experience. After a quick switch of seats from 7A to 6A on a fellow passenger's request, I decided to watch some movies, such as Code 3 (2025) and Zootopia 2 (2025), on my phone. The Kebab Platter was soon served, and afterwards I caught up on some of the rest I was lacking after waking up so early to make it to the airport.

The flight soon landed at the Suvarnabhumi Bangkok International Airport at around 1230pm Indochina Time, and after connecting with my family to let them know about my safe arrival, I headed swiftly into the immigration queue. The overcrowded hall meant it took me about forty-five minutes to make it through to the other side, where I found that the designated luggage belt #21 had finished delivering all of its luggage. After crossing a big group of Chinese travellers, I got myself a data plan from True 5G at the airport exit. 599 Thai Baht for 8 days of unlimited service was a great deal, and it allowed me to stay connected with my family and friends, and with Samyak Jain, with whom I was representing the Fedora Project APAC community at FOSSAsia 2026. The humidity I faced after stepping out of the airport took me by surprise, because it was even warmer there than it was in India. After unsuccessfully looking for a Grab ride that I had booked for about thirty minutes or so, I finally got one whose driver did a great job crossing the language barrier and explaining where to find them amidst a rather crowded station of rides available for hire and buses heading into the city.

I connected with Samyak once I was on the road at around 0230pm Indochina Time to point him to Airport Gate #4, where he could avail himself of a Grab ride and avoid wasting thirty minutes like I did. While connecting with my family during the Grab ride, I also commended the driver for just how clear they were with their communication, making the best use of the Grab application's live message translation feature for international travellers. I did not realize how swiftly I had managed to reach the hotel at about 0315pm Indochina Time after all the immigration and cabbing troubles. Thankfully, the folks at the Lumen Bangkok Udomsuk Station gave me no trouble with the booking, and I was able to check into my room #703 rather swiftly. In contrast to the experience that I thought I would have, they also topped off their welcoming gesture with a cool popsicle-like snack as I headed downstairs to fetch the passport that I had left at the reception. With one less thing to worry about, Samyak and I still had to make it to the BTS Chit Lom station by 0430pm Indochina Time to meet up and join the FOSSAsia Community Cycling Trip.

As Samyak only touched down at around 0230pm Indochina Time, I had to proceed by myself to meet up with Mishari Muqbil after a quick changeover in my hotel room. After a brief struggle with finding an ATM and then losing about 250 Thai Baht to the international conversion, I made it back to the BTS Udomsuk station, which only accepted cash (and did not even provide receipts to track expenses!) but was thankfully situated right in front of my hotel. It took me thirty minutes to make it to the BTS Chit Lom station at around 0415pm Indochina Time, but I had to spend the remaining thirty minutes chasing Mishari's waypoint on Google Maps. As I had flown my bicycle helmet all the way from Pune, I had no plans of skipping the FOSSAsia Community Cycling Trip, and while it was my first time visiting Thailand, I did find myself audaciously picking trains and walking through as if I were exploring my backyard. I think I might have to credit the hotel reception and the BTS security for their welcoming behaviour, which made me want to leave the hotel room right after arrival, because I genuinely wanted to experience more of what Bangkok had to offer, even when I was dead tired from the travels.

I was finally able to make it to the river jetty, where I met up with Mishari, Michael Christen, Anuvrat Parashar, Shivani Parashar, and others. The adventurous ordeal of catching up with them was rewarded with a scenic boat ride to the starting point of the FOSSAsia Community Cycling Trip. I managed to learn more about what Michael does with his work on YaCy and shared what I do as a part of the Fedora Council, the Fedora Mindshare, and the Fedora Infrastructure teams in the Fedora Project community activities. There were folks there for the first time like myself, and there were also those who had attended FOSSAsia since its beginning, so it was enlightening to know their experiences from this conference. During a brisk walk-and-talk with Anuvrat and other participants to the starting point of the cycling trip, I got to know about his frequent involvement in the PyCon organization and DGPLUG communities. Once we were joined by a couple of Mishari's friends and Wendy Ha, we began unlocking the rental bicycles using our HelloRide application, and Mishari gave us a quick orientation about street safety regulations at around 0530pm Indochina Time.

And there began our slow-paced ride through the alleys and streets of Bangkok! With Mishari and his friends leading our collective, I found myself at the start of our sequence, discussing with Bee his involvement in technology. As a proving ground for their cartographic skills, we wove through a lot of parks, and I noticed a great number of cats along the various pathways we took. Since the cycling trip did not have many elevation changes to deal with, I took the liberty of falling behind in the sequence to chat with the likes of Wendy and Michael every now and then. At around 0630pm Indochina Time, we made our first stop at an independent family-owned chocolate store, where we sampled many chocolates and purchased some beverages too. We were able to keep our rental bicycles safe using Mishari's "CYCLE-ogical protection" (as Shivani hilariously named it), which mostly consisted of a loosely placed rope. This first stop also allowed Samyak to finally catch up with our collective, as I discovered him coincidentally heading in the opposite direction when we were on our way out. Tracking his location over WhatsApp's location sharing definitely seems to have been the right choice.

It was rather funny to finally catch up with Samyak on a random Bangkok evening street after having missed the chance at the airport and at the hotel. Our collective made the next stop at another independent family-owned ice cream parlour located in a deep alleyway at around 0700pm Indochina Time. While the location was tucked away in a seemingly long-forgotten corner of Bangkok, the place definitely had a very home-like feeling to it. We, of course, got busy sampling undiscovered flavours and ordering favourite ones for the break. After spending another thirty minutes there with my Butterfly Pea cold cone (and some obligatory badly written jokes by Mishari), we had a bunch of photographs clicked. Once we departed from the ice cream parlour, we found ourselves pausing every now and then due to certain cartographically inclined confusions, but our "vibe-riding" (as I hilariously named our fun experience) never had a dull moment. Through the riverside pathways to a restricted university, we seemed to be in the front seat of exploring what stories these streets and alleys of Bangkok had to offer—and as tired as I was, it still felt like we were just getting started and there was more to discover!

While we did have a bunch of registrations for the FOSSAsia Community Cycling Trip, we barely had half of them turn up, so Mishari decided that it would be best if we found ourselves a dining place. After he quickly helped Samyak with his bicycle height, we caught up with the remaining group for yet another round of photographs - this time in front of a Bumblebee statue at an intersection, if you can believe it. We also halted in front of the Royal Palace for a quick shoot before inching closer to the nearest drop-off point for the HelloRide rental bicycles. Thankfully, I was able to take Bee's sweet custom ride for a quick spin before leaving, as it was filled to the brim with all the bells and whistles for an exhilarating street cycling experience. With about 80 Thai Baht spent on the HelloRide trip, some of us made it to the KemKon Vegan Experience Restaurant. As both Samyak and I had skipped lunch and exerted ourselves since the morning, we were starving. I was glad to note that while the menu was completely vegan, I still ended up liking the Make-believe Fried Fish Fritters that I had ordered for myself, both for the delicious taste and the quick service.

Apart from the nutritional values, of course, it was astonishing to notice just how close the vegan dish came to tasting like a non-vegetarian one. Adding some spicy chili-flavoured oil on top of it all made it taste like heaven, and I could not have asked for a better way to end the night than with this amazing meal. After clearing our bills, Shivani and Anuvrat stayed back at the market to explore some more, while Samyak, Mishari, and I headed back to the main road to catch a Grab ride to the hotel. It was magical just how at home we felt connecting with folks from various free and open-source software communities, all while doing activities like riding bicycles or sharing meals. The two of us were soon back in our hotel rooms, and apart from one misadventure of requiring the hotel staff's assistance to unlock the heavily jammed bathroom door, our arrival in Bangkok was super awesome. After a quick message to Julia Bley to inform her about our safe arrival at the conference and to conclude our saga of troubled travels, I called it a day at around 1130pm Indochina Time. There was so much to look forward to at the Community Day of FOSSAsia 2026, and I wanted to ensure that I was rested enough to experience it all.

Wiki struggling against bot attack

Posted by Fedora Infrastructure Status on 2026-04-17 15:00:00 UTC

The Fedora wiki is under very heavy load due to a large number of requests from bots.

  • Wiki services

Community Update – Week 16

Posted by Fedora Community Blog on 2026-04-17 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora Project.

Week: 13 – 17 Apr 2026

Fedora Infrastructure

This team is taking care of the day-to-day business regarding Fedora Infrastructure.
It's responsible for the services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of the day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for the services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of the day-to-day business regarding Fedora releases.
It's responsible for releases, the retirement process of packages, and package builds.
Ticket tracker

  • Producing release candidates for Fedora 44 Final release.
  • F44 GO/NO-GO meeting is tentatively scheduled for Thursday, April 16th.
  • Some work related to the migration to Forgejo.
  • OpenH264 RPMs are now published for F44 and Rawhide (F45).
  • Otherwise business as usual operations.

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

  • F44 rebuild in full swing:
    • GCC 16 builds slowed down some progress, but a workaround version was used to compensate.
    • The diff with F43 is about 1K packages
  • Discussed setting up Pungi for "compose" artifacts (installation & kickstart trees, ISOs, etc.)
  • RISC-V “omni kernels” (formerly “unified kernel”)
    • kernel 7.0 is in the mainline repository and work is proceeding normally (Jason)
    • A new f44-omni tag and target will be in Koji to support a single omni kernel.

QE

This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

Forgejo

This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.

EPEL

This team is working on keeping EPEL running and helping package things.

  • Routine packaging work, including backporting multiple CVE fixes to tinyproxy and python-cbor2. Also filed eight FTI (fail-to-install) bugs and updated three packages.
  • Refinement work on EPEL minor EOL SOP, which will be used next month when EPEL 10.1 reaches EOL.
  • Continued collaboration with RHEL Lightspeed team on goose packaging work.

UX

This team is working on improving the user experience: providing artwork,
usability, and general design services to the Fedora Project.

  • Emma was on the Fedora Podcast with Justin to talk about Flock 2026 and the branding! [Youtube link]
  • Continuing to work with a contributor on a poster about getting involved with the Fedora community [Ticket link]
  • Some progress on Flock designs [Ticket link]

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update – Week 16 appeared first on Fedora Community Blog.

Browser wars

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Browser wars


brown fox on snow field

Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.

I was invited to give a popular lecture at the University departments' open day, which is a part of the festival. This is the second time in a row that I have been invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology caused by the fall of the economy during the 2008–2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.

The follow-up

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

The follow-up


people watching concert

Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.

Open-source magic all around the world

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Open-source magic all around the world


woman blowing sprinkle in her hand

Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).

Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Joys and pains of interdisciplinary research


white and black coffee maker

Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012 the University of Rijeka became an NVIDIA GPU Education Center (back then it was called a CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphics processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers have allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, find paths in graphs, and perform other mathematical operations.

What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

What is the price of open-source fear, uncertainty, and doubt?


turned on red open LED signage

Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of the topic; instead, they present the authors' view on the state of the art of a particular field.

The first of the two articles argues for open source and open data. It describes the Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second of the two articles questions the open-source software development practice, advocating the usage and development of proprietary software. I will dissect and counter some of the key points from the second article below.

On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

On having leverage and using it for pushing open-source software adoption


Open 24 Hours neon signage

Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced acronyms page, among other things. During a coffee break, there was a discussion on the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop which somehow remained untold, and I wanted to tell it at some point. One of the attendants, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, the reported bugs get fixed quicker, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.

AMD and the open-source community are writing history

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

AMD and the open-source community are writing history


a close up of a cpu chip on top of a motherboard

Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards having fully open-source drivers on Linux. AMD did not walk alone; they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told on the Freenode channel #radeon that this is not the case, and I found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement is hardly surprising: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, a public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.

I am still not buying the new-open-source-friendly-Microsoft narrative

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

I am still not buying the new-open-source-friendly-Microsoft narrative


black framed window

Photo source: Patrick Bellot (@pbellot) | Unsplash


This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.

Even though the open sourcing of a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company who believe that free and open source is the way to go, but it still looks like a change just on the periphery.

None of the projects they have open-sourced so far are the core of their business. Their latest version of Windows is no more friendly to alternative operating systems than any version of Windows before it, and one could argue it is even less friendly due to more Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.

Free to know: Open access and open source

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Free to know: Open access and open source


yellow and black come in we're open sign

Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash


Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.

Q&A with Vedran Miletić

In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.

The movements that promote and cultivate this vibrancy are probably most frequently associated with terms "Open access" and "open source". In order to learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science -- researcher, developer and teacher, currently working in computational chemistry, and a free and open source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation and education.

The academic and the free software community ideals

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

The academic and the free software community ideals


book lot on black wooden shelf

Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash


Today I vaguely remembered that there was one occasion in 2006 or 2007 when some guy from academia, doing something with Java and Unicode, posted on some mailing list related to free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, and he had filed a patent on the algorithm.

Celebrating Graphics and Compute Freedom Day

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Celebrating Graphics and Compute Freedom Day


stack of white and brown ceramic plates

Photo source: Elena Mozhvilo (@miracleday) | Unsplash


Hobbyists, activists, geeks, designers, engineers, etc. have always tinkered with technologies for their own purposes (in early personal computing, for example). And social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies, and freely sharing the know-how through the Internet and, more recently, through social media. Open-source software, and more recently hardware, is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2


an old padlock on a wooden door

Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash


Inf2 is a web server at University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.

HTTPS had been enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt coming into public beta, we decided to join the movement to HTTPS.

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials


open book lot

Photo source: Patrick Tomasso (@impatrickt) | Unsplash


Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.

While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.

Fly away, little bird

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Fly away, little bird


macro-photography blue, brown, and white sparrow on branch

Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash


The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.

Mirroring free and open-source software matters

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Mirroring free and open-source software matters


gold and silver steel wall decor

Photo source: Tuva Mathilde Løland (@tuvaloland) | Unsplash


Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website that's used to speed up access for the users residing in the area geographically close to it and reduce the load on the original website. Content distribution networks (CDNs), which are a newer concept and perhaps more familiar to younger readers, serve the same purpose, but do it in a way that's transparent to the user; when using a mirror, the user will see explicitly which mirror is being used because the domain will be different from the original website, while, in case of CDNs, the domain will remain the same, and the DNS resolution (which is invisible to the user) will select a different server.

Free and open-source software was distributed via (FTP) mirrors, usually residing in the universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon after mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of the software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today.

Markdown vs reStructuredText for teaching materials

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Markdown vs reStructuredText for teaching materials


blue wooden door surrounded by book covered wall

Photo source: Eugenio Mazzone (@eugi1492) | Unsplash


Back in the summer of 2017, I wrote an article explaining why we used Sphinx and reStructuredText to produce teaching materials and not a wiki. In addition to recommending Sphinx as the solution to use, it was general praise for generating static HTML files from Markdown or reStructuredText.

This summer I made the conversion of the teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted, so I ended up cooking my own Python script that converted the specific dialect of reStructuredText used for writing the contents of the group website, fixing a myriad of inconsistencies in the writing style that had accumulated over the years.

Don't use RAR

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Don't use RAR


a large white tank

Photo source: Tim Mossholder (@ctimmossholder) | Unsplash


I sometimes joke with my TA Milan Petrović that his usage of RAR does not imply that he will be driving a rari. After all, he is not Devito rapping^Wsinging Uh 😤. Jokes aside, if you search for "should I use RAR" or a similar phrase on your favorite search engine, you'll see articles like 2007 Don't Use ZIP, Use RAR and 2011 Why RAR Is Better Than ZIP & The Best RAR Software Available.

Should I do a Ph.D.?

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Should I do a Ph.D.?


a bike is parked in front of a building

Photo source: Santeri Liukkonen (@iamsanteri) | Unsplash


Tough question, and the one that has been asked and answered over and over. The simplest answer is, of course, it depends on many factors.

As I started blogging at the end of my journey as a doctoral student, the topic of how I selected the field and ultimately decided to enroll in the postgraduate studies never really came up. In the following paragraphs, I will give a personal perspective on my Ph.D. endeavor. Just like other perspectives from doctors of not that kind, it is specific to the person in the situation, but parts of it might apply more broadly.

Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years


a fountain in the middle of a town square

Photo source: Jahanzeb Ahsan (@jahan_photobox) | Unsplash


This month we had Alumni Meeting 2023 at the Heidelberg Institute for Theoretical Studies, or HITS for short. I was very glad to attend this whole-day event and reconnect with my former colleagues as well as researchers currently working in the area of computational biochemistry at HITS. After all, this is the place and the institution where I worked for more than half of my time as a postdoc, where I started regularly contributing code to GROMACS molecular dynamics simulator, and published some of my best papers.

My perspective after two years as a research and teaching assistant at FIDIT

Posted by Vedran Miletić on 2026-04-17 08:00:21 UTC

My perspective after two years as a research and teaching assistant at FIDIT


human statues near white building

Photo source: Darran Shen (@darranshen) | Unsplash


My employment as a research and teaching assistant at the Faculty of Informatics and Digital Technologies (FIDIT for short), University of Rijeka (UniRi), ended last month with the expiration of the time-limited contract I had. This marked almost two full years spent at this institution, and I think this is a good time to look back at everything that happened during that time. Inspired by the recent posts by the PI of my group, I decided to write my perspective on the time that I hope is just the beginning of my academic career.

Rotation of Copr servers credentials

Posted by Fedora Infrastructure Status on 2026-04-17 08:00:00 UTC

Rotation of Copr server credentials. There is a possibility of a short outage.

Affected Services:

copr-frontend - https://copr.fedorainfracloud.org
copr-backend - https://copr-be.cloud.fedoraproject.org/

Friday Links 26-13

Posted by Christof Damian on 2026-04-16 22:00:00 UTC
Two sleeping puppies next to an orange mug and a notebook

I caught up with my YouTube backlog: two good videos from NotJustBikes about cars and oil being bad.

I really liked all the links in the leadership section this week; all are highly recommended.

Quote of the Week
Safe to fail — Organisations need to make it safe to execute a rollback without penalty to the developer. We’ve seen some organisations introduce failure KPIs that demand that all employees (from the CEO down) demonstrate failure at least once per year in order to receive their bonus. The argument is that if they do not demonstrate failure, they are either hiding or not taking enough risks.
#Noprojects
Evan Leybourn and Shane Hastie

Leadership

The Vasa - “Understanding why the Vasa sank also gets to something important about how organisations fail.”

Discussing RTO in my Genesi t-shirt...

Posted by Peter Czanik on 2026-04-16 07:58:48 UTC

This Monday I talked to a couple of friends about work while wearing my Genesi t-shirt: a teacher going back to school after spring break and an IT guy explaining the nightmare of an RTO threat. I love coincidences :-) Why do I say that?

Genesi t-shirt

As I wrote a few years ago about working from home: “After graduating from university, I worked from home for a small US-based company. I never met my boss while working there and met only one of my colleagues at a conference in Brussels. I eventually met my boss some seven years later, when I gave a talk at a conference in Washington, D.C.” The company was Genesi, and that was the work culture that defined me. I received the t-shirt in the photo during my visit to Washington, D.C. Luckily, I'm still mostly living this way, visiting the office 1-2 times a week: working hybrid.

Imagine the contrast I felt when I realized that I was talking to someone who works on a very strict, fixed schedule. For a teacher, vacation is only possible when there is no school, like spring break in Hungary last week. There is a fixed schedule all year round. Compare that to my Genesi years: no regular meetings, communicating by e-mail & chat, and working when it was the right time for me: sometimes in the morning, other days during the night. It was fantastic, especially with small kids. I have been working flexible hours ever since, limited only by meetings.

COVID made remote work less of a niche, sometimes even mandatory. Many people in IT started to work remotely. Most of our work does not require a fixed place or time. Online meetings became the norm, and teams are often not location-based anymore but scattered around the globe. As long as you have an Internet connection and a noise-canceling microphone, you can join a meeting from anywhere, even from the top of a mountain. It is easy to get used to this flexibility and very difficult to give it up.

RTO became a periodic threat. It's a lot cheaper to announce RTO and let people leave voluntarily than to send them away. Quite a few friends write to me every once in a while that they have to return to the office starting in a few weeks' time. Then, a few weeks later, they happily share: “I got an exemption, so they do not want me to leave…”

Wearing my Genesi t-shirt all these problems feel so distant. I hope that it stays this way!

Using LLM for Code Review in storaged Projects

Posted by Vojtěch Trefný on 2026-04-16 07:13:00 UTC

For a couple of months now, we have been experimenting with using AI/LLM tools for code review for our projects under the storaged.org umbrella. We originally started with testing various tools for pull request reviews available on GitHub. For that, we are currently using CodeRabbit.

But the newly added code is not the only code that should be reviewed. Our projects have a long history: the first version of libblockdev was released in 2014 and the history of UDisks goes back to 2008. The code, of course, went through many changes and rewrites since then and many people reviewed it. And we have tests and use static analysis and other tools to ensure code quality. But that doesn’t mean there are no bugs or other issues. We know there are, our users report them somewhat regularly.

Libblockdev together with UDisks contain about 100 thousand lines of C code. Doing a full review of all that code would be a huge undertaking. For a human. A machine can do the review in a matter of minutes.

This is a good place to stop and address some questions and concerns you might have. Yes, LLMs, or what most people call AI these days, are not perfect: they make mistakes, and the results can be unpredictable or completely hallucinated. These are valid issues, but for this specific use case – reviewing existing code written by humans – they are less of a concern. Each issue found was first reviewed by a developer to make sure it is actually a real issue, and the proposed fix was also reviewed before being applied. And the resulting change went through the standard process before being included in the project – a pull request with a review by another developer.

We are also well aware that the code having been reviewed by these tools doesn’t mean there aren’t any issues anymore. But the issues that were found and fixed were real issues and fixing even one issue in any software is always a net positive regardless of whether the issue was found by tests, static analysis, a quality engineer or in this case an LLM/AI tool.

With this out of the way, let’s move to the actual review process, and more importantly, the findings.

The review

The review process itself is pretty simple. We are currently using Anthropic’s Claude with the Opus 4.6 model. We did experiment with other tools and models as well, but so far Opus 4.6 seems to produce the best results (as of March 2026).

We used some basic phrases as prompts, without asking for any specific areas to review. A basic variant of “do code review of the existing code base” was enough for our use case (more about prompts and some unexpected results later). The produced review is usually a formatted numbered list of issues sorted by priority and well documented, with arguments supporting the findings. The short description of the issue was usually good enough for deciding whether the issue is real or a false positive.

Crypto: Wrong strerror_l usage for ioctl errors in OPAL

File: src/plugins/crypto.c:3911

ioctl returns -1 and sets errno, but the code uses strerror_l(-ret, ...) which gives strerror_l(1, ...) (always EPERM).

Fix: Use strerror_l(errno, c_locale).

Example of an issue found in libblockdev

One thing we noticed quite quickly was that one review is not enough. The model seems to stop after a certain number of issues, and after fixing these, running the prompt again produces a new set of issues. We did a number of runs for each project (both with and without a new context), and by the last run, the number of real issues found was close to zero. With a decreasing number of “real” issues in the code, the false positives and “nitpicky” issues in the reports seemed to outnumber the newly discovered issues. This is understandable and kind of expected behavior, and we've definitely seen human reviewers behave similarly when reviewing code that didn't have any glaring issues to point out.

We also experimented with limiting the review scope in the prompt: either to one specific module or plugin, or even to just one file. This approach seems to produce slightly better results within the limited scope, but also produces more false-positive reports as the model doesn’t typically recognize relationships and implications at a global scale. But even with this caveat, this is still very well worth the effort.

Findings

Now, let’s talk about the issues that were found and fixed. In total, in all of the “review runs”, 235 issues were reported: 110 in libblockdev and 125 in UDisks. (We didn’t stop at these projects, our other projects went through the same process, but for the sake of simplicity, we’ll focus only on these two here.)

Out of these, the largest group, 41 in total, was related to improper resource handling, mostly memory leaks, closely followed by various error handling issues. And because none of the developers working on these projects are native speakers, third place belongs to fixing typos, grammar, and documentation, with 27 issues fixed.

The “winners” here show where most of the existing issues in our code are hidden: in various error paths (upon closer inspection, most of the memory leaks were located in the error paths as well). This makes sense. These are hard to cover in automated tests, and finding issues in error paths mostly relies on manual testing, static analysis, and, of course, code review.

Fixing grammar and typos might not seem that important, but these also include public API documentation where a misunderstanding about usage or memory ownership can lead to bugs in other projects using this API. And we even found a few nonsensical error messages that would definitely confuse users when shown.

Crypto: Stale strerror_l(0) produces “Success” in error message

File: src/plugins/crypto.c:2255

ret is 0 (from successful crypt_load), so the error message reads “Label can be set only on LUKS 2 devices: Success”.

Fix: Remove the strerror_l call from this error message (it is a type-check error, not a syscall error).

Again in libblockdev. This very much resembles the famous "Task failed successfully" error message.

And what about CVEs?

Memory management, error paths and grammar. Important issues and it’s good these are being fixed, but what about something more serious?

During our experiments with AI review, we didn’t find any serious flaws or security issues. That, of course, doesn’t mean there are none in our code, but if we knew about issues like these, we wouldn’t be doing this exercise. What we can do is test the tools on a security issue we know exists. Or to be more precise, a security issue that existed.

In February, two related security issues were reported against UDisks – CVE-2026-26103 and CVE-2026-26104. Both were caused by missing authorization checks in some UDisks public functions. UDisks is a daemon that runs with root privileges and allows unprivileged users to perform certain operations when some conditions (based on the operation, the type of device, the type of session the user is logged in to, etc.) are met. We use Polkit for this. And in these two cases, the authorization check was missing from the code. This went through review by two developers, and neither of us noticed. Now the question is, would AI notice?

The short answer is: yes, it would:

Missing PolicyKit Authorization on LUKS Header Backup/Restore

Files:

  • src/udiskslinuxencrypted.c:1356-1442 (handle_header_backup)
  • src/udiskslinuxblock.c:4238-4312 (handle_restore_encrypted_header)

Both D-Bus method handlers perform privileged operations without any PolicyKit authorization check. Compare with other handlers in the same file (lines 524, 784, 974, 1133, 1288) which all call udisks_daemon_util_check_authorization_sync(). There are also no PolicyKit action IDs defined in data/org.freedesktop.UDisks2.policy.in for these operations.

Severity: CRITICAL

Both CVEs clearly pointed out in the review

But even if this result looks impressive, it was not that easy. As I already mentioned, running the reviews multiple times and with slightly different prompts can change the results a lot. And the same happened here. The first two general “do code review” prompts did not uncover these issues. For the third run (with a new context), I explicitly asked to focus on potential security issues, and the tool happily obliged:

I’ll conduct a security-focused code review of the UDisks codebase. Since this is a privileged system daemon, security issues are critical. Let me explore several attack surface areas in parallel.

and one of the spawned agents explicitly set out to “review privilege and auth issues”, which finally surfaced the problem.

So even AI isn’t infallible and doesn’t just spot everything every time. On the other hand, neither did two senior software engineers working on the project when reviewing the pull request introducing this, so maybe we shouldn’t start throwing stones either.

What is “code review” and what exactly is “code”?

As mentioned multiple times already, the exact prompt wording matters. We have found that what we simply refer to as “code review” can lead to different results based on the wording and context.

Asking simply to “Do a code review on this project” usually leads to a high-level description of the project with a few key findings. These might include some specific issues and bugs, but more often the result is just a summary of the overall “code health” of the project. In our case, it praised our projects for clean and readable code and good test coverage. This is nice to read, but doesn’t help much.

Overview

UDisks is a well-maintained ~160-file C project (v2.12.0) implementing a root-running storage management daemon using GLib/GObject and D-Bus. The codebase is in active maintenance with recent work focused on stability and safety hardening rather than new features.

Security (Strong)

For a daemon running as root with direct hardware access, the security posture is excellent:

Code Quality Observations

Positives:

  • Consistent coding style (GNU C, spaces-only, proper emacs modelines)
  • Naming conventions (UDisks prefix, snake_case functions) applied uniformly

Technical Debt:

  • ~80+ TODO/FIXME/XXX comments scattered across the codebase. Notable clusters:

Summary

This is a mature, security-conscious codebase. The main areas for improvement are clearing the TODO backlog and tightening a few intermediate NULL checks in getter chains. No critical issues found.

Code review of UDisks. Some TODOs and FIXMEs, but other than that it's perfect, right?

But if we ask to “Do a code review of the existing code base and report any issues or bugs you find” instead, it actually goes through the code more thoroughly and reports individual issues. And suddenly, our great project now has 16 issues, three of them high priority.

Summary

16 issues found across the daemon core, modules, client library, and CLI tool.

Severity   Count   Key areas
High       3       NULL derefs in partition ops (daemon crash), use-after-free in LVM2, double-close in LSM
Medium     9       Dead code in flag checks, empty DM name, insecure passphrase handling, memory leaks, wrong D-Bus completion
Low        4       Missing bounds checks, minor memory leaks in CLI tool, TOCTTOU race

16 new issues found. Not that perfect after all.

Another interesting thing we found is that “code” in a code review often doesn’t include tests. This might be a problem specific to our projects, where the tests are somewhat separated: for both libblockdev and UDisks the test suite is part of the same repository, but it is written in Python (in contrast to C), and for UDisks the biggest part of the test suite uses the D-Bus API, making it even more distinct. But that doesn’t change the fact that without explicitly asking for a “code review of the test suite”, the tests were completely ignored. When they are included, issues in tests actually make up the second biggest share of the issues found in this experiment. Looks like we might need tests for our tests. And that’s only for libblockdev; the UDisks “test suite code review” is still in progress.

And the same “tests are not code” issue exists for the other non-code parts of the project: documentation, man pages and various YAML config files. This shows that a simple one-line prompt is not enough. For a one-time review of the entire project like the one we did here, writing the prompts manually one by one and trying different tactics and approaches might be good enough, but in the future more detailed instructions will surely be needed.

The obvious next step is to prepare a skill for Claude that would include all these instructions for future code reviews. We don’t intend to do a thorough code review of the entire existing code base for every change (that’s where the AI-assisted code review on pull requests takes over), but we definitely intend to continue with this in the future and are looking forward to new and better models.

Conclusion

Even though this started just as an experiment with the tools that are currently available to us, using Claude and Opus 4.6 for code review showed some really interesting results. We were able to quickly fix more than two hundred issues in our code (and that’s counting only libblockdev and UDisks), and even though these were not critical or security bugs, fixing them will definitely improve the projects. We will continue working with AI/LLM tools and hopefully eliminate even more issues, and also speed up delivering new features – both by implementing them directly with the help of Claude and simply by offloading some of the other work to it.

GDB source-tracking breakpoints

Posted by Fedora Magazine on 2026-04-15 16:33:47 UTC

One of the main abilities of a debugger is setting breakpoints.
GDB: The GNU Project Debugger now introduces an experimental feature
called source-tracking breakpoints that tracks the source line a breakpoint
was set to.

Introduction

Imagine you are debugging: you set breakpoints on a bunch of
source lines, inspect some values, and get ideas about how to change your
code. You edit the source and recompile, but keep your GDB session running
and type run to reload the newly compiled executable. Because you changed
the source, the breakpoint line numbers shifted. Right now, you have to
disable the existing breakpoints and set new ones.

GDB source-tracking breakpoints change this situation. When this feature is
enabled and you set a breakpoint using file:line notation, GDB captures a
small window of the surrounding source code. When you recompile and reload
the executable, GDB adjusts any breakpoints whose lines shifted due to
source changes. This is especially helpful in ad-hoc debug sessions where
you want to keep debugging without manually resetting breakpoints after
each edit-compile cycle.

Setting a source-tracking breakpoint

To enable the source-tracking feature, run:

(gdb) set breakpoint source-tracking enabled on

Set a breakpoint using file:line notation:

(gdb) break myfile.c:42
Breakpoint 1 at 0x401234: file myfile.c, line 42.

GDB now tracks the source around this line. The info breakpoints command
shows whether a breakpoint is tracked:

(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y 0x0000000000401234 in calculate at myfile.c:42
source-tracking enabled (tracking 3 lines around line 42)

Now edit the source — say a few lines are added above the breakpoint,
shifting it from line 42 to line 45. After recompiling and reloading the
executable with run, GDB resets the breakpoint to the new line and displays:

Breakpoint 1 adjusted from line 42 to line 45.

Run info breakpoints again to confirm the new location:

(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y 0x0000000000401256 in calculate at myfile.c:45
source-tracking enabled (tracking 3 lines around line 45)

As you can see, GDB updated the breakpoint line to match the new location.

Limitations

The matching algorithm requires an exact string match of the captured source
lines. Whitespace-only changes or trivial reformatting of the tracked lines
will confuse the matcher and may cause the breakpoint not to be found.

GDB only searches within a 12-line window around the original location. If
the code shifted by more than that — for example, because a large block was
inserted above — the breakpoint will not be found. GDB will keep the
original location and print a warning:

warning: Breakpoint 1 source code not found after reload, keeping original
location.

Source context cannot be captured when a breakpoint is created pending
(e.g., with set breakpoint pending on), because no symbol table is available
yet. When the breakpoint later resolves to a location, it will not be
source-tracked.

Source tracking is not supported for ranged breakpoints (set with
break-range).

Breakpoints on inline functions that expand to multiple locations are not
source-tracked, as each location may have moved differently.
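The capture-and-match behavior described above can be sketched in a few lines of Python. This is only an illustration of the documented behavior (3 tracked lines, exact string matching, a 12-line search window), not GDB's actual implementation:

```python
def capture_context(lines, lineno, radius=1):
    """Capture a small window of source lines around a 1-indexed line.

    radius=1 captures up to 3 lines, matching the
    "tracking 3 lines around line N" output of `info breakpoints`.
    """
    lo = max(0, lineno - 1 - radius)
    return lines[lo:lineno + radius]


def relocate(new_lines, old_lineno, context, radius=1, search_radius=12):
    """Find the captured context in the edited file.

    Requires an exact string match (so whitespace-only edits defeat it)
    and only searches within `search_radius` lines of the original
    location. Returns the new 1-indexed line, or None if not found.
    Assumes the breakpoint was not at the very top of the file.
    """
    n = len(context)
    # Try the closest offsets first, mirroring "adjusted from line A to B".
    for offset in sorted(range(-search_radius, search_radius + 1), key=abs):
        new_lineno = old_lineno + offset
        start = new_lineno - 1 - radius
        if start < 0 or start + n > len(new_lines):
            continue
        if new_lines[start:start + n] == context:
            return new_lineno
    return None  # "source code not found after reload"
```

With the article's scenario, a breakpoint with a few lines inserted above it, relocate finds the shifted line; an insertion larger than the 12-line window (or a whitespace-only change to the tracked lines) yields None, and the breakpoint keeps its original location.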

How to try this experimental feature

This feature is not yet available in a stable GDB release. There are two
ways to try it.

Install from COPR (for Fedora users)

A pre-built package is available through a COPR repository. Enable it and
install:

sudo dnf copr enable ahajkova/GDB-source-tracking-breakpoints
sudo dnf upgrade gdb

To disable the repository again after testing:

sudo dnf copr disable ahajkova/GDB-source-tracking-breakpoints

The COPR project page is at:
https://copr.fedorainfracloud.org/coprs/ahajkova/GDB-source-tracking-breakpoints/

Build from source

  1. Clone the GDB repository:
    git clone git://sourceware.org/git/binutils-gdb.git
    cd binutils-gdb
  2. Download and apply the patch from the upstream mailing list:
    https://sourceware.org/pipermail/gdb-patches/2026-April/226349.html
  3. Build GDB:
    mkdir build && cd build
    ../configure --prefix=/usr/local
    make -j$(nproc) all-gdb
  4. Run the newly built GDB:
    ./gdb/gdb

Conclusion

GDB source-tracking breakpoints are an experimental feature currently under
upstream review and not yet available in a stable GDB release. The Setting
Breakpoints section of the GDB manual,
https://sourceware.org/gdb/current/onlinedocs/gdb.html/Set-Breaks.html,
covers all available breakpoint commands. If you try this feature out and
hit any kind of unexpected behavior, feedback is very welcome – you can
follow and respond to the upstream patch discussion on the GDB mailing list
at https://sourceware.org/pipermail/gdb-patches/2026-April/226349.html.

Stop building agents, start harnessing Goose

Posted by Adam Miller on 2026-04-15 14:00:00 UTC

Stop building agents, start harnessing Goose

There's a disconnect in the AI Engineering space right now, and I think the open source community has already risen to the occasion to bridge the gap, but I don't see any signal that this is well understood or widely adopted. The industry is overwhelmingly focused on building agents from scratch via custom frameworks, bespoke orchestration layers, hand-rolled tool-calling loops, and so on, when many of the hard problems in that layer of the stack have already been solved. The building block exists. It's open source. It's called goose.

I think for over 90% of use cases, if you're spending your time implementing an agent from scratch, you're already behind or potentially have already lost the race. My hypothesis is that Goose is the building block. It's the small, composable thing that becomes powerful when you wrap it in what the industry is rapidly agreeing is called the Harness.

The composable agent you didn't know you needed

Most people hear "goose" and think either "another AI coding assistant" or "another AI chatbot" (depending on how they came across goose and how they use it). That misunderstanding is the problem. Goose is not a coding assistant. It is not a chatbot. It is not a Claude Code competitor, though it can be configured to act as all of those things. At its core, goose is a small, configurable agent runtime with an extension-based architecture that can be composed into virtually anything.

It operates on three components:

  • Interface: Desktop app or CLI/TUI that collects user input and displays output.

  • Agent: The core logic engine that manages the interactive loop: sending requests to LLM providers, orchestrating tool calls, and handling context revision.

  • Extensions: Pluggable components built on the Model Context Protocol (MCP) that provide specific tools and capabilities.

A small core with a lot of power delivered through native extensions, external plugins, and configuration options. The agent core itself is minimal: an interactive loop plus context management. That's it. All capabilities come through the extension system.

You can strip goose down to nothing. No external capabilities. No tool calling. No skills. No plugins. You can even configure it so it cannot access the internet, only the inference service to talk to the model (which can be local). At that point, it's a plain chatbot with no agency whatsoever.

Or you can go the other direction entirely.

From zero to everything

Configure goose with the Developer extension, Computer Controller, Memory, and a handful of MCP servers and you have a working replacement for Claude Code, Codex, Gemini CLI, OpenCode, or any other similar tool. Same capabilities, no vendor lock-in, and you choose your own inference provider from over 25 options (at the time of this writing), including Anthropic, OpenAI, Google Gemini, Groq, Mistral, and more. You can run fully local inference via goose's native inference provider, or offload to Ollama, Ramalama, LM Studio, or Docker Model Runner. The full list of providers is in the goose documentation.

If you put this together, you're well on your way to unlocking the full potential, but you're just getting started.

Recipes: reproducible, composable workflows

Where goose gets interesting is its composition model. Goose Recipes are reusable, shareable workflow definitions that package together instructions, extensions, parameters, provider settings, retry logic, and structured response schemas. A recipe can be as simple as a single prompt with a specific extension configuration. Alternatively it can be sophisticated, composed of subrecipes where each subrecipe is effectively another goose agent with its own configuration: its own extensions, plugins, inference provider, system prompt, and skills.

Subrecipes run in isolated sessions with no shared conversation history, memory, or state. The main recipe's agent decides when to invoke them, can run them sequentially or in parallel, and chains their outputs through conversation context. Compositional agent orchestration without writing a single line of framework code.

You're not writing an orchestration layer. You're not building a DAG executor. You're not implementing tool-calling logic. You're writing YAML that describes what you want done and goose handles the how.
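As a concrete illustration, a recipe that delegates part of its work to a subrecipe might look roughly like this. The field names here are illustrative sketches, not the authoritative schema; check the goose recipe documentation for the exact format:

```yaml
# Hypothetical sketch of a goose recipe with one subrecipe.
# Field names are illustrative, not the authoritative schema.
title: docs-refresh
description: Update project documentation after API changes
instructions: >
  Scan the public API headers, then delegate the changelog
  summary to the subrecipe and merge its output into README.md.
extensions:
  - developer            # file and shell access for the main agent
sub_recipes:
  - name: changelog-summary
    path: ./changelog-summary.yaml   # its own extensions, provider, prompt
```

The point is the shape, not the specific fields: the main recipe describes the goal and names its subrecipes, and goose decides when and how to invoke them.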

Goosetown: multi-agent orchestration, no framework required

If you want to take this all the way to the extreme, consider the fully autonomous software factory Steve Yegge outlines in his now-infamous blog post, "Welcome to Gas Town", and implements via his Gastown project. Gastown is a multi-agent workspace manager for orchestrating Claude Code, GitHub Copilot, Codex, Gemini, and other AI agents with persistent work tracking. It's a Go application with concepts like Mayors, Rigs, Polecats, Hooks, Convoys, and Beads. It's a real engineering effort to coordinate 20-30 agents on a codebase.

You can do exactly that by using goose as the building block. The open source community did it. They looked at Gastown and re-implemented its core concepts using goose's native capabilities. The result is Goosetown. Goosetown is a multi-agent coordination system that orchestrates "flocks" of AI agents (researchers, writers, workers, reviewers) to decompose and execute complex tasks. Goosetown uses goose's subagent delegation, skills system for role-based specialization, inter-agent communication via a broadcast channel called the "Town Wall," and multi-model support for adversarial cross-reviews where different LLMs review each other's work.

If you look at the code, it's just a few flat files, some shell scripts, some skills markdown, and some agent definitions.

All of this built on top of goose. Not alongside it. Not wrapping it. On it. Using the primitives goose already provides: skills, subagents, extensions, and recipes.

Goose as a service

Goose also runs as a daemon, exposing itself to other applications via the Agent Client Protocol (ACP) (a standardized JSON-RPC protocol developed by Zed Industries). ACP does for AI agents what LSP did for language servers. ACP decouples agents from editors and frontends, so goose can be embedded directly into Zed, JetBrains, Neovim, or any ACP-compatible environment.

The composability runs both directions. Goose can also consume other ACP agents as providers, routing its LLM calls through Claude Code, Codex, or Gemini while keeping its own extension ecosystem and UI. As Adrian Cole wrote in his blog post "How to Break Up with Your Agent":

"Pick the UI you like. Pick the agent you like. They don't have to be the same thing."

This bidirectional composability — goose as a component and goose as an orchestrator — is what separates it from other agent tools.

Open governance, no vendor lock-in

Goose is fully open source under the leadership of the Agentic AI Foundation (AAIF), which provides vendor-neutral governance under the umbrella of the Linux Foundation. AAIF also hosts the Model Context Protocol (MCP) itself, so the standards goose builds on are governed with the same neutrality.

This matters. When you build your workflows on goose, you're building on a foundation governed by a neutral body with a Governing Board, a Technical Committee, and a transparent contribution model. This is the same open, collaborative, and neutral model that made Linux and Kubernetes into reliable core components of the entire software industry, and it's the same reason I think it's worth investing time and energy into.

It's no secret I'm an open source nerd, and goose checks all the boxes.

The harness is the thing

We've collectively been on a journey. First it was Prompt Engineering, crafting the right words to get the right output. Then it was Context Engineering, making sure the model has the right information at the right time. Now, it seems we've arrived at the next turn in this adventure we all find ourselves in: Harness Engineering.

Ralph Bean nails this in his blog post "What Even Is the Harness?". The harness is the enablement layer. It's everything you add to the agent runtime that gives you control over your outcomes:

"Harness — the enablement layer. AGENTS.md files, skills, custom tools, hand-crafted linters, system prompts for task-oriented agents. These are the things you engineer, iteratively, to increase the chances the agent gets things right. This is what Birgitta Böckeler calls the user harness and is where Mitchell Hashimoto's attention lives."

—Ralph Bean

Read that again. The harness is not the agent. The harness is what you add to the agent. The AGENTS.md files. The skills. The custom MCP tools. The hand-crafted linters. The system prompts. The recipes and subrecipes. The extension configurations. The provider choices. The permission policies.

This is where your engineering effort belongs. Not in building the interactive loop, or implementing tool-calling JSON parsing, or writing context window management, or building MCP client libraries. Goose already does all of that and does so with the full backing of the AAIF, the Linux Foundation, and a vibrant open source community.

In most cases, and I'd argue almost all cases, your job is to build the harness.

The 90% argument

I think for over 90% of use cases where someone is building an agent today, goose is a better starting point than a blank text editor or a vibe coding session (are we calling it Agentic Engineering yet?).

If you need a coding assistant, goose does that. If you need a research agent, configure goose with web scraping extensions and a research-focused recipe or skill. If you need a CI/CD bot, run goose in daemon mode with ACP or orchestrate it with scripts/recipes in your CI job runner of choice. If you need multi-agent orchestration, compose goose instances with subrecipes or build a Goosetown-style flock. If you need local-only, air-gapped inference, point goose at Ollama, Ramalama, LM Studio, or its native inference provider. If you need to integrate with your existing editor, goose speaks ACP natively or you can set GOOSE_PROMPT_EDITOR and run the whole flow from inside your editor of choice. If you need vendor-neutral governance, it's under the Linux Foundation umbrella via AAIF.

The remaining 10%? Those are the genuinely novel agent architectures, the research projects pushing boundaries, the use cases where you do need to control every byte of the agent loop. For those, build from scratch. For everything else, build the harness. I'm not saying you can't build agents from scratch. I'm simply suggesting that you probably don't need to.

A call to action

If you're a professional technologist or an aspiring AI Engineer, I'd encourage you to shift your mental model. Stop thinking about building agents. Start thinking about harnessing them. At this point in the AI hype cycle, the agent is mature enough to be the commodity. The harness is your competitive advantage.

Install goose. Strip it down to nothing and build it back up. Write a recipe. Compose some subrecipes. Add skills. Configure extensions. Point it at different providers. Run it as a daemon. Embed it in your editor. Build a flock. Engineer the harness.

Go forth and harness your agents.

Happy hacking. <3

Streaming syslog-ng data to your lakehouse using OpenTelemetry

Posted by Peter Czanik on 2026-04-15 12:12:22 UTC

Version 4.11.0 of syslog-ng contains contributions from Databricks related to OAuth2 authentication. Recently, they published a blog about how this enables their customers to send logs to their data lake using syslog-ng and the OpenTelemetry protocol.

The syslog-ng project received two contributions from Databricks in the last weeks of 2025. The first one made the already existing OAuth2 support generic and extensible, so it can be used anywhere, not just with Microsoft Azure (but of course, Azure compatibility was preserved). The next pull request built on the first one and enabled OAuth2 support for gRPC-based destinations, like OpenTelemetry, Loki, BigQuery, PubSub, ClickHouse, etc. These changes were released as part of the syslog-ng 4.11.0 release. You can read more about these in the release notes at https://github.com/syslog-ng/syslog-ng/releases/tag/syslog-ng-4.11.0

Besides an excellent overview of syslog-ng, the related Databricks blog also provides step-by-step instructions on how to use syslog-ng with their product. You can read it at: https://community.databricks.com/t5/technical-blog/streaming-syslog-ng-data-to-your-lakehouse-powered-by-zerobus/ba-p/153979

syslog-ng logo

Originally published at https://www.syslog-ng.com/community/b/blog/posts/streaming-syslog-ng-data-to-your-lakehouse-using-opentelemetry

The best way to setup a kanban board

Posted by Ben Cotton on 2026-04-15 12:00:00 UTC

The best way to set up a kanban board is…whatever way works best for you. I have at least two distinct styles of board setup across three tools (yes, I have a problem) because that’s what works best for me. How you set up your boards is a matter of style. The most important thing is to set it up in such a way that you’ll actually use it — an unused board is full of lies and can cause confusion with your collaborators. Since I often get asked for help on this, it seems worth writing down some considerations.

Direction of flow

Most tools I’ve used assume a left-to-right flow. I recently switched to using a right-to-left flow after reading Philippe Bourgau’s post on the topic. Using right-to-left means you’re starting with the stuff that’s in progress instead of the backlog. You “pull” cards instead of “pushing” them. I made the suggestion at work, and folks seem to generally like the change. The main problem is that a lot of tools I’ve used treat the leftmost column as the default starting point, so the card creation experience involves an extra click or two.

Number of columns

This is one that I’ve often seen people overthink. In the simplest configuration, you have “to do”, “doing”, and “done.” That’s often enough. Simplicity is good. On the other end, one of the boards I use at work has something like a dozen columns. Not every card flows through every column, so it’s not as wild as it sounds. Much of the column sprawl is because the tool we use doesn’t support swimlanes. I would normally not suggest double-digit columns, but it works well in this case.

A good rule for deciding whether a column is necessary: if it never has any cards, or cards only live in it briefly, it’s not worth keeping.

Examples

Here are a few examples of how I have different boards set up.

The meal planning board my wife and I use to plan the week’s dinner has three columns: ideas, need ingredients, and planned. “Ideas” is the backlog, “need ingredients” means we intend to cook it but first we have to go grocery shopping, and “planned” is ready to cook.

The board I use to track laundry has five columns: dirty (the backlog), ready (sorted), washing (duh), drying (also duh), and folding (triple duh). The folding column can probably go away, but sometimes the basket sits for a few hours or until the next day.

My board for Duck Alignment Academy has five columns (plus one). “New” is for ideas that I haven’t really fleshed out yet. “Ready” is for posts that I could sit down and write. “In progress” is for posts that I am currently writing. “Scheduled” is for completed posts waiting to publish (I wish this had more cards in it). “Done” is for posts that have published. There’s an extra “archived” column that I move cards to after I send that month’s Duck Alignment Academy newsletter. (I have a column because the tool doesn’t support archiving cards directly.)

Extra fields/metadata

Most tools let you apply labels, add due dates, create custom fields, and so on. I’ve certainly made use of that when setting up boards, but I find myself not really paying attention to it most of the time. There’s often an urge to design a system so that everything will be perfectly organized. But in the same way that a backup that you never test restores from is not reliable, metadata that you only write is not useful. Metadata is so easy to overoptimize, because it requires getting everyone using the board to buy in and then also to have a reason to use it. I wrote a similar piece about issue labels earlier this year.

In my experience, it’s almost always better to ignore metadata when initially creating a board. Only when you can identify a concrete problem that you’re actually experiencing should you add metadata.

A great example of this is in my Duck Alignment Academy board. When I was first creating the board, I was also creating the website. I had cards for tasks like domain registration, hosting setup, page creation, and so on. In the four-plus years since I launched the site, the cards have almost exclusively been blog posts. The “blog post” label that I created doesn’t add a lot of value. What does add value are two custom fields I added: “URL” and “description.” I put the post’s URL and the excerpt (that gets used for social media preview and the like) into those two fields so that later on when I go to add them to the newsletter or share them elsewhere, they’re available and consistent.

Another use that is actually useful is on my meal planning board. I have labels for the primary protein in a meal, which helps me see at a glance if we’ve planned chicken five days in a row.

Work in progress limits

If your tool supports setting work in progress limits, I recommend that you do. If nothing else, it forces you to be honest about what you’re actually working on and what you’ve set aside for one reason or another. I’ve found that WIP limits seem to work better on single-user boards, but I haven’t used them much on collaborative boards.

Automate and integrate

The more your board does the boring work for you, the more likely you are to keep it up to date. If it supports auto-archiving cards when they’re complete, set that up. If you can tie it in to the tools you’re already using, do that. (The kanban board available in GitHub issues works pretty well in that regard.) Make the board a hub of your work and you’ll get use out of it. Make the board a chore that you have to go update and you won’t.

This post’s featured photo by airfocus on Unsplash.

The post The best way to setup a kanban board appeared first on Duck Alignment Academy.

Matrix server maintenance

Posted by Fedora Infrastructure Status on 2026-04-14 11:15:00 UTC

Element Matrix Services is performing scheduled maintenance on our matrix server (fedora.im).

Affected Services:

  • chat.fedoraproject.org
  • fedora.im
  • matrix services

Things I Read: 1-13 April 2026

Posted by Brian (bex) Exelbierd on 2026-04-14 11:00:00 UTC

I’ve long been inspired by the recently read lists published by my friends Ben Cotton and Vadim Rutkovsky. In that spirit, here’s a list of things I found interesting, or that stuck with me for other reasons.

This period was marked by some trips down memory lane. The Supreme Court of the US is always in the news, these days usually for less than pleasant reasons, and this time it was joined by the fast-food burger chain Wendy’s.

Disclaimer: I work at Microsoft on upstream Linux in Azure. These are my personal notes and opinions.

Memory Lane

The Surprising Reason Neil Gorsuch Has Been So Good on Native Rights

I searched for this piece after a random Mastodon toot referred to Justice Gorsuch as being "ride or die" for Native Americans. I knew I wouldn't find anything that would change my opinion of his rulings, but what I did find was still unexpected. The original framers of the Constitution were very specific (weirdly so for the time period) about Native rights, and as an originalist Gorsuch supports this view. Even a broken clock is right twice a day ...

What the Hell Happened to Wendy’s?

It’s also increasingly difficult for any brand to keep anyone’s attention in a manic and incoherent media ecosystem where the aura that a CEO projects while biting into a burger on camera is seemingly more important than the quality of the burger itself.

Wendy's has always had amazing chili (and they provide hot sauce!) and a spicy chicken sandwich that couldn't be beaten. While I wouldn't miss their fries, I don't honestly care if the McDonald's CEO looks good eating a hamburger or not.

LLMs and AI

THE 2028 GLOBAL INTELLIGENCE CRISIS

What if our AI bullishness continues to be right...and what if that’s actually bearish?

This think piece surfaced some ideas about how success with LLMs, and to a degree AGI, could still turn into a downward cycle. I hadn't thought about the threat of disintermediation multiplied by the loss of per-seat revenue. It also makes recent comments by Microsoft about LLM agents needing seat licenses make more sense.

I used AI. It worked. I hated it.

the tool requires expertise to validate, but its use diminishes expertise and stunts its growth. How does one become an expert? There are no shortcuts; there is only continuous hard work and dedication.

There is a continuous drumbeat about the loss of the expertise required to operate LLMs effectively, while simultaneously insisting that the tools are failures unless completely untrained people can use them without trouble. This isn't a new problem. Almost all technological innovation comes at the cost of higher abstraction levels that require a deeper understanding of the space while also eliminating some core knowledge of how things work. I am firmly of the opinion that using any tool effectively requires training, and that we have a long history of doing a bad job training people on computing tools. But I also don't believe upping the abstraction level automatically means we can't train a new generation to be effective. Airline pilots, accountants and machinists all did it. Why can't developers?

Other Technology

Self hosting as much of my online presence as practical

[M]y ISP doesn’t guarantee a static IPv4 [therefore I am] running a Wireguard link between a box that sits in a cupboard in my living room and the smallest OVH instance I can

My ISP literally routes me no inbound internet except in response to an outbound request. It is simultaneously refreshing and frustrating. I had long debated trying to pull off a link like this with Tailscale but hadn't bothered. I may now be inspired to try.
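For anyone weighing the same setup, the usual shape of such a tunnel (a hypothetical sketch with placeholder keys, addresses, and hostname, not anyone's real config) has the home box dial out to the VPS, which satisfies an ISP that only passes reply traffic:

```ini
# Hypothetical /etc/wireguard/wg0.conf on the home box. The VPS owns the
# public address; the home box initiates the connection outbound, so no
# inbound-from-the-internet routing is ever needed on the home side.
[Interface]
PrivateKey = <home-box-private-key>
Address    = 10.10.0.2/24

[Peer]
PublicKey           = <vps-public-key>
Endpoint            = vps.example.com:51820
AllowedIPs          = 10.10.0.1/32
PersistentKeepalive = 25   # keep the NAT/firewall mapping alive
```

The VPS then forwards public traffic down the tunnel to 10.10.0.2; Tailscale automates essentially the same topology.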

I tested three Windows laptops in the MacBook Neo’s price range — there’s no contest

Despite years of rumors, the MacBook Neo still seemed to take the Windows world by surprise.

Is literally anyone surprised by the lack of actual consumer focus in the low-end laptop market? The problem here has never been the "Windows Tax." It's the unwillingness to do the integration work required to make the hardware and the OS behave like a coherent product. OEMs and Microsoft could do this, but the OEMs won't force their suppliers to meet that bar.

The Nail Test: Why this $54 billion innovation is terrifying Western auto executives

If you can reproduce the failure one hundred times, identically, then and only then have you understood the mechanism.

It's interesting to see failure-based TDD in the industrial world. I knew China was big on EVs and batteries, but this specific engineering drive was one I hadn't seen written up so directly before. It is a Fast Company article so the quality may drift, but it's a solid read.

Fun

🎓 On Geldings and the 'Natural' Social Order of Horses

It also, if you think about it for more than a second, cannot possibly be how natural horse societies work.

Almost everything Eleanor writes is amazing, and this is a great piece on something I never thought I'd be interested in.

Even if horses aren't your thing, her comments on 'bachelor bands' and the place they hold in non-breeding male horse life are worth reading. The introduction of geldings by humans changed how horses interact and socialize. It also reminded me how badly we've warped the ways young men are taught to socialize in an age where garbage like the Manosphere is given a platform.

Test Your Body Awareness

If you're like me, a person approaching a certain age, well ... you need to know. One of the best personal changes I have made is going to the gym twice a week for almost a year now. I am privileged to be able to have a personal trainer. It isn't just the accountability, it is the ability to have an expert in a domain where I am not an expert do the thinking. She directs, I lift.

Nominate Your Fedora Heroes: Mentor and Contributor Recognition 2026

Posted by Fedora Magazine on 2026-04-14 09:30:00 UTC
Mentor and Contributor Recognition 2026 - Generated using Gemini Nano Banana Pro by Akashdeep Dhar

It’s time to show our appreciation of the amazing contributors who help shape the Fedora community.

The Fedora Project thrives through the devotion, guidance, and tireless drive of its contributors. From developing test cases to onboarding contributors, from technical writing to coordinating events, these vital champions ensure that the community flourishes. In coordination with the Fedora Mentor Summit 2026, we will return to Flock To Fedora 2026 to announce the winners. This wiki page reflects the deep gratitude and careful thought behind this community recognition program.

As we prepare to spotlight exceptional mentors and contributors across the Fedora Project, we invite you to help us appreciate the amazing contributors who help shape the community. Whether it is a veteran mentor who helped you begin your journey or a contributor whose efforts have truly reshaped the community’s landscape, now is the moment to celebrate them! Discover more about the nomination guidelines and submit your entry using the link provided below:

👉 Find more information here: https://fedoraproject.org/wiki/Contributor_Recognition_Program_2026
👉 Submit your nominations here: https://forms.gle/mBAVKw4qLu14R5YY7

🗓 Deadline: 15th May 2026

Let us appreciate the amazing contributors who help shape the community. Your nomination could be the recognition that enables them to do more – and a moment of achievement for the entire community.

Rustbucket

Posted by Tony Asleson on 2026-04-14 01:38:18 UTC

Sorting a terabyte of data in the late 1990s meant serious hardware, serious planning, and probably a serious budget approval process. Today you can do it on a workstation before lunch. I wanted to know how fast, so I wrote rustbucket to find out.

It’s a two-phase external sort implemented in Rust, built around io_uring, and named for reasons that should be obvious to anyone who has spent time with either Rust or storage systems.

RHEL 10 (GNOME 47) Accessibility Conformance Report

Posted by Felipe Borges on 2026-04-13 10:24:49 UTC

Red Hat just published the Accessibility Conformance Report (ACR) for Red Hat Enterprise Linux 10.

Accessibility Conformance Reports document how our software measures up against accessibility standards like WCAG and Section 508. Since RHEL 10 is built on GNOME 47, this report is a good look at how our stack handles accessibility needs, from screen readers to keyboard navigation.

Getting a desktop environment to meet these requirements is a huge task, and it’s only possible because of the work done by our community in projects like Orca, GTK, Libadwaita, Mutter, GNOME Shell, the core apps, and more.

Kudos to everyone in the GNOME project who cares about improving accessibility. We all know there’s a long way to go before desktop computing is fully accessible to everyone, but we are surely working on it.

If you’re curious about the state of accessibility in the 47 release or how these audits work, you can find the full PDF here.

misc fedora bits second week of april 2026

Posted by Kevin Fenzi on 2026-04-12 17:14:00 UTC
Scrye into the crystal ball

Another saturday and... oh wait, it's sunday! I was away almost all day yesterday (morning at https://beaverbarcamp.org/ and afternoon/evening visiting family), so this will be a day late. :)

This week we were still in Fedora 44 final freeze (we canceled the go/no-go on thursday because there were still unaddressed blockers), so there was a lot of catching up on old issues, docs pull requests, and the like.

There were a few things that stood out however:

Matrix bots learn new tricks

Diego wrote up a pull request to adjust our matrix bot to point to forge.fedoraproject.org for things that have moved there from pagure.io ( https://github.com/fedora-infra/maubot-fedora/pull/150 ), so I merged it, figured out how to cut a release there, and figured out how to deploy it first to staging and then to production.

So now !epel, !ticket, and !releng should all work for those trackers, and !forge org repo should work as a generic pointer to any forge project.

Hopefully this will make meetings and discussion on matrix nicer.

Fixed a websocket proxy issue with openqa

We had to move openqa behind anubis after the scrapers discovered it and were making it unusable. Unfortunately, openqa has a mode for updating test screens that uses websockets, and those were not correctly passing through anubis, so that functionality was broken.

I was going to go look at the apache docs and see if I could track down what needed to be set to do that, but decided to just ride the ai wave and ask an ai agent about it.

It snarfed in the config, thought about it for a bit, then spewed out a solution. The solution was largely for older apache versions (I hadn't told it which apache version we were running), but at the end it correctly noted that on newer versions passing "upgrade=websocket" to the proxy directive would fix it.

It did. It definitely saved me time poking through the apache docs.
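For reference, the shape of that fix in httpd config looks roughly like this (a hypothetical sketch; the path and backend are placeholders, not the actual openqa proxy config):

```apache
# Hypothetical reverse-proxy stanza. On Apache 2.4.47 and newer, a single
# ProxyPass with upgrade=websocket upgrades matching requests to a
# websocket tunnel; older versions needed mod_rewrite workarounds instead.
ProxyPass        "/some/ws/path/" "http://backend.example.com:9528/some/ws/path/" upgrade=websocket
ProxyPassReverse "/some/ws/path/" "http://backend.example.com:9528/some/ws/path/"
```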

a few new builders soon

We have had a number of machines moved from our old datacenter sitting around that we wanted to repurpose as builders. It's not been a very high priority to get them set up, but what better time than a freeze to get them online?

So, I got 3 of them ready, which involved updating a bunch of firmware on them, installing them, configuring networking, etc.

Will need a small freeze break to add them into ansible and finish them up, but then there should be 3 more buildhw-x86 builders.

Fedora 44 upgrades

Since I was catching up on things, I decided to go ahead and upgrade my main server and its vmhost to fedora 44 this morning.

Everything went super fast and painlessly, aside from one issue with matrix-synapse (the f43 package is newer than the f44 one, so the f44 build would not work with my config/database). For now I just "downgraded" (or is that "sidegraded"?) to the f43 one. It seems to have a pretty nasty tangle of rust crate version changes, so it might not be easy to sort out quickly.

Family Vacation time

The week after next (the week of april 20th) I will be away all week. I might look in on matrix/email some, but don't count on it. I'll be on a family vacation in hawaii. Please file tickets and be kind to my co-workers, who are perfectly capable of handling anything in my absence. :)

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116392979078747195

HowTo: Install and pin a specific version with Flatpak

Posted by Rénich Bon Ćirić on 2026-04-10 20:30:00 UTC

Has it ever happened to you that an update of your favorite software comes out and you just can't jump to it? Well, it turns out Bitwig Studio reached version 6 and, well, I was out of work and short on cash for the upgrade.

I use it via Flatpak because, honestly, I myself initiated, proposed, and donated the first implementation on Flathub. Anyway, I had to stay on version 5.x, and I had never needed to pin a version with Flatpak before.

I asked an LLM how to do it (why would I lie to you?) and it gave me the answer. It's this easy:

Step by step

Check the log:

First, see which versions are available in Flathub's history.

flatpak remote-info --log flathub com.bitwig.BitwigStudio
Install the desired version:

Copy the hash of the commit you're interested in and install it.

sudo flatpak update --commit=00535d779d2ebead55f129a406ed819064b7d3a28bd638aa25a0c8dda919197e com.bitwig.BitwigStudio
Pin (mask):

This is the most important part, so that it doesn't get updated by mistake the next time you run a general update.

flatpak mask com.bitwig.BitwigStudio

Nothing to it, right? With this you can rest easy knowing your workflow won't break because of an update you didn't ask for... or rather, one you can't afford. ;D
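If you later want to jump forward after all, the pin is just as easy to undo with the companion command (assuming a reasonably recent flatpak; same app ID as above):

```
flatpak mask --remove com.bitwig.BitwigStudio
flatpak update com.bitwig.BitwigStudio
```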

On proprietary software

Honestly, I'm not a big fan of these things. I always recommend staying up to date but, with proprietary software that you pay to use, I had no choice. The Bitwig folks don't maintain previous versions in any simple way; you have to keep up with them, like it or not.

Note

Many will see it as nonsense, or a betrayal, to use non-free software on Fedora. In my case, I spent several years trying to record my songs with free software. I never could. Not for lack of software but, more than anything, for lack of infrastructure.

To record with pure open source like Ardour, which is a beauty, you need a pro studio: mics, a drum kit, a drummer, and everyone with time to spare. Since I'm terrible at coordinating people, if I don't do things in the moment, they never get done. Bitwig Studio solves that for me because it already ships with the whole arsenal: samples, instruments, and FX of really good quality.

At least I don't have to use Windows. That's far more than I had in the 2000s with good old Renoise or LMMS. Workflow matters a lot, and although GNU/Linux has incredible plugins (like the Calf ones in LV2), putting together a stable session was an impractical mess.

Conclusion

There you have it, folks. I hope this mini-howto helps someone avoid struggling with Flatpak versions. Keep on making music!

Community Update – Week 15 2026

Posted by Fedora Community Blog on 2026-04-10 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora Project.

Week: 06 – 10 Apr 2026

Fedora Infrastructure

This team is taking care of the day-to-day business regarding Fedora Infrastructure.
It's responsible for services running in the Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of the day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It's responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

QE

This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.

  • General release validation and bug reporting work throughout the whole week, in preparation for F44 Final.
  • Automated a new openQA test for basic IPv4 and IPv6 connectivity (https://forge.fedoraproject.org/quality/os-autoinst-distri-fedora/pulls/504)
  • Blockerbugs migration to Forge almost ready for staging deployment
  • Ongoing collaboration with OSCI team to improve and rationalize generic test pipelines
  • Contributed Kiwi and Koji logging improvements to Pungi as part of compose-critical script work

Forgejo

This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.

UX

This team is working on improving user experience, providing artwork, usability,
and general design services to the Fedora Project.

  • Madeline submitted a proposal for All Things Open

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update – Week 15 2026 appeared first on Fedora Community Blog.

⚙️ PHP version 8.4.20 and 8.5.5

Posted by Remi Collet on 2026-04-10 05:48:00 UTC

RPMs of PHP version 8.5.5 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.20 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

And soon in the official updates:

⚠️ To be noticed:

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.26 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

 

⚙️ PHP version 8.4.19 and 8.5.4

Posted by Remi Collet on 2026-03-13 05:32:00 UTC

RPMs of PHP version 8.5.4 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.19 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

And soon in the official updates:

⚠️ To be noticed:

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.26 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

 

Friday Links 26-12

Posted by Christof Damian on 2026-04-09 22:00:00 UTC
Open sandwich with vegetarian coronation chicken

Two weeks of links. A lot of AI-related articles again. I liked the paper about using chatbots in leadership, and the one about the YAML of the mind. If you want a podcast, check out the one about night trains.

Quote of the Week
The technologies we use to try to “get on top of everything” always fail us, in the end, because they increase the size of the “everything” of which we’re trying to get on top.
Four Thousand Weeks
Oliver Burkeman

Leadership

We-ness: The secret cause of Psychological Safety [Podcast] - a very interesting deep dive into how feeling like a team can increase psychological safety.

Securing your servers behind Cloudflare: Authenticated Origin Pulls

Posted by Guillaume Kulakowski on 2026-04-09 18:24:00 UTC
Cloudflare effectively protects your services… unless your server remains directly reachable. In this article, we set up Authenticated Origin Pulls to guarantee that only requests coming from Cloudflare can reach your infrastructure, with two levels of security.

Handling a PR disaster for your project

Posted by Ben Cotton on 2026-04-08 12:00:00 UTC

I want to say up front that the point of this post is not to disparage Trivy or its maintainers. They’ve had a rough few weeks and I feel for them. I only discuss Trivy as a recent example of a bad day at the office.

The saying “there’s no such thing as bad publicity” always felt a little gross to me. There are a lot of reasons you might get noticed that are bad, and it should feel bad to do bad things. But maybe there’s something to it. If you handle a PR disaster well, you might come out ahead (although you’d still probably prefer to not deal with it in the first place).

Bad publicity is good?

As you may know, the Trivy project fell victim to an attack last month. The compromise affected not just Trivy, but its sponsoring company and many downstream projects and companies. It was — and continues to be — a big deal.

Out of curiosity, I looked to see whether people were switching from Trivy to other projects, using GitHub stars as a proxy for interest. Yes, GitHub stars are meaningless. But I figured a relative change might indicate interest in alternatives. Much to my surprise, Trivy’s star count increased pretty dramatically post-compromise. Syft and cdxgen, which seem to be the main alternatives for SBOM generation, saw no such bump. Of course, this doesn’t necessarily mean that people aren’t shifting away from Trivy. But I expected to see the opposite of what the star counts show.

Graph of GitHub stars over time for three projects. Trivy shows a steady increase with a sharp uptick in the last few months. Syft and cdxgen show slower-but-still-steady increases with no recent changes.

The past few weeks have been a PR nightmare for Trivy, and I’m sure it’s been entirely unpleasant for the maintainers. I don’t wish this on anyone, but if you find yourself in this kind of situation, take heart. There’s at least some indication that your project can survive this kind of catastrophic event.

It’s worth noting that it may just be too early to tell what the future holds for Trivy. While they’ve received a lot more stars and attention, there haven’t been any commits to main in three weeks. People are still opening issues and pull requests, so it may just be that the maintainers are still focused on cleanup. I hope that the project comes back stronger and more secure, but time will tell.

Disaster recovery

If your project has a PR nightmare, stay calm. If you have the resources to bring in a crisis PR expert, do that and ignore everything else I say after this. Most likely, though, you don’t have the resources to bring in a pro. So here’s my amateur advice:

  • Mitigate the damage. Take whatever technical steps are necessary to keep the problem from getting worse. This may include rotating credentials, disabling CI pipelines, locking issues, or turning off services. This step is the most important because you don’t want the problem getting worse while you’re handling communication.
  • Create an advisory if appropriate. Do this if there’s a vulnerability in your software that you’re addressing. You can skip this for other flavors of disaster. If you’re on GitHub, you can follow the GHSA creation process. Other hosting platforms may have their own system. As a last resort, you can request a CVE at https://cveform.mitre.org/.
  • Communicate honestly and factually. Tell people what happened and, if applicable, how to know if they’re affected and how to address it. Don’t try to hide uncomfortable facts. Be honest, but don’t speculate or guess. You can always update as new facts come to light, but you can’t un-say things.
  • Don’t feel obligated to correct everyone. As the story spreads, people will get things wrong. Perhaps aggressively so. You don’t need to put your energy into correcting every misstatement posted by some rando online. If news outlets or influential people misstate facts, you can give them the correct version. If they draw inferences that you disagree with, it’s probably best to let it go.

This post’s featured photo by Ante Hamersmit on Unsplash.

The post Handling a PR disaster for your project appeared first on Duck Alignment Academy.

🎲 PHP version 8.4.20RC1 and 8.5.5RC1

Posted by Remi Collet on 2026-03-27 06:38:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections for parallel installation (the perfect solution for such tests) and as base packages.

RPMs of PHP version 8.5.5RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.4.20RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.5.4RC1 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.1 and EPEL-10.1
  • EL-9 packages are built using RHEL-9.7 and EPEL-9
  • EL-8 packages are built using RHEL-8.10 and EPEL-8
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).
  • versions 8.4.19 and 8.5.4 are planned for March 12th, in 2 weeks.

Software Collections (php84, php85)

Base packages (php)

Howto: a very nice way of organizing your bash env variables and settings

Posted by Rénich Bon Ćirić on 2026-04-08 01:30:00 UTC

So, I know you have either ~/.bashrc and ~/.bash_profile or ~/.profile in your installation. We all do. And many apps we use on a daily basis use those. Plus, you like your aliases, your own env variables and maybe even one or two bash functions you like to use.

That creates a problem. You have everything in a single file (or two) and you have a mess. It's hard to read, hard to organize, and a single mistake renders the whole file useless. Well, maybe not useless, but you get the idea. It's a bad idea to have a 500-line config file, don't you think?

So, which solutions are there?

Easy, just npm install .... Yeah, right. Who wants more terrible TypeScript/EcmaScript code in their environment? I mean, really. And it comes in droves! Huge amounts of it everywhere! Come on, we can do better with plain old Bash.

The trick is actually much simpler. Here's how I do it:

Filename: ~/.bashrc

## Load any supplementary scripts from ~/.bashrc.d/
if [[ -d $HOME/.bashrc.d ]]; then
   for f in "$HOME"/.bashrc.d/*.bash; do
      [[ -f "$f" ]] && source "$f"
   done

   unset -v f
fi

Note

This snippet goes into your ~/.bashrc. It checks if the directory ~/.bashrc.d exists and then loops through every file ending in .bash to "source" it. This effectively evaluates those files into your current session.

Now, you can do the same for your bash profile, which is the preferred place to put things like environment variables and such.

Filename: ~/.bash_profile

## Load any supplementary scripts from ~/.bash_profile.d/
if [[ -d ~/.bash_profile.d ]]; then
   for f in ~/.bash_profile.d/*.bash; do
      [[ -f "$f" ]] && source "$f"
   done

   unset -v f
fi

This is a neat trick, if I may say so. It enables me to create independent files for different things. For example, I like my $GOPATH env variable to point to ~/src/go. Also, I like to add Go's bin directory to my $PATH. Easy enough, right?

But, where do I put it?

Main Differences:
~/.bashrc:
Read every time you open a new interactive terminal. Perfect for aliases and prompts.
~/.bash_profile:
Read only once upon login. Best for environment variables that should be inherited by all child processes.

Tip

If you want to be able to overwrite your $PATH entries and have the changes persist between terminals without re-logging, putting the loading logic in ~/.bashrc is the way to go.
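One wrinkle with keeping the loading logic in ~/.bashrc: every new interactive shell re-appends to $PATH. A tiny guard function (pathmunge here is a hypothetical helper name, not something the setup above defines) keeps entries unique no matter how often the files are sourced:

```shell
# pathmunge: append a directory to PATH only if it is not already there,
# so repeatedly sourcing ~/.bashrc.d/*.bash does not grow PATH forever.
pathmunge() {
   case ":$PATH:" in
      *":$1:"*) ;;              # already present: do nothing
      *) PATH="$PATH:$1" ;;     # otherwise append
   esac
}

pathmunge "$HOME/bin"
pathmunge "$HOME/bin"   # second call is a no-op
export PATH
```

Drop it in something like ~/.bashrc.d/00-path.bash so it is defined before the files that use it.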

That said, I am putting my go.bash file in ~/.bashrc.d/go.bash:

# go settings
export GOPATH=$HOME/src/go
export PATH=$PATH:$GOPATH/bin

Now, it's as easy as opening a new terminal (I set up my terminal to use a login shell) or I can just source ~/.bash_profile. In Fedora, sourcing ~/.bash_profile will source ~/.bashrc if it exists anyway. ;D

One more customization I really like:

# ~/.bash_profile.d/ls.bash
alias ls='ls --color=auto --group-directories-first'

That one makes my directories appear before the files when using ls. The --color=auto flag just keeps the default colors.

Conclusion

Keep your environment clean, dude. Organizing your configs in .d directories makes it much easier to manage and debug. No more messy files!


Fedora Code of Conduct Report 2025

Posted by Fedora Community Blog on 2026-04-07 12:00:00 UTC

The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project’s social fabric.

This post covers reports received during the 2025 calendar year. The purpose of publishing the annual Code of Conduct Report is to provide transparency, insight, and awareness into the health of the community.

How’d it go in 2025

In 2025, we had a slight uptick in engagement from 2024. 14 reports were opened in 2025, compared to 11 reports in 2024. While we saw some members step down this year, the Fedora Code of Conduct Committee (CoCC) also refreshed its membership with new voices. Jef Spaleta, Chris Idoko, and Ankur Sinha were nominated this year to maintain responsiveness and steer our community standards forward.

The majority of issues reported in 2025 were handled through “shoulder taps” and formal reach-outs, rather than disciplinary or emergency actions requiring bans or long-term suspensions. While reports did increase from 2024 to 2025, the difference is negligible. The Committee expects this number to fluctuate annually, as world events and international conflicts often impact the social dynamics of communities like ours.

You can see the full data from 2025 in the table below.

Community Health Assessment

After six years of reporting, looking back at our journey from the modernization of the Code of Conduct to where we stand today, it is encouraging to see how much we have grown together. Yearly reports indicate that while our community continues to have conflicts (as any healthy community ought to), incident severity has continued to decrease across the reports spanning 2020 through 2025. We attribute this consistent reduction in opened reports and CoC interventions to the maturity of our self-moderation culture.

A significant part of this positive atmosphere is thanks to the refreshed CoC guidelines established by Marie Nordin in 2021, which successfully addressed the peak in incidents that occurred during the COVID-19 pandemic. These guidelines were roadmaps for how we want to treat each other, and seeing them in action in our reports shows that they are working as hoped. We feel the community is in a healthy place at this time, but a healthy committee is one that never stops listening. We would love to hear your thoughts, feedback, and suggestions on how we can continue to help our shared spaces feel safe, inclusive, and welcoming.

Year | Reports Opened | Reports Closed | Warnings Issued | Moderations Issued | Suspensions Issued | Bans Issued
2025 |             14 |             14 |               1 |                  2 |                  0 |           0
2024 |             11 |             11 |               1 |                  0 |                  1 |           0
2023 |             17 |             17 |               5 |                  3 |                  1 |           1
2022 |             21 |             24 |               6 |                  3 |                  0 |           0
2021 |             23 |             24 |               2 |                  1 |                  0 |           1
2020 |             20 |             16 |               8 |                  4 |                  2 |           0

Looking forward to 2026

If you witness or are part of a situation that violates Fedora’s Code of Conduct, please open a private report on the [Code of Conduct repo] or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.

Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or maybe it was something pretty small, but it definitely wasn't okay, and you don't want to make a big deal of it... open that report anyway, because it could reveal a pattern of behavior that is negatively impacting more people than just you.

Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day to day in our community. Keep it up, and keep being awesome Fedora, we <3 you!

About the Committee

The Fedora Project’s Code of Conduct and reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Jef Spaleta; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community-nominated members. Jef Spaleta, Chris Onoja Idoko, and Ankur Sinha were nominated this year.

We’re incredibly grateful to Josh Berkus and Laura Santamaria for stepping up as term-limited members of the Fedora Code of Conduct Committee (CoCC). Their commitment ensured we had consistent coverage through September 30th, 2025, providing vital support until our newest nominees were fully onboarded and trained.

The post Fedora Code of Conduct Report 2025 appeared first on Fedora Community Blog.