/rss20.xml">

Fedora People

Updates and Reboots

Posted by Fedora Infrastructure Status on 2025-11-20 22:00:00 UTC

We will be updating and rebooting various servers. Services may be up or down during the outage window.

infra weekly recap: Early November 2025

Posted by Kevin Fenzi on 2025-11-15 17:08:50 UTC
Scrye into the crystal ball

Well, it's been a few weeks since I made one of these recap blog posts. Last weekend I was recovering from some oral surgery, and the weekend before that I had been on PTO on Friday and was trying to be 'away'.

Lots of things happened in the last few weeks though!

tcp timeout issue finally solved!

As contributors no doubt know, we have been fighting a super annoying TCP timeout issue. Basically, sometimes requests from our proxies to backend services just time out. I don't know how many hours I spent on this issue, trying everything I could think of, coming up with theories and then disproving them. Debugging was difficult because _most_ of the time everything worked as expected. Finally, after a good deal of pain, I was able to get a tcpdump showing that when it happens, the sending side sends a SYN and the receiving side sees nothing at all.
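
For anyone chasing something similar, a capture along these lines is enough to make the mismatch visible (the interface and host names here are illustrative, not our real ones); run it on both ends and compare what each side sees:

tcpdump -ni eth0 'tcp[tcpflags] & tcp-syn != 0 and host backend01'

A SYN that shows up in the sender's capture but never in the receiver's is exactly the symptom described above.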

This all pointed to the firewall cluster in our datacenter. We don't manage that, our networking folks do. It took some prep work, but last week they were finally able to update the firmware/os in the cluster to the latest recommended version.

After that: The problem was gone!

I don't suppose we will ever know the exact bug that was happening here, but one final thing to note: when they did the upgrade, the cluster had over 1 million active connections. After the upgrade, it had about 150k. So it seems likely that it was somehow not freeing resources correctly and dropping packets, or something along those lines.

I know this problem has been annoying to contributors. It's personally been very annoying to me; my mind kept focusing on it and not anything else. It kept me awake at night. ;(

In any case, it's finally solved!

There is one new outstanding issue from the upgrade (https://pagure.io/fedora-infrastructure/issue/12913): basically, long-running koji CLI watch tasks (watch-task / watch-logs) get a 502 error after a while. This does not affect the task itself in any way, just the watching of it. Hopefully we can get to the bottom of this and fix it soon.

outages outages outages

We have had a number of outages of late. They have happened for different reasons, but it does make trying to contribute frustrating.

A recap of a few of them:

  • AI scrapers continue to mess with us. Even though most of our services are behind anubis now, they find ways around that, like fetching CSS or JS files in loops, hitting things that are not behind anubis, and generally making life sad. We continue to block things as we can. The impact here is mostly that src.fedoraproject.org is sensitive to high load, so we need to make sure we block things before they impact commits.

  • We had two outages (Friday 2025-11-07 and later Monday 2025-11-10) that were caused by a switch loop when I brought up a power10 lpar. This was due to the somewhat weird setup on the power10 lpars, where they shouldn't be using the untagged/native VLAN at all, but a build VLAN. With the Friday outage, it took us a while to figure out what was causing it; the Monday outage was very short. All those lpars are correctly configured now and up and operating OK.

  • We had an outage on Monday (2025-11-10) where a set of crashlooping pods filled up our log server with tracebacks and generally messed with everything. The pod was fixed and the storage was cleaned up.

  • We had some kojipkgs outages on Thursday (2025-11-13) and Friday (2025-11-14). These were caused by many requests for directory listings of some ostree objects directories. Those directories each have ~65k files in them, so apache has to stat ~65k files every time it gets one of those requests. On top of that, CloudFront (which is making the requests) times out after 30s and resends. The result is a load average of 1000 and very slow processing. For now we have put that behind varnish, so apache only has to generate the listing the first time for a directory and the cached result can be served to everyone else. If that doesn't fix it, we can look at just disabling indexes there, but I am not sure of the implications.

We had a nice discussion in the last Fedora infrastructure meeting about tracking outages better and doing an RCA on them after the fact, to make sure we solved the problem or at least made it less likely to happen again.

I am really hoping for some non-outage days and smooth sailing for a bit.

power10s

I think we are finally done with the power10 setup. Many thanks again to Fabian for figuring out all the bizarre and odd things we needed to do to configure the servers as close to the way we want them as possible.

The Fedora builder lpars have all been up and operating since last week. The buildvm-ppc64les on them should have more memory and CPUs than before, and hopefully are faster for everyone. We also have staging lpars now.

The only thing left to do is to get the coreos builders installed. The lpars themselves are all set up and ready to go.

rdu2-cc to rdu3 datacenter move

I haven't really been able to think about this due to the outages and the timeout issue, but things will start heating up again next week.

It seems unlikely that we will get our new machine in time to matter now, so I am moving to a new plan: repurposing another server there to migrate things to. I plan to try and get it set up next week and sync pagure.io data to a new pagure instance there. Depending on how that looks, we might move to it the first week of December.

There's so much more going on, but those are some highlights I recall...

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115554940851319300

Infra and RelEng Update – Week 46 2025

Posted by Fedora Community Blog on 2025-11-14 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 10th – 14th November 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

List of new releases of apps maintained by I&R Team

If you have any questions or feedback, please respond to this report or contact us on the #admin:fedoraproject.org channel on Matrix.

The post Infra and RelEng Update – Week 46 2025 appeared first on Fedora Community Blog.

Fedora at Kirinyaga University – Docs workshop

Posted by Fedora Magazine on 2025-11-14 08:00:00 UTC
Kirinyaga University students group photo

We did it again: Fedora at Kirinyaga University in Kenya. This time, we didn’t just introduce what open source is – we showed students how to participate and actually contribute in real time.

Many students had heard of open source before, but were not sure how to get started or where they could fit. We took a hands-on approach and began with a simple explanation of what open source is: people around the world working together to create tools, share knowledge, and support each other. Fedora is one of these communities. It is open, friendly, and built by different people with different skills.

We talked about the many ways someone can contribute, even without deep technical experience. Documentation, writing guides, design work, translation, testing software, and helping new contributors are all important roles in Fedora. Students learned that open source is not only for “experts.” It is also for learners. It is a place to grow.

Hands-on Documentation Workshop

A room full of Kirinyaga students at a workshop

After the introduction, we moved into a hands-on workshop. We opened Fedora Docs and explored how documentation is structured. Students learned how to find issues, read contribution instructions, and make changes step-by-step. We walked together through:

  • Opening or choosing an issue to work on
  • Editing documentation files
  • Making a pull request (PR)
  • Writing a clear contribution message

By the end of the workshop, students had created actual contributions that went to the Fedora project. This moment was important. It showed them that contributing is not something you wait to do “someday.” You can do it today.

“This weekend’s Open Source Event with Fedora, hosted by the Computer Society Of Kirinyaga, was truly inspiring! 💻

Through the guidance of Cornelius Emase, I was able to make my first pull request to the Fedora Project Docs – my first ever contribution to the open-source world. 🌍
– Student at Kirinyaga University

Thank you note

Huge appreciation to:

  • Jona Azizaj — for steady guidance and mentorship.
  • Mat H. — for backing the vision of regional community building.
  • Fedora Mindshare Team — for supporting community growth here in Kenya.
  • Computer Society of Kirinyaga — for hosting and bringing real energy into the room.

And to everyone who played a part – even if your name isn’t listed here, I see you. You made this possible.

Growing the next generation

The students showed interest, curiosity, and energy. Many asked how they can continue contributing and how to connect with the wider Fedora community. I guided them to Fedora Docs, Matrix community chat rooms, and how they can be part of the Fedora local meetups here in Kenya.

We are introducing open source step-by-step in Kenya. There is a new generation of students who want to be part of global technology work. They want to learn, collaborate, and build. Our role is to open the door and walk together (I have a Discourse post on this; you’re welcome to add your views).

A group photo of students after the workshop

What Comes Next

This event is part of a growing movement to strengthen Fedora’s presence in Kenya. More events will follow so that learning and contributing can continue.

We believe that open source becomes strong when more people are included. Fedora is a place where students in Kenya can learn, grow, share, and contribute to something global.

We already had a Discourse thread running for this event – from the first announcement, planning, and budget proposal, all the way to the final workshop. Everything happened in the open. Students who attended have already shared reflections there, and anyone who wants to keep contributing or stay connected can join the conversation.

You can check the event photos submitted here on Google Photos (sorry, that's not FOSS :)).

Cornelius Emase,
Your Friend in Open Source(Open Source Freedom Fighter)

How We Streamed OpenAlt on Vhsky.cz

Posted by Jiri Eischmann on 2025-11-13 11:37:21 UTC

The blog post was originally published on my Czech blog.

When we launched Vhsky.cz a year ago, we did it to provide an alternative to the near-monopoly of YouTube. I believe video distribution is so important today that it’s a skill we should maintain ourselves.

To be honest, it’s bothered me for the past few years that even open-source conferences simply rely on YouTube for streaming talks, without attempting to secure a more open path. We are a community of tech enthusiasts who tinker with everything and take pride in managing things ourselves, yet we just dump our videos onto YouTube, even when we have the tools to handle it internally. Meanwhile, it’s common for conferences abroad to manage this themselves. Just look at FOSDEM or Chaos Communication Congress.

This is why, from the moment Vhsky.cz launched, my ambition was to broadcast talks from OpenAlt, a conference I care about and help organize. The first small step was uploading videos from previous years. Throughout the year, we experimented with streaming from OpenAlt meetups. We found that it worked, but a single stream isn't quite the stress test needed to prove we could handle broadcasting an entire conference.

For several years, Michal Vašíček has been in charge of recording at OpenAlt, and he has managed to create a system where he handles recording from all rooms almost single-handedly (with assistance from session chairs in each room). All credit to him, because other conferences with a similar scope of recordings have entire teams for this. However, I don’t have insight into this part of the process, so I won’t focus on it. Michal’s job was to get the streams to our server; our job was to get them to the viewers.

OpenAlt’s AV background with running streams. Author: Michal Stanke.

Stress Test

We only got to a real stress test the weekend before the conference, when Bashy prepared a setup with seven streams at 1440p resolution. This was exactly what awaited us at OpenAlt. Vhsky.cz runs on a fairly powerful server with a 32-core i9-13900 processor and 96 GB of RAM. However, it's not entirely dedicated to PeerTube. It has to share the server with other OSCloud services (OSCloud is a community hosting platform for open source web services).

We hadn’t been limited by performance until then, but seven 1440p streams were truly at the edge of the server’s capabilities, and streams occasionally dropped. In reality, this meant 14 continuous transcoding processes, as we were streaming in both 1440p and 480p. Even if you don’t change the resolution, you still need to transcode the video to leverage useful distribution features, which I’ll cover later. The 480p resolution was intended for mobile devices and slow connections.

Remote Runner

We knew the Vhsky.cz server alone couldn’t handle it. Fortunately, PeerTube allows for the use of “remote runners”. The PeerTube instance sends video to these runners for transcoding, while the main instance focuses only on distributing tasks, storage, and video distribution to users. However, it’s not possible to do some tasks locally and offload others. If you switch transcoding to remote runners, they must handle all the transcoding. Therefore, we had to find enough performance somewhere to cover everything.
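
As an aside, hooking up a runner is pleasantly simple. A minimal sketch, assuming the official @peertube/peertube-runner npm package and a registration token generated in the instance's admin UI (the URL, token, and runner name here are placeholders):

npm install -g @peertube/peertube-runner
peertube-runner server &
peertube-runner register --url https://vhsky.cz \
  --registration-token ptrrt-xxxx --runner-name runner-1

Once registered, the instance hands transcoding jobs to the runner and only the finished files come back.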

I reached out to several hosting providers known to be friendly to open-source activities. Adam Štrauch from Roští.cz replied almost immediately, saying they had a backup machine that they had filed a warranty claim for over the summer and hadn’t tested under load yet. I wrote back that if they wanted to see how it behaved under load, now was a great opportunity. And so we made a deal.

It was a truly powerful machine: a 48-core Ryzen with 1 TB of RAM. Nothing else was running on it, so we could use all its performance for video transcoding. After installing the runner on it, we passed the stress test. As it turned out, the server with the runner still had a large reserve. For a moment, I toyed with the idea of adding another resolution to transcode the videos into, but then I decided we’d better not tempt fate. The stress test showed us we could keep up with transcoding, but not how it would behave with all the viewers. The performance reserve could come in handy.

Load on the runner server during the stress test. Author: Adam Štrauch.

Smart Video Distribution

Once we solved the transcoding performance, it was time to look at how PeerTube would handle video distribution. Vhsky.cz has a bandwidth of 1 Gbps, which isn’t much for such a service. If we served everyone the 1440p stream, we could serve a maximum of 100 viewers. Fortunately, another excellent PeerTube feature helps with this: support for P2P sharing using HLS and WebRTC.

Thanks to this, every viewer (unless they are on a mobile device using mobile data) also becomes a peer and shares the stream with others. The more viewers watch the stream, the more they share the video among themselves, and the server load doesn’t grow at the same rate.

A two-year-old stress test conducted by the PeerTube developers themselves gave us some idea of what Vhsky could handle. They created a farm of 1,000 browsers, simulating 1,000 viewers watching the same stream or VOD. Even though they used a relatively low-performance server (quad-core i7-8700 CPU @ 3.20GHz, slow hard drive, 4 GB RAM, 1 Gbps connection), they managed to serve 1,000 viewers, primarily thanks to data sharing between them. For VOD, this saved up to 98% of the server’s bandwidth; for a live stream, it was 75%:

If we achieved a similar ratio, then even after subtracting 200 Mbps for overhead (running other services, receiving streams, data exchange with the runner), we could serve over 300 viewers at 1440p and multiples of that at 480p. Considering that OpenAlt had about 160 online viewers in total last year, this was a more than sufficient reserve.
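
The back-of-the-envelope arithmetic, as a sketch (the ~10 Mbps per 1440p viewer is implied by the 100-viewer ceiling on a 1 Gbps line; 75% is the live-stream saving from the developers' test):

total_mbps=1000     # server uplink
overhead_mbps=200   # other services, incoming streams, runner traffic
stream_mbps=10      # assumed bandwidth per 1440p viewer
p2p_saving=75       # percent served by peers instead of the server
echo $(( (total_mbps - overhead_mbps) * 100 / (stream_mbps * (100 - p2p_saving)) ))
# -> 320 concurrent 1440p viewers, i.e. "over 300"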

Live Operation

On Saturday, Michal fired up the streams and started sending video to Vhsky.cz via RTMP. And it worked. The streams ran smoothly and without stuttering. In the end, we had a maximum of tens of online viewers at any one time this year, which posed no problem from a distribution perspective.

In practice, the bandwidth savings on the server were large even with just 5 peers on a single stream and resolution.

Our solution, which PeerTube allowed us to flexibly assemble from servers in different data centers, has one disadvantage: it creates some latency. In our case, however, this meant the stream on Vhsky.cz was about 5-10 seconds behind the stream on YouTube, which I don’t think is a problem. After all, we’re not broadcasting a sports event.

Diagram of the streaming solution for OpenAlt. Labels in Czech, but quite self-explanatory.

Minor Problems

We did, however, run into minor problems and gained experience that one can only get through practice. During Saturday, for example, we found that the stream would occasionally drop from 1440p to 480p, even though the throughput should have been sufficient. This was because the player felt that the delivery of stream chunks was delayed and preemptively switched to a lower resolution. Setting a higher cache increased the stream delay slightly, but it significantly reduced the switching to the lower resolution.

Subjectively, even 480p wasn’t a problem. Most of the screen was taken up by the red frame with the OpenAlt logo and the slides. The speaker was only in a small window. The reduced resolution only caused slight blurring of the text on the slides, which I wouldn’t even have noticed as a problem if I wasn’t focusing on it. I could imagine streaming only in 480p if necessary. But it’s clear that expectations regarding resolution are different today, so we stream in 1440p when we can.

Over the whole weekend, the stream from one room dropped for about two talks. For some rooms, viewers complained that the stream was too quiet, but that was an input problem. This issue was later fixed in the recordings.

When uploading the talks as VOD (Video on Demand), we ran into the fact that PeerTube itself doesn’t support bulk uploads. However, tools exist for this, and we’d like to use them next time to make uploading faster and more convenient. Some videos also uploaded with the wrong orientation, which was likely a problem in their metadata, as PeerTube wasn’t the only player that displayed them that way. YouTube, however, managed to handle it. Re-encoding them solved the problem.

On Saturday, to save performance, we also tried transcoding the first finished talk videos on the external runner. For these, a bar was displayed with a message that the video had failed to save to external storage, even though it was clearly stored in object storage. In the end we had to re-upload them, because they were available to watch but not indexed.

A small interlude – my talk about PeerTube at this year’s OpenAlt. Streamed, of course, via PeerTube:

Thanks and Support

I think that for our very first time doing this, it turned out very well, and I’m glad we showed that the community can stream such a conference using its own resources. I would like to thank everyone who participated. From Michal, who managed to capture footage in seven lecture rooms at once, to Bashy, who helped us with the stress test, to Archos and Schmaker, who did the work on the Vhsky side, and Adam Štrauch, who lent us the machine for the external runner.

If you like what we do and appreciate that someone is making OpenAlt streams and talks available on an open platform without ads and tracking, we would be grateful if you supported us with a contribution to one of OSCloud’s accounts, under which Vhsky.cz runs. PeerTube is a great tool that allows us to operate such a service without having Google’s infrastructure, but it doesn’t run for free either.

F43 election nominations now open

Posted by Fedora Community Blog on 2025-11-12 13:18:27 UTC

Today, the Fedora Project begins the nomination period during which we accept nominations to the “steering bodies” of the following teams:

This period is open until Wednesday, 2025-11-26 at 23:59:59 UTC.

Candidates may self-nominate. If you nominate someone else, check with them first to ensure that they are willing to be nominated before submitting their name.

Nominees do not need to have an interview ready at nomination time. However, interviews are mandatory for all nominees: nominees who do not have their interview ready by the end of the interview period (2025-12-03) will be disqualified and removed from the election. Nominees will submit questionnaire answers via a private Pagure issue after the nomination period closes on Wednesday, 2025-11-26. The F43 Election Wrangler (Justin Wheeler) will publish the interviews to the Community Blog before the start of the voting period on Friday, 2025-12-05.

The elected seats on FESCo are for a two-release term (approximately twelve months). For more information about FESCo, please visit the FESCo docs.

The full schedule of the elections is available on the Elections schedule. For more information about the elections process, see the Elections docs.

The post F43 election nominations now open appeared first on Fedora Community Blog.

Managing a manual Alexa Home Assistant Skill via the Web UI

Posted by Brian (bex) Exelbierd on 2025-11-12 12:40:00 UTC

My house has a handful of Amazon Echo Dot devices that we mostly use for timers, turning lights on and off, and playing music. They work well and have been an easy solution. I also use Home Assistant for some basic home automation and serve most everything I want to verbally control to the Echo Dots from Home Assistant.

I don’t use the Nabu Casa Home Assistant Cloud Service. If you’re reading this and you want the easy route, consider it — the cloud service is convenient. One benefit of the service is that there is a UI toggle to mark which entities/devices to expose to voice assistants.

If you take the manual route, like I do, you must set up a developer account and an AWS Lambda, and maintain a hand-coded list of entity IDs in a YAML file.

- switch.living_room
- switch.table
- light.kitchen
- sensor.temp_humid_reindeer_marshall_temperature
- sensor.living_room_temperature
- sensor.temp_humid_rubble_chase_temperature
- sensor.temp_humid_olaf_temperature
- sensor.ikea_of_sweden_vindstyrka_temperature
- light.white_lamp_bulb_1_light
- light.white_lamp_bulb_2_light
- light.white_lamp_bulb_3_light
- switch.ikea_smart_plug_2_switch
- switch.ikea_smart_plug_1_switch
- sensor.temp_humid_chase_c_temperature
- light.side_light
- switch.h619a_64c3_power_switch

A list of entity IDs to expose to Alexa.

Fun, right? Maintaining that list is tedious. I generally don’t mess with my Home Assistant installation very often, so when I need to change what is exposed to Alexa or add a new device, finding the actual entity_id is annoying. This is not helped by how good Home Assistant has gotten at showing only friendly names in most places. I decided there had to be a better way to do this than manually maintaining YAML.

After some digging through docs and the source, I found there isn’t a built-in way to build this list by labels, categories, or friendly names. The Alexa integration supports only explicit entity IDs or glob includes/excludes.

So I worked out a way to build the list with a Home Assistant automation. It isn’t fully automatic - there’s no trigger that runs right before Home Assistant reboots - and you still need to restart Home Assistant when the list changes. But it lets me maintain the list by labeling entities rather than hand-editing YAML.

After a few experiments and some (occasionally overly imaginative) AI help, I arrived at this process. There are two parts.

Prep and staging

In your configuration.yaml enable the Alexa Smart Home Skill to use an external list of entity IDs. I store mine in /config/alexa_entities.yaml.

alexa:
  smart_home:
    locale: en-US
    endpoint: https://api.amazonalexa.com/v3/events
    client_id: !secret alexa_client_id
    client_secret: !secret alexa_client_secret
    filter:
      include_entities:
         !include alexa_entities.yaml

Add two helper shell commands:

shell_command:
  clear_alexa_entities_file: "truncate -s 0 /config/alexa_entities.yaml"
  append_alexa_entity: '/bin/sh -c "echo \"- {{ entity }}\" >> /config/alexa_entities.yaml"'

A script to find the entities

Place this script in scripts.yaml. It does three things:

  1. Clears the existing file.
  2. Finds all entities labeled with the tag you choose (I use “Alexa”).
  3. Appends each entity ID to the file.

export_alexa_entities:
  alias: Export Entities with Alexa Label
  sequence:
    # 1. Clear the file
    - service: shell_command.clear_alexa_entities_file

    # 2. Loop through each entity and append
    - repeat:
        for_each: "{{ label_entities('Alexa') }}"
        sequence:
          - service: shell_command.append_alexa_entity
            data:
              entity: "{{ repeat.item }}"
  mode: single

Why clear the file and write it line by line? I couldn’t get any file or notify integration to write to /config, and passing a YAML list to a shell command collapses whitespace into a single line. Reformatting that back into proper YAML without invoking Python was painful, so I chose to truncate and append line-by-line. It’s ugly, but it’s simple and it works.
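
Running the export is then just a normal script call: from the UI, from an automation, or, as a sketch, over the REST API (the hostname and the long-lived access token are placeholders):

curl -X POST -H "Authorization: Bearer $HA_TOKEN" \
  http://homeassistant.local:8123/api/services/script/export_alexa_entities

After that, restart Home Assistant so the !include picks up the regenerated file.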

The result is that I can label entities in the UI and avoid tedious bookkeeping.

Home Assistant entity details screen showing an IKEA smart plug named 'tree' with the Alexa label applied in the Labels section

Browser wars

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

brown fox on snow field

Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.

I was invited to give a popular lecture at the University departments open day, which is a part of the festival. This is the second time in a row that I have been invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology, caused by the fall of the economy during the 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.

The follow-up

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

people watching concert

Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.

Open-source magic all around the world

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

woman blowing sprinkle in her hand

Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).

Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

white and black coffee maker

Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012 University of Rijeka became NVIDIA GPU Education Center (back then it was called CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphical processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, finding paths in graphs, and other mathematical operations.

What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

turned on red open LED signage

Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of the topic; instead, they present the authors' view on the state of the art of a particular field.

The first of the two articles advocates for open source and open data. The article describes the Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second of the two articles questions the open-source software development practice, advocating the usage and development of proprietary software. I will dissect and counter some of the key points from the second article below.

On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

Open 24 Hours neon signage

Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended the 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced an acronyms page, among other things. During a coffee break, there was a discussion on the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop which somehow remained untold, and I wanted to tell it at some point. One of the attendees, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, reported bugs get fixed more quickly, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.

AMD and the open-source community are writing history

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

a close up of a cpu chip on top of a motherboard

Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards having fully open source drivers on Linux. AMD did not walk alone; they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told on the Freenode channel #radeon that this is not the case, and I found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement is hardly surprising: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, the public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.

I am still not buying the new-open-source-friendly-Microsoft narrative

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

black framed window

Photo source: Patrick Bellot (@pbellot) | Unsplash


This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.

Even though the open sourcing of a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company that believe free and open source is the way to go, but it still looks like a change just on the periphery.

All the projects they have open-sourced so far are not the core of their business. Their latest version of Windows is no more friendly to alternative operating systems than any version of Windows before it, and one could argue it is even less friendly due to more Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.

Free to know: Open access and open source

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

yellow and black come in we're open sign

Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash


Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.

Q&A with Vedran Miletić

In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.

The movements that promote and cultivate this vibrancy are probably most frequently associated with terms "Open access" and "open source". In order to learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science -- researcher, developer and teacher, currently working in computational chemistry, and a free and open source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation and education.

The academic and the free software community ideals

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

book lot on black wooden shelf

Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash


Today I vaguely remembered there was one occasion in 2006 or 2007 when some guy from academia doing something with Java and Unicode posted on some mailing list related to free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, and he had filed a patent on the algorithm.

Celebrating Graphics and Compute Freedom Day

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

stack of white and brown ceramic plates

Photo source: Elena Mozhvilo (@miracleday) | Unsplash


Hobbyists, activists, geeks, designers, engineers, etc. have always tinkered with technologies for their own purposes (in early personal computing, for example). And social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies, and freely sharing the know-how through the Internet and more recently through social media. Open-source software, and more recently hardware, is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

an old padlock on a wooden door

Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash


Inf2 is a web server at University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.

HTTPS had been enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt coming into public beta, we decided to join the movement to HTTPS.

Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

open book lot

Photo source: Patrick Tomasso (@impatrickt) | Unsplash


Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.

While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.

Fly away, little bird

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

macro-photography blue, brown, and white sparrow on branch

Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash


The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.

Mirroring free and open-source software matters

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

gold and silver steel wall decor

Photo source: Tuva Mathilde Løland (@tuvaloland) | Unsplash


Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website that's used to speed up access for the users residing in the area geographically close to it and reduce the load on the original website. Content distribution networks (CDNs), which are a newer concept and perhaps more familiar to younger readers, serve the same purpose, but do it in a way that's transparent to the user; when using a mirror, the user will see explicitly which mirror is being used because the domain will be different from the original website, while, in case of CDNs, the domain will remain the same, and the DNS resolution (which is invisible to the user) will select a different server.

Free and open-source software was distributed via (FTP) mirrors, usually residing in the universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon after mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of the software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today.

Markdown vs reStructuredText for teaching materials

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

blue wooden door surrounded by book covered wall

Photo source: Eugenio Mazzone (@eugi1492) | Unsplash


Back in summer 2017 I wrote an article explaining why we used Sphinx and reStructuredText to produce teaching materials and not a wiki. In addition to recommending Sphinx as the solution to use, it was general praise for generating static HTML files from Markdown or reStructuredText.

This summer I converted the teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted, so I ended up cooking up my own Python script that converted the specific dialect of reStructuredText used for writing the contents of the group website, fixing a myriad of inconsistencies in the writing style that had accumulated over the years.

Don't use RAR

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

a large white tank

Photo source: Tim Mossholder (@ctimmossholder) | Unsplash


I sometimes joke with my TA Milan Petrović that his usage of RAR does not imply that he will be driving a rari. After all, he is not Devito rapping^Wsinging Uh 😤. Jokes aside, if you search for "should I use RAR" or a similar phrase on your favorite search engine, you'll see articles like 2007's Don't Use ZIP, Use RAR and 2011's Why RAR Is Better Than ZIP & The Best RAR Software Available.

Should I do a Ph.D.?

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

a bike is parked in front of a building

Photo source: Santeri Liukkonen (@iamsanteri) | Unsplash


Tough question, and the one that has been asked and answered over and over. The simplest answer is, of course, it depends on many factors.

As I started blogging at the end of my journey as a doctoral student, the topic of how I selected the field and ultimately decided to enroll in the postgraduate studies never really came up. In the following paragraphs, I will give a personal perspective on my Ph.D. endeavor. Just like other perspectives from doctors of not that kind, it is specific to the person in the situation, but parts of it might apply more broadly.

Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

a fountain in the middle of a town square

Photo source: Jahanzeb Ahsan (@jahan_photobox) | Unsplash


This month we had Alumni Meeting 2023 at the Heidelberg Institute for Theoretical Studies, or HITS for short. I was very glad to attend this whole-day event and reconnect with my former colleagues as well as researchers currently working in the area of computational biochemistry at HITS. After all, this is the place and the institution where I worked for more than half of my time as a postdoc, where I started regularly contributing code to GROMACS molecular dynamics simulator, and published some of my best papers.

My perspective after two years as a research and teaching assistant at FIDIT

Posted by Vedran Miletić on 2025-11-11 18:43:32 UTC

human statues near white building

Photo source: Darran Shen (@darranshen) | Unsplash


My employment as a research and teaching assistant at the Faculty of Informatics and Digital Technologies (FIDIT for short), University of Rijeka (UniRi), ended last month with the expiration of the time-limited contract I had. This moment marked almost two full years spent at this institution, and I think this is a good time to take a look back at everything that happened during that time. Inspired by the recent posts by the PI of my group, I decided to write my perspective on a time that I hope is just the beginning of my academic career.

Staying warm at the fire

Posted by Robert Wright on 2025-11-11 15:05:00 UTC

Recently, I listened to a talk by someone I highly respect in the FOSS world, Brian “bex” Exelbierd, at OpenAlt. In his talk “Bring Wood for the Fire”, bex describes how communities come together around the fire to discuss and collaborate about ideas and share different perspectives.

The campfire is used as a metaphor for a gathering or conference of an open source community, as a place where everyone can contribute to keeping the fire alive. Between speakers, organizers, and the attendees in the hallway tracks, those conversations give life to the event and bring everyone around the fire.

Reinitialize rpmautospec in your package

Posted by Pavel Raiskup on 2025-11-11 00:00:00 UTC

From time to time, you might want to re-initialize rpmautospec in your RPM package’s (Dist)Git repo (rpmautospec is the tool that expands %autorelease and %autochangelog templates in specfiles).

The motivations for this step might be simple:

  • the tool is analyzing an excessively long Git history
  • you are attempting to handle shallow Git clones
  • you want to migrate the package from one DistGit to another (e.g., from Fedora to CentOS Stream)
  • you want to avoid executing Turing-Complete macros in Version or Epoch fields.

The principle

TL;DR: we need to prevent rpmautospec logic from analyzing beyond the last two Git commits. The tool behaves this way only when the very last commit (a) modifies the Version: and (b) changes the changelog file (the file with expanded %changelog).

This implies we need to make two precisely formatted commits.

Step-by-step guide

  1. (Re)generate the changelog file (do not ‘git add’ the file for now; this change belongs in the second commit):

    rpmautospec generate-changelog > changelog
    

    TODO: if we move from rpm to norpm, we need to use the full RPM parser one last time here; document how.

  2. Set a placeholder version, e.g., Version: 0.

    Do not worry; no builds are performed for the first commit. You might want to apply other changes here as well, such as dropping %lua macros from the Epoch if present.

  3. Remove the %autochangelog and %autorelease templates from the spec file. To keep things simple, you can just set Release: 0.

  4. Make the first commit (spec file changes only, keep the changelog file changes uncommitted!).

  5. Set the Version: field appropriately (think of Turing-Complete macros).

  6. Reintroduce the %autochangelog and %autorelease templates.

  7. Make the second commit. Include the changelog change. The Version reset we did means the calculated %autorelease restarts from Release: 1. If you need a different value, say 5, add the string [bump release: 5] to the commit message.

  8. Optional step: You might want to add one more empty commit using git commit --allow-empty to bump the Release and generate %changelog entry. (This depends on your distribution’s policy.)

You are done! See also an example merge-request.
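
Condensed into shell, the two commits look roughly like this (pkg.spec, the version number, and the release bump are placeholders; the spec edits from steps 2-3 and 5-6 are done by hand in your editor):

rpmautospec generate-changelog > changelog     # step 1, leave it unstaged

# steps 2-4: placeholder Version, templates dropped, spec-only commit
$EDITOR pkg.spec   # Version: 0, Release: 0, no %autorelease/%autochangelog
git commit -m "Disable rpmautospec for one commit" pkg.spec

# steps 5-7: real Version and templates restored, commit spec + changelog
$EDITOR pkg.spec   # Version: 1.2.3, Release: %autorelease, %autochangelog back
git add pkg.spec changelog
git commit -m "Reenable rpmautospec

[bump release: 5]"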

Double-check

Check that rpmautospec works by:

$ rpmautospec --debug process-distgit *spec *spec
===========================================================
Extracting linear history snippets from branched history...
===========================================================
commit 124279c: reenable autospec with Lua-free Version
Keep processing: commit 124279cc62be84852bb702f49fca5aaae0d93f6c
  epoch_version: 25.04.0.0.84
  child must continue: False
  changelog changed: True
  child changelog removed: None
  our changelog removed: False
  merge unresolvable: False
  spec file present: True
  child must continue (incoming): True
  child must continue (outgoing): False
  parent to follow: da9b55c
commit da9b55c: Disable rpmautospec for one commit
children_visitors_info[]['child_must_continue']: [False, False]
Only traversing: commit da9b55c3a6175643788eed8cbcf5475a574cf333
...

The second commit, da9b55c, was analyzed, resulting in False, False. This indicates there’s no need to analyze any older variants of the spec file.

$ git diff
+## START: Set by rpmautospec
+## (rpmautospec version 0.8.1)
+## RPMAUTOSPEC: autorelease, autochangelog
+%define autorelease(e:s:pb:n) %{?-p:0.}%{lua:
+    release_number = 5;
+    base_release_number = tonumber(rpm.expand("%{?-b*}%{!?-b:1}"));
+    print(release_number + base_release_number - 1);
+}%{?-e:.%{-e*}}%{?-s:.%{-s*}}%{!?-n:%{?dist}}
+## END: Set by rpmautospec
...
 %changelog
-%autochangelog
+## START: Generated by rpmautospec
+* Thu Jul 03 2025 Kamal Heib <kheib@redhat.com> - 25.04.0.0.84-1
+- Update to upstream 25.04.0-0.84
...

Please note the release_number = 5 line and the version-release of the last %changelog entry; that's what we needed.

Use git checkout -p to reset the changes made to the spec file by the previous rpmautospec process-distgit call. Done.

Ongoing 502 & 503 errors with HTTP endpoints

Posted by Fedora Infrastructure Status on 2025-11-10 12:00:00 UTC

We have ongoing intermittent issues with the communication between the Fedora proxies and the backend services. This manifests as intermittent 502 / 503 errors when talking to services such as Koji, Src, and so on.

We are working with the networking team to track it down, see the Pagure ticket for …

Avería is easy on the eyes

Posted by Joe Brockmeier on 2025-11-10 00:00:00 UTC

Avería: the average font

I spend more time than I care to admit staring at computer screens. It’s an occupational hazard, literally, of writing about tech for a living.

Much of that time is spent staring into the abyss of a text editor, in this case Emacs. That being the case, I have test-driven quite a few different fonts to find one that is:

  • Pleasant to look at and isn’t boring.
  • Easy to read.
  • Displays code nicely.
  • Makes it easy to distinguish characters like l and 1.

I ran across Avería recently and it immediately caught my eye. It is, according to its web site, “the average of all fonts” on the creator’s computer. Somehow, at least to my aging eyes, it looks simultaneously classic and modern.

Loadouts For Genshin Impact v0.1.12 Released

Posted by Akashdeep Dhar on 2025-11-09 18:30:04 UTC

Hello travelers!

Loadouts for Genshin Impact v0.1.12 is OUT NOW with the addition of support for recently released characters like Nefer and for recently released weapons like Dawning Frost, Reliquary of Truth, and Sacrificer's Staff from Genshin Impact Luna II or v6.1 Phase 1. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.

Resources

Installation

Besides its availability as a repository package on PyPI and as an archived binary built with PyInstaller, Loadouts for Genshin Impact is now available as an installable package on Fedora Linux. Travelers using Fedora Linux 42 and above can install the package on their operating system by executing the following command.

$ sudo dnf install gi-loadouts --assumeyes --setopt=install_weak_deps=False

Changelog

  • Automated dependency updates for GI Loadouts by @renovate[bot] in #449
  • Fallback to default Tesseract path on Windows by @sdglitched in #450
  • Format codebase using ruff and add check in CI by @sdglitched in #451
  • Add support for character levels 95 and 100 by @sdglitched in #454
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #456
  • Change the Tool-Tip labels of Info and Help to match the content by @sdglitched in #457
  • Update dependency pillow to v12 by @renovate[bot] in #455
  • Introduce the recently added character Nefer to the roster by @gridhead in #459
  • Introduce the recently added weapon Dawning Frost by @gridhead in #458
  • Introduce the recently added weapon Sacrificer's Staff by @gridhead in #453
  • Introduce the recently added weapon Reliquary of Truth by @gridhead in #452
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #463
  • Update actions/upload-artifact action to v5 by @renovate[bot] in #464
  • Automated dependency updates for GI Loadouts by @renovate[bot] in #466
  • Stage the release v0.1.12 for Genshin Impact Luna I (v6.1 Phase 1) by @sdglitched in #467

Characters

One character has debuted in this version release.

Nefer

Nefer is a catalyst-wielding Dendro character of five-star quality.

Weapons

Three weapons have debuted in this version release.

Dawning Frost

Nocturnal Dreams - Scales on Crit DMG.

Dawning Frost - Workspace

Reliquary of Truth

Essence of Falsity - Scales on Crit DMG.

Reliquary of Truth - Workspace

Sacrificer's Staff

Untainted Desire - Scales on Crit Rate.

Sacrificer's Staff - Workspace

Appeal

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.

Disclaimer

With an extensive suite of over 1518 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.

The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.

All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.

Networking issues at main datacenter

Posted by Fedora Infrastructure Status on 2025-11-07 17:00:00 UTC

Many services at our main datacenter are down/unresponsive. There seems to be a networking event going on. We are working with networking to track down and mitigate things. Updates as they become available.

All services have been restored.

Infra and RelEng Update – Week 45, 2025

Posted by Fedora Community Blog on 2025-11-07 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide you with both an infographic and a text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 03rd – 07th November 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Infra and RelEng Update – Week 45, 2025 appeared first on Fedora Community Blog.

🎲 📝 Redis version 8.4

Posted by Remi Collet on 2025-11-04 12:48:00 UTC

RPMs of Redis version 8.4-rc1 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

⚠️ Warning: this is a pre-release version not ready for production usage.

1. Installation

Packages are available in the redis:remi-8.4 module stream.

1.1. Using dnf4 on Enterprise Linux

# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to redis:remi-8.4/common

1.2. Using dnf5 on Fedora

# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset  redis
# dnf module enable redis:remi-8.4
# dnf install redis --allowerasing

You may have to remove the valkey-compat-redis compatibility package.
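If it is present, removing it is a single transaction (a minimal sketch; review what dnf proposes to erase before confirming):

# dnf remove valkey-compat-redis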

2. Modules

Some optional modules are also available:

These packages are weak dependencies of Redis, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).

The modules are automatically loaded after installation and service (re)start.

The modules are not available for Enterprise Linux 8.
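If you would rather install the server without the optional modules, weak dependencies can be disabled for that single transaction (a sketch using the standard dnf install_weak_deps option):

# dnf install --setopt=install_weak_deps=False redis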

3. Future

Valkey also provides a similar set of modules, requiring some packaging changes already proposed for the Fedora official repository.

Redis may be proposed for unretirement and be back in the Fedora official repository, by me if I find enough motivation and energy, or by someone else.

I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, module packages are very far from Packaging Guidelines, so obviously not ready for a review.

4. Statistics

redis

redis-bloom

redis-json

redis-timeseries

Join Us for the Fedora Linux 43 Release Party!

Posted by Fedora Magazine on 2025-11-07 08:00:00 UTC

The Fedora community is coming together once again to celebrate the release of Fedora Linux 43, and you’re invited! Join us on Friday, November 21, 2025, from 13:00 to 16:00 UTC on Matrix for our virtual Fedora 43 Release Party.

This is our chance to celebrate the latest release, hear from contributors across the project, and see what’s new in Fedora Workstation, KDE, Atomic Desktops, and more. Whether you’re a long-time Fedora user or new to the community, it’s the perfect way to connect with the broader community, learn more about Fedora, and hang out in Matrix chat with your Fedora friends.

We have a lineup of talks and updates from across the Fedora ecosystem, including updates directly from teams who have been working on changes in this release. We’ll kick things off with Fedora Project Leader Jef Spaleta and Fedora Community Architect Justin Wheeler, followed by sessions with community members like Timothée Ravier on Atomic Desktops, Peter Boy and Petr Bokoč on the new Fedora Docs initiative, and Neal Gompa and Michel Lind discussing the Wayland-only GNOME experience. You’ll also hear from teams across Fedora sharing insights, demos, and what’s next for the project.

Registration is free but required to join the Matrix event room. Once registered, you’ll receive an invitation in your Matrix account before the event begins.

Sign up on the Fedora Linux 43 Release Party event page. We can’t wait to see you there to come celebrate Fedora 43 with us!

🎲 PHP version 8.3.28RC1 and 8.4.15RC1

Posted by Remi Collet on 2025-11-07 06:27:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections (for parallel installation, the perfect solution for such tests) and as base packages.

RPMs of PHP version 8.4.15RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.28RC1 are available

  • as base packages in the remi-modular-test for Fedora 41-43 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.2 is now in security-only mode, so no more RCs will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*
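Either way, a quick check afterwards (standard commands) confirms which stream and version are active:

dnf module list php
php --version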

ℹ️ Notice:

  • version 8.5.0RC4 is in Fedora rawhide for QA
  • version 8.5.0RC4 is also available in the repository
  • EL-10 packages are built using RHEL-10.0 and EPEL-10.0
  • EL-9 packages are built using RHEL-9.6
  • EL-8 packages are built using RHEL-8.10
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).
  • versions 8.3.28 and 8.4.15 are planned for November 20th, in 2 weeks.

Software Collections (php83, php84)

Base packages (php)

Giving contribution gifts: the risks and rewards

Posted by Ben Cotton on 2025-11-05 12:00:00 UTC

A while back, Emily Omier posted on LinkedIn about what she called “transactional” open source, that is to say, giving contribution gifts. Emily was against it. “The magic of open source,” she wrote, “is that it’s not purely transactional.” Turning contributions into transactional relationships changes the nature of the community. I understand her argument, but I think the answer is more nuanced.

The rewards of gifts

There’s nothing inherently wrong with transactional open source. Not every project or contributor wants to work that way, but when they both do, go for it! Everyone has different motivations, so it’s important to recognize and reward contributors in a way that is meaningful to them.

Some people may participate solely to earn a t-shirt. That’s okay. They still made a contribution they wouldn’t have otherwise. Your project benefits from that.

Plus, getting people in the door is the first step in converting them into long-term contributors. A lot of people who come just for a gift won’t stick around, but some will. And gifts can lead to more contributions from the existing contributor base, too.

The risks of gifts

Gifts are often (relatively) expensive and logistically challenging. The money you spend acquiring and shipping gifts is money that your project can’t spend on things with a better return, like test hardware. Plus, it can take a long time to get the gift into the hands of the recipient. I’ve had packages to India take close to a year to reach their destination. With tariff changes, if you’re shipping into the US from the rest of the world, your contributor may be on the hook for import fees.

If you order ahead to get volume discounts, you have to store the stuff somewhere. And, of course, you have to know how you’re going to decide who gets the gifts. As I wrote in chapter 4 of Program Management for Open Source Projects: “figuring out an equitable way to distribute [1,000 t-shirts] is hard, but it’s a good problem to have. It’s still a problem to solve, though.”

For company-backed projects, there’s a risk of using transaction-avoidance as an excuse to be extractive. The argument could go something like this: “we want to be good open source participants, so we won’t tarnish the purity of these volunteer contributions by spending our money to give people gifts.” The real motivation is to keep the money.

Planning to give contribution gifts

If you’ve decided that you want to give gifts for contributions, you have to start with a plan. The first thing to understand is “why?” Everything else flows from the answer to that question. What behavior are you trying to encourage? Will the gift you offer induce that behavior? Who will handle the gifts?

You have to know what action or actions will trigger a gift. Is it the first contribution? Every contribution? Contributions to specific under-developed areas (like documentation or tests)? Personal or project milestones?

Next, what gifts will you give? It’s important to recognize and reward contributors in a way that is meaningful to them and encourages a sense of belonging. For some people, simple recognition in release notes or a blog post is enough. Some might love a great pair of socks, while others have no more room in their drawer. The environmental impact of cheap thumb drives, USB cables, and t-shirts will harm your reputation with some people. There’s no universal answer.

Then there’s the question of who will take the time to collect information, distribute gifts, and handle follow-up questions. If a gift doesn’t ship for months (or does ship, but takes a long time to be delivered), you’ll undoubtedly have people asking about it. Time spent handling this work is time not spent elsewhere.

If you choose to give contribution gifts, it needs to be part of a broader recognition and incentive program. In general, I suggest saving physical gifts for established contributors, but be very liberal with shout-outs and digital badges.

This post’s featured photo by Jess Bailey on Unsplash.

The post Giving contribution gifts: the risks and rewards appeared first on Duck Alignment Academy.

Normal dates in Thunderbird

Posted by Alejandro Sáez Morollón on 2025-11-05 08:15:30 UTC

A few minutes ago I was answering an email in Thunderbird, and I realized one thing that might have been there for years. The date was in the wrong format! (Wrong as in for me, of course).

I use English (US) for my desktop environment, but I change the format of several things because I use the metric system, and I need the Euro sign and normal dates. Sorry, but month, day, and year is a weird format.

The “normal” thing would be to use my country’s format. But if I select a format from Spain, I get dates in Spanish and in a format that I also hate.

What I want is ISO 8601 and English. But I don’t want to modify each field manually. Too much work. The weird trick is to use Denmark (English). I am not kidding. And I am not alone. At all.

Why, you may ask? Look at this beauty. It’s just perfect.

Anyway. My problem is with Thunderbird. It looks like it doesn’t support having a language and a format from different regions. Thankfully they documented it here.

So now, I have:

  • intl.date_time.pattern_override.date_short set to yyyy-MM-dd

  • intl.date_time.pattern_override.time_short set to HH:mm

I guess I might need more stuff, but at least for now I don’t see a weird date when I am answering emails.
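For reference, the same two overrides can be kept in a user.js file in the Thunderbird profile directory instead of being set one by one in the Config Editor (a sketch; the standard Mozilla user.js mechanism should apply to Thunderbird as well):

// user.js in the Thunderbird profile directory
user_pref("intl.date_time.pattern_override.date_short", "yyyy-MM-dd");
user_pref("intl.date_time.pattern_override.time_short", "HH:mm");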


P.S.: Talking about weird things I like to configure… My keyboards are ANSI US QWERTY. But the layout I use is English (intl., with AltGr dead keys). So I can type Spanish letters using the right alt and a key (e.g. AltGr + n gives me ñ).

Our Goal with Google Summer of Code: Contributor Selection

Posted by Felipe Borges on 2025-11-04 12:54:01 UTC

Last week, as I was writing my trip report about the Google Summer of Code Mentor Summit, I found myself going on a tangent about the program in our community, so I decided to split the content off into a couple of posts. In this post, I want to elaborate a bit on our goal with the program and how intern selection helps us with that.

I have long been saying that GSoC is not a “pay-for-code” program for GNOME. It is an opportunity to bring new contributors to our community, improve our projects, and sustain our development model.

Mentoring is hard and time-consuming. GNOME Developers heroically dedicate hours of their weeks to helping new people learn how to contribute.

Our goal with GSoC is to attract contributors that want to become GNOME Developers. We want contributors that will spend time helping others learn and keep the torch going.

Merge requests are very important, but so are the abilities to articulate ideas, hold healthy discussions, and build consensus among other contributors.

For years, the project proposal was the main deciding factor for a contributor to get an internship with GNOME. That isn’t working anymore, especially in an era of AI-generated proposals. We need to up our game and dig deeper to find the right contributors.

This might even mean asking for fewer internship slots. I believe that if we select a smaller group of people with the right motivations, we can give them the focused attention and support to continue their involvement long after the internship is completed.

My suggestion for improving the intern selection process is to focus on three factors:

  • History of Contributions in gitlab.gnome.org: applicants should solve a few ~Newcomers issues, report bugs, and/or participate in discussions. This gives us an idea of how they perform in the contributing process as a whole.
  • Project Proposal: a document describing the project’s goals and detailing how the contributor plans to tackle the project, including some reasonable time estimates.
  • An interview: a 10- or 15-minute call where admins and mentors can ask applicants a few questions about their Project Proposal and their History of Contributions.

The final decision to select an intern should be a consideration of how the applicant performed across these aspects.

Contributor selection is super important, and we must continue improving our process. This is about investing in the long-term health and sustainability of our project by finding and nurturing its future developers.

If you want to find out more about GSoC with GNOME, visit gsoc.gnome.org

The Practitioner’s Keynote: A Look Behind the Curtain

Posted by Brian (bex) Exelbierd on 2025-11-04 12:50:00 UTC

It’s worth reflecting on the process of public speaking. Many of us who are practitioners in a field see leaders or well-known figures present and we compare ourselves. That comparison isn’t always fair. When an executive stands on stage, they often have an entire team of writers, marketers, and coaches who helped refine the message, build the slides, and critique the delivery. They have access to professional teleprompters and structured practice sessions. Other times, they are in roles where public speaking is a specifically defined function and they have time built into their calendar for preparing and delivering talks. For the rest of us, trying to self-organize that level of time can be difficult and getting that kind of support is a huge ask of friends and colleagues.

My experience delivering my first invited keynote at OpenAlt 2025 this past weekend was a perfect case study in the practitioner’s reality. This post is a look behind that curtain.

The journey began on a compressed timeline, with just over a week from final confirmation to delivery. The organizers asked for a talk on community and collaboration - a “fuzzy” message that is challenging to nail without a team to brainstorm and refine it with. To this goal they added the request that people leave the talk feeling excited about the conference, even if they didn’t “learn anything new.” This is the first hurdle for the practitioner: you are your own strategist, writer, and editor.

Then came the realities of life. The preparation week was a chaotic mix of school holidays and a sick child. While introducing my daughter to Star Wars was a joy, it meant that talk preparation happened in fragmented bursts, often late at night. Not a complaint - it’s the environment in which most of us do this “extra” work.

I developed a habit of not using slides when there is no real visual requirement for the message. I feel like it focuses both the audience and me on the message of the talk. This was that kind of talk. To manage the terror of speaking without slides, I wrote a full script. But my delivery aids were DIY and not a professional setup. I tried using an iPad instead of printed notes, which turned out to be a mistake. Paper is static and aids spatial memory, while the iPad’s scrolling and flawed teleprompter mode were more of a distraction than a help. This is another practitioner reality: we make do with the tools we have, and sometimes learn lessons the hard way.

When it’s time to present, last-minute changes and diversions start piling up. A sudden request to introduce myself, or help craft an intro to be said by someone else, required an on-the-fly pivot before the talk even began. Throughout the talk, I made small adjustments and ad-libs to match the energy of the room. The script isn’t designed to be read, instead it internalizes the themes and points for me and helps me hit my time goals. The time goal is always a factor when you finally give the talk. When you’re not reading a script, there’s a temptation to pull in discarded material and risk a rushed ending.

In the end, the talk, “Bring Wood for the Fire,” was a success. I received positive feedback from attendees able to cite specific comments that resonated with them, not just generic platitudes. The organizers let me know that I exceeded their goals and they are pleased with the final product. I’m proud that the core message held strong despite the chaotic process and live detours. It tells me I developed a solid message and then internalized the narrative, which is a speaker’s ultimate goal. As a practitioner, I may not have a team of professionals, but I can know my story inside and out.

Key takeaways

  • Prepare a tight script when skipping slides — it helps you keep time and structure.
  • Use static notes (paper or static electronic notes) for spatial memory; avoid scrolling teleprompters unless you’re used to them and they sit naturally in your sightline.
  • Internalize the narrative so you can adapt and ad-lib without losing the core message.
  • Expect last-minute pivots; prepare a flexible intro and a short fallback.

So when you watch a polished keynote, appreciate the craft, but don’t use it as a stick to beat yourself with. As practitioners, our process is different. It’s messier, more chaotic, and intensely personal. And succeeding within those constraints is a victory worth celebrating.

A recording of the talk, including Czech subtitles, is available on the conference’s PeerTube instance. An alternate version is also available on YouTube. I appreciate the organizers uploading these so quickly and for doing the translation.

Note: Updated on 5 November 2025 with the PeerTube Link.

Announcing Flock to Fedora 2026 (14-16 June): Join Us in Prague!

Posted by Fedora Magazine on 2025-11-04 08:00:00 UTC
Banner image for the Flock to Fedora 2026 conference. The image shows Colúr, the animated mascot of Flock, holding a megaphone. The "Flock" logo appears with "Prague, Czech Republic" and "June 14 - 16, 2026" written below the Flock logo.

The official dates and location are set for Flock to Fedora 2026, the premier annual conference for Fedora Project contributors. The event will take place from 14-16 June 2026, in Prague, Czechia.

For Flock 2026, we are returning to the Vienna House by Wyndham Andel’s Prague, located at:

Stroupeznickeho 21
Prague, 150 00
Czech Republic

While all three days will be full conference days, the arrangement of the schedule will change slightly in 2026. Sunday, 14 June, will be designated as Day 0, featuring workshops, team meetups, and hands-on contributor sessions. The main conference activities, including streamed content, the opening keynote, and other sessions, are scheduled for Monday, 15 June, and Tuesday, 16 June.

Coordinated Scheduling with DevConf CZ

Following community feedback from last year, Flock 2026 has been scheduled to align more closely with DevConf.CZ. The conference will conclude just before DevConf.CZ begins in Brno (18-20 June 2026). This compressed travel schedule is intended to make it easier for community members who wish to attend both events.

Call for Proposals & Conference Themes

The Call for Proposals (CFP) for Flock 2026 will open in early December 2025 and close shortly after FOSDEM 2026 (31 January – 1 February). Speaker confirmations are scheduled to be sent in March 2026.

For Flock 2026, we are taking a more focused approach to session content. The Fedora Council, FESCo, and the Mindshare Committee are shaping key themes for the CFP. All presentation and workshop submissions should align with one of these themes. More details will be shared when the CFP opens.

Planning for Flock 2026

Here is what you need to know to plan your attendance:

  • Registration: Conference registration is scheduled to open in January 2026.
  • Sponsorship: Is your company or organization interested in sponsoring Flock 2026? Our sponsorship prospectus for Flock 2026 is now available on the Flock 2026 website. Organizations interested in supporting Flock and the Fedora community are encouraged to review the prospectus and contact the organizing team with any questions.
  • Hotel Block: A discounted block of rooms is arranged at the conference hotel. More information about the discounted hotel block can be found on the Flock website.
  • Travel Day & Connections: 17 June is designated as a free travel day between Flock to Fedora 2026 and DevConf.CZ. Frequent bus and train connections are available for travel between Prague and Brno.
  • Sponsored Travel: We intend to offer sponsored travel again for Flock to Fedora 2026. More details will follow in December 2025.

Get Involved & Ask Questions

The official Flock to Fedora 2026 Matrix room, #flock:fedoraproject.org, is the best place to connect with organizers and other community members. We encourage you to join the channel for the latest updates and to ask any questions you may have.

Flock to Fedora 2026 web site

A Note on Our Flock to Fedora 2026 & 2027 Plans

We recognize that returning to the same city and venue for a second consecutive year is a departure from Flock’s tradition. This decision was made intentionally with two key goals in mind.

First, by working with a familiar venue, our organizing team can optimize its processes and plan further in advance. This stability for Flock to Fedora 2026 will give us more opportunity to improve our internal processes and explore new ways to incorporate community input into the design of Fedora’s flagship contributor conference.

Second, this allows us to plan for a significant change in 2027. The Flock organizing team is committed to exploring new locations for Flock 2027, with a particular focus on regions outside of North America and Europe. We acknowledge the travel difficulties many of our contributors in regions like LATAM and APAC face. We learned valuable lessons from past planning cycles and are eager to achieve this goal, while also recognizing that unforeseen circumstances can impact our plans. We will work with community members in these regions to explore possible options and conduct thorough research on pricing and availability for 2027.

We look forward to seeing you in Prague for Flock 2026, 14-16 June.

— The Flock to Fedora Planning Team

Backups with btrbk

Posted by Tomasz Torcz on 2025-11-02 19:52:18 UTC

The storage setup of my home server is btrfs raid1 over two dm-crypt'ed 16TB HDDs, cached with bcache on NVMe. It works fine; however, for the PostgreSQL database and my homedir I prefer full NVMe speed.

Therefore I've put those two directories on (dm-crypt'ed) btrfs subvolumes directly on the NVMe. Thanks to the DUP profile there's protection against bitrot, but it's still a single device which may just die. Apart from full backups, I was doing a daily rsync onto the main drives, but there's a faster and more capable way.

Enter btrbk, which operates on btrfs subvolumes. It uses btrfs' native send capability to copy subvolumes between filesystems efficiently.

Additionally, it's very easy to keep a number of historic subvolume snapshots. They use copy-on-write, minimizing space usage. This lets me recover files quickly or compare the filesystem's state over the last few days.

The config is a bit tricky, which is why I'm posting this. My full backup config is below, divided into three sections with explanations.

timestamp_format        long
snapshot_preserve       14d
snapshot_preserve_min   2d      # defaults to 'all'

The source definition. This combination of preserve options is needed to keep daily snapshots for the last two weeks and have older snapshots removed.

target_preserve         7d
target_preserve_min     latest  # defaults to 'all'

What to do with subvolume copies in the target directory. The combination of options above keeps the last seven days of snapshot copies.

send_compressed_data    yes

volume /run/btrbk-work
        target /home/poligon/backs/btrbk_snaps

        subvolume home_zdzichu

        subvolume var_lib_pgsql

Job definition. /run/btrbk-work is a directory where I temporarily mount the NVMe drive's root volume with the subvolumes beneath it. /home/poligon/backs/btrbk_snaps is the directory on my main (raid1) pool where the subvolume copies are stored. The last two lines are the specific subvolumes to copy.

That works for me. btrbk is run from cron.daily/ by a short script ensuring everything is mounted where it should be; a rough sketch of that script is below.
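Sketch of the wrapper script (the device mapper name here is a hypothetical stand-in for my real one):

#!/bin/sh
# cron.daily btrbk wrapper -- a rough sketch; the device name is hypothetical
set -e
mkdir -p /run/btrbk-work
# mount the NVMe root volume (with the subvolumes beneath) if not mounted yet
mountpoint -q /run/btrbk-work || mount /dev/mapper/nvme-crypt /run/btrbk-work
# run all configured btrbk backup jobs
btrbk -q run
umount /run/btrbk-work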

020/100 of #100DaysToOffload

infra weeksly recap: late October 2025

Posted by Kevin Fenzi on 2025-11-01 17:25:28 UTC
Scrye into the crystal ball

I didn't do a recap last week (because I was on PTO on friday and monday) and thought about not doing one today either (I was on PTO friday/yesterday), but I thought of a few good items to talk about. :)

Fedora Linux 43 released

Of course Fedora Linux 43 was released; you should install/upgrade to it today.

I typically upgrade all my machines at home (that aren't running rawhide) the week before release. I did that this time with all of them except one. On those other machines f43 was a typical nothing burger: no real problems, everything working as expected.

On my main server, however, I held off on the upgrade for now. This is due to:

  • I run my own matrix server, using the matrix-synapse package in fedora. Sadly, this package has had issues in F43+ due to python stack changes. As I understand it, it uses pydantic, but via a v1 compatibility mode, which changed a bit due to python 3.14. Luckily the Fedora maintainer worked on a patch to move it to v2 and worked through all the tests. It's merged upstream now. I expect f43/rawhide builds soon.

  • There are some issues around postgresql. f42 uses postgresql16, but f43 provides postgresql18 by default. You need to upgrade through 17. I went ahead and just did this on f42 (both 16 and 17 are packaged for f42), so that should be ready for the upgrade to 18 now; a rough sketch of the flow is after this list.

  • dovecot changed its config file format a great deal. I still need to port my f42 dovecot config to the new version before I upgrade.
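A rough sketch of what that postgresql upgrade flow can look like (the versioned package names, and whether postgresql-setup drives the 16-to-17 jump directly on f42, are assumptions here; check the docs for your release):

# the versioned package names below are illustrative assumptions
sudo dnf install postgresql17-server postgresql17-upgrade
# migrate the data directory from the previous major version
sudo postgresql-setup --upgrade
sudo systemctl restart postgresql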

So, hopefully all that will be handled soon and I can upgrade that last server.

tcp timeout issues

The tcp timeout issues we are seeing between vlans in the new datacenter ( https://pagure.io/fedora-infrastructure/issue/12814 ) continue to vex. Networking has tried a few things; I think it might be better, but we have not come up with a complete fix yet.

However, I did find another interesting datapoint: moving our proxies to use port 8080 on the backend kojipkgs servers (going directly to httpd there) instead of port 80 (varnish) has seen no failures.

So, it's looking like some kind of traffic issue with port 80 flows. Networking is trying to find anything that would just be affecting that.

Power news

Got 3 power9 servers set up and processing copr builds now. This should help with copr ppc64le build capacity, and it also allowed us to test/configure the 'fedora-isolated' vlan that is going to have all the things from the rdu2-cc datacenter moved to it in early December. (This includes pagure.io).

We now (finally, I hope) have a configuration for the power10s that will work for our needs. One of the two is set up this way now. I have created all the lpars and next week hopefully can get them installed. Once those are installed, I can move half of the existing buildvm-ppc64le's over to it and we can reconfigure the first server. This should allow spreading the load between the two and give them all some more resources.

Secure boot signing work

I finally have an ansible PR to set up the siguldry pesign bridge. Hopefully I can land that next week. This will move us from the current secure boot signing setup (a smart card on one builder) to using sigul (the thing that signs all our other stuff) to do the signing. We can then just configure whichever builders we need to sign on, with no hardware changes needed. This will also, I sure hope, allow us to set up signing for aarch64, something that's been in progress for like 6 years.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115475810895231150

📝 Valkey version 9.0

Posted by Remi Collet on 2025-10-17 12:29:00 UTC

With version 7.4, Redis Labs chose to switch to the RSALv2 and SSPLv1 licenses, leaving the open source world.

Most Linux distributions chose to drop it from their repositories. Various forks exist, and Valkey, which seems a serious one, was chosen as a replacement.

So starting with Fedora 41 and Enterprise Linux 10 (CentOS, RHEL, AlmaLinux, RockyLinux...), redis is no longer available, but valkey is.

With version 8.0, Redis Labs chose to switch to the AGPLv3 license, and so Redis is back as an open source project, but a lot of users have already switched and want to keep valkey.

RPMs of Valkey version 9.0 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).


So you now have the choice between Redis and Valkey.

1. Installation

Packages are available in the valkey:remi-9.0 module stream.

1.1. Using dnf4 on Enterprise Linux

# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to valkey:remi-9.0/common

1.2. Using dnf5 on Fedora

# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset  valkey
# dnf module enable valkey:remi-9.0
# dnf install valkey

The valkey-compat-redis compatibility package is not available in this stream. If you need the Redis commands, you can install the redis package.

2. Modules

Some optional modules are also available:

These packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).

The modules are automatically loaded after installation and service (re)start.

3. Future

Valkey also provides a set of modules, which may be submitted to the Fedora official repository.

Redis may be proposed for reintegration and return to the Fedora official repository, by me if I find enough motivation and energy, or by someone else.

So users will have the choice and can even use both.

ℹ️ Notices:

  • Enterprise Linux 10.0 and Fedora ≤ 42 have Valkey 8.0 in their repository
  • Fedora 43 will have Valkey 8.1
  • Fedora 44 will have Valkey 9.0
  • CentOS Stream 9 also has valkey 8.0, so it should be part of EL-9.7.

4. Statistics

valkey

📝 Valkey version 8.1

Posted by Remi Collet on 2025-08-01 07:35:00 UTC

With version 7.4, Redis Labs chose to switch to the RSALv2 and SSPLv1 licenses, leaving the open source world.

Most Linux distributions chose to drop it from their repositories. Various forks exist, and Valkey, which seems a serious one, was chosen as a replacement.

So starting with Fedora 41 and Enterprise Linux 10 (CentOS, RHEL, AlmaLinux, RockyLinux...), redis is no longer available, but valkey is.

With version 8.0, Redis Labs chose to switch to the AGPLv3 license, and so Redis is back as an open source project, but a lot of users have already switched and want to keep valkey.

RPMs of Valkey version 8.1.3 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

So you now have the choice between Redis and Valkey.

1. Installation

Packages are available in the valkey:remi-8.1 module stream.

1.1. Using dnf4 on Enterprise Linux

# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to valkey:remi-8.1/common

1.2. Using dnf5 on Fedora

# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module enable valkey:remi-8.1

The valkey-compat-redis compatibility package is not available in this stream. If you need the Redis commands, you can install the redis package.

2. Modules

Some optional modules are also available:

These packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).

The modules are automatically loaded after installation and service (re)start.

3. Future

Valkey also provides a set of modules, requiring some packaging changes already proposed for the Fedora official repository.

Redis may be proposed for reintegration and return to the Fedora official repository, by me if I find enough motivation and energy, or by someone else.

So users will have the choice and can even use both.

ℹ️ Notice: Enterprise Linux 10.0 and Fedora have valkey 8.0 in their repository. Fedora 43 will have valkey 8.1. CentOS Stream 9 also has valkey 8.0, so it should be part of EL-9.7.

4. Statistics

valkey

No longer a director at the PSF board

Posted by Kushal Das on 2025-10-31 13:00:33 UTC

This month I attended my last meeting as a director of the Python Software Foundation board; the new board has already had its first meeting.

I decided not to run again in the election because:

  • I had been a director since 2014 (except for one year, when Python's random call chose another name), which means 10 years, and that is long enough.
  • Being an immigrant in Sweden means my regular travel is very restricted, and that stress affects all parts of life.

When I first ran in the election I did not think it would continue this long. But the Python community is amazing, and I felt I should continue. Eventually, though, my brain told me to give the space to new folks.

I will continue taking part in all other community activities.

Infra and RelEng Update – Week 44

Posted by Fedora Community Blog on 2025-10-31 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide you with both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 27 October – 31 October 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Infra and RelEng Update – Week 44 appeared first on Fedora Community Blog.

Ubuntu 26.04 LTS development is open

Posted by Phil Wyett on 2025-10-29 23:28:00 UTC
For those on the bleeding edge of Ubuntu development: Ubuntu 26.04 (Resolute Raccoon) development is now open for business.

26.04 will be a Long Term Support (LTS) release.

Install images can be found at the link below.

Ubuntu 26.04 latest desktop install images

Google Summer of Code Mentor Summit 2025

Posted by Felipe Borges on 2025-10-29 11:05:09 UTC

Last week, I took a lovely train ride to Munich, Germany, to represent GNOME at the Google Summer of Code Mentor Summit 2025. This was my first time attending the event, as previous editions were held in the US, which was always a bit too hard to travel to.

This was also my first time at an event with the “unconference” format, which I found to be quite interesting and engaging. I was able to contribute to a few discussions and hear a variety of different perspectives from other contributors. It seems that when done well, this format can lead to much richer conversations than our usual, pre-scheduled “one-to-many” talks.

The event was attended by a variety of free and open-source communities from all over the world. These groups are building open solutions for everything from cloud software and climate applications to programming languages, academia, and of course, AI. This diversity was a great opportunity to learn about the challenges other software communities face and their unique circumstances.

There was a nice discussion with the people behind MusicBrainz. I was happy and surprised to find out that they are the largest database of music metadata in the world, and that pretty much all popular music streaming services, record labels, and similar groups consume their data in some way.

Funding the project is a constant challenge for them, given that they offer a public API that everyone can consume. They’ve managed over the years by making direct contact with these large companies, developing relationships with the decision-makers inside, and even sometimes publicly highlighting how these highly profitable businesses rely on FOSS projects that struggle with funding. Interesting stories. :)

There was a large discussion about “AI slop,” particularly in GSoC applications. This is a struggle we’ve faced in GNOME as well, with a flood of AI-generated proposals. The Google Open Source team was firm that it’s up to each organization to set its own criteria for accepting interns, including rules for contributions. Many communities shared their experiences, and the common solution seems to be reducing the importance of the GSoC proposal document. Instead, organizations are focusing on requiring a history of small, “first-timer” contributions and conducting short video interviews to discuss that work. This gives us more confidence that the applicant truly understands what they are doing.

GSoC is not a “pay-for-feature” initiative, neither for Google nor for GNOME. We see this as an opportunity to empower newcomers to become long-term GNOME contributors. Funding free and open-source work is hard, especially for people in less privileged places of the world, and initiatives like GSoC and Outreachy allow these people to participate and find career opportunities in our spaces. We have a large number of GSoC interns who have become long-term maintainers and contributors to GNOME. Many others have joined different industries, bringing their GNOME expertise and tech with them. It’s been a net-positive experience for Google, GNOME, and the contributors over the past decades.

Our very own Karen Sandler was there and organized a discussion around diversity. This topic is as relevant as ever, especially given recent challenges to these initiatives in the US. We discussed ideas on how to make communities more diverse and better support the existing diverse members of our communities.

It was quite inspiring. Communities from various other projects shared their stories and results, and to me, it just confirmed my perception: while diverse communities are hard to build, they can achieve much more than non-diverse ones in the long run. It is always worth investing in people.

As always, the “hallway track” was incredibly fruitful. I had great chats with Carl Schwan (from KDE) about event organizing (comparing notes on GUADEC, Akademy, and LAS) and cross-community collaboration around technologies like Flathub and Flatpak. I also caught up with Claudio Wunder, who did engagement work for GNOME in the past and has always been a great supporter of our project. His insights into the dynamics of other non-profit foundations sparked some interesting discussions about the challenges we face in our foundation.

I also had a great conversation with Till Kamppeter (from OpenPrinting) about the future of printing in xdg-desktop-portals. We focused on agreeing on a direction for new dialog features, like print preview and other custom app-embedded settings. This was the third time I’ve run into Till at a conference this year! :)

I met plenty of new people and had various off-topic chats as well. The event was only two days long, but thanks to the unconference format, I ended up engaging far more with participants than I usually do in that amount of time.

I also had the chance to give a lightning talk about GNOME’s long history with Google Summer of Code and how the program has helped us build our community. It was a great opportunity to also share our wider goals, like building a desktop for everyone and our focus on being a people-centric community.

Finally, I’d like to thank Google for sponsoring my trip, and for providing the space for us all to talk about our communities and learn from so many others.