
Fedora People

Installing nightly syslog-ng arm64 packages on a Raspberry Pi

Posted by Peter Czanik on 2025-04-01 14:21:10 UTC

Last week, I posted about running nightly syslog-ng container images on arm64. However, you can also install syslog-ng directly on the host (in my case, a Raspberry Pi 3), running the latest Raspberry Pi OS.

Read more at https://www.syslog-ng.com/community/b/blog/posts/installing-nightly-syslog-ng-arm64-packages-on-a-raspberry-pi

syslog-ng logo

UMU - Unified Linux Game Launcher: Revolutionary step towards an Open Source gaming platform without Steam

Posted by Pavlo Rudy on 2025-03-30 15:49:15 UTC

Important projects for getting Windows applications up and running on Linux


Let's find out why we need another game launcher for Linux and why well-known developers invest their time in creating it. The first question from newcomers will probably be "What does the word UMU even mean?" UMU comes from Polynesian culture and means hot volcanic stones that are good for cooking. So this project is definitely hot.

Linux as a platform for games is very promising - Valve, CodeWeavers (the main developers behind Wine, the most important application for running Windows software on Linux) and other companies and developers have invested a lot in the ability to run Windows applications and in the Linux graphics stack. And this is not only about games - Wine is able to run the Adobe Suite (Photoshop, Illustrator, InDesign), Microsoft Office and many other applications.

The Steam storefront

For the average Linux user, there is a landscape of tools that make life easier:

  • Wine, developed by CodeWeavers and the open source community.
  • Proton - a fork of Wine, created by Valve and also developed with the help of the open source community.

At first sight, these two projects are competitors, but this is not true. Both have the same roots, and the success of one is also the success of the other, so cooperation benefits both.

Additional tools:

  • DXVK - a Vulkan-based implementation of D3D8, 9, 10 and 11 for Linux and Wine.
  • vkd3d - a 3D graphics library built on Vulkan with an API very similar to Direct3D 12.
  • vkd3d-proton - a fork of vkd3d maintained by Valve.
  • winetricks - an easy way to work around problems in Wine.
  • protontricks - a wrapper that does winetricks things for Proton-enabled games; it also requires winetricks.

Finally, the tip of the iceberg - game launchers:

  • Lutris
  • PlayOnLinux
  • Bottles
  • Heroic Games Launcher
  • Faugus Launcher and many more.

There are a lot of launchers - everyone wants to make life easier for their users. Steam is a launcher too, with the store, community, achievements and a lot of other stuff inside. Steam has two downsides: it is proprietary, and it is a 32-bit application. Why open source is better than proprietary is obvious - you can improve it and analyze possible security issues. A 32-bit application is painful for Linux maintainers because they have to provide additional 32-bit libraries as dependencies, update them and do QA.

So can we use Proton without Steam for a more open source gaming environment? The short answer is yes, but not without problems. Proton is developed for full Steam compatibility and uses the Steam Runtime - a compatibility environment for running Steam games on various Linux distributions. Here UMU scores well, because it is nothing other than a modified version of the Steam Runtime Tools and Steam Linux Runtime that Valve uses for Proton.

More competition is always better for customers, so more game launchers will bring more unique features to the Linux ecosystem. Some launchers already support UMU: Lutris, Heroic Games Launcher and Faugus Launcher. Also, UMU works inside Snap or Flatpak packages, but doesn't provide its own package for either yet.

Who is behind UMU

UMU was created by important figures from the Linux gaming scene.

They have very different levels of commitment, but just the fact that they are all involved sends a good signal to all of us.

Go hard or go home - how UMU can win the game

Many years ago, when I first tried Wine to run a game on Linux, the algorithm was this:

  • Install Wine and run wine game.exe.
  • If it doesn't work, analyze the log and use winetricks to install the correct library.
  • If it fails, go to the Wine Application Database (also called AppDB) to find the right way.

UMU was designed by people with the above experience, so they decided to create the UMU Database, which contains all the information needed to run games successfully. There are also differences between game versions: games from Steam and the Epic Games Store are not exactly the same and may need different fixes for successful gameplay. The UMU game fixes are called protonfixes, and their repository can be found here.

For example, let's search for Grand Theft Auto V in the UMU database. This is a very popular game that needs no advertising:

TITLE,STORE,CODENAME,UMU_ID,COMMON ACRONYM (Optional),NOTE (Optional)
Grand Theft Auto V,egs,9d2d0eb64d5c44529cece33fe2a46482,umu-271590,gtav,
Grand Theft Auto V,none,none,umu-271590,gtav,Standalone Rockstar installer
Grand Theft Auto V Enhanced on Steam

As you can see, this is the same game with ID umu-271590, but from different stores: Epic Games and Rockstar. The UMU database currently has 1090 games, which is a good result given that the first UMU release was in February 2024.

Non-Steam Games

First, you need to install Proton. You can do it using Steam or, in the case of Proton GE, follow its README. You don't need to worry about the Steam Runtime: the latest version will be downloaded to $HOME/.local/share/umu.
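
A minimal sketch of the manual route for Proton GE, assuming you want it in your home directory; the release tag and URL below are examples, so check the proton-ge-custom releases page for the current build:

$ wget https://github.com/GloriousEggroll/proton-ge-custom/releases/download/GE-Proton9-4/GE-Proton9-4.tar.gz
$ tar -xzf GE-Proton9-4.tar.gz -C ~/

You can then point PROTONPATH at the extracted ~/GE-Proton9-4 directory, as in the examples below.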

Now start with no options:

$ umu-run game.exe

UMU will automatically set up Proton and create a Wine prefix. This will work in some cases, but many games need more tuning. The next command runs Star Citizen, applies its protonfix and creates the Wine prefix in a different location:

$ WINEPREFIX=~/.wine-starcit GAMEID=umu-starcitizen PROTONPATH=~/GE-Proton9-4 umu-run StarCitizen.exe
Star Citizen gameplay

If something goes wrong, the debug option is available:

$ UMU_LOG=debug WINEPREFIX=~/.wine-starcit GAMEID=umu-starcitizen PROTONPATH=~/GE-Proton9-4 umu-run StarCitizen.exe

To run a winetricks command, for example to install some libraries:

$ PROTONPATH=GE-Proton umu-run winetricks quartz corefonts

In case a protonfix is broken, or the game needs to be tested in a clean environment:

$ PROTONFIXES_DISABLE=1 WINEPREFIX=~/.wine-starcit GAMEID=umu-starcitizen PROTONPATH=~/GE-Proton9-4 umu-run StarCitizen.exe

Want to try another experimental Proton build? No problem, UMU allows you to use it with these environment variables:

$ GAMEID=0 PROTONPATH=/path/to/proton-ge-custom-bin umu-run game.exe

If you want to run a native Linux game and disable Proton, there's a special option for it:

$ UMU_NO_PROTON=1 umu-run game.sh

Steam Games

They are supported too - run games from any store just like you do on Steam. The main difference from doing it manually with Wine or Proton is that UMU can automatically apply Proton fixes for the game title. When Steam runs a game through Proton, it shares some data about the specific game title and the configuration that title needs - for example, whether it needs OpenGL or Vulkan. UMU is able to talk to Proton in the same way Steam does.
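
For example, a hypothetical invocation for the Grand Theft Auto V copy from the database example above would reuse its umu-271590 ID so the matching protonfix is applied; the prefix and executable paths here are purely illustrative:

$ WINEPREFIX=~/.wine-gtav GAMEID=umu-271590 PROTONPATH=~/GE-Proton9-4 umu-run ~/Games/GTAV/PlayGTAV.exe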

Final Note

Unsplash / Uriel Soberanes

UMU is the unified launcher for Linux that allows you to run Windows games outside of Steam and does the heavy lifting of setting up the game environment correctly. It has a large database of games with preconfigured protonfixes for each, but also supports many flags and environment variables for complex configurations. You can use UMU as a standalone launcher or together with other launchers like Lutris to get a stable Proton environment for many games. Thank the contributors and remember - you can win even faster with open source software!

Late March infra bits 2025

Posted by Kevin Fenzi on 2025-03-29 20:04:01 UTC
Scrye into the crystal ball

Another week, another Saturday blog post.

Mass updates/reboots

We did another mass update/reboot cycle. We try to do these every so often, as the Fedora release schedule permits. We usually do all our staging hosts on a Monday, then on Tuesday a bunch of hosts that we can reboot without anyone really noticing (i.e., we have HA/failover/other paths, or the service is just something that we consume, like backups), and finally on Wednesday we do everything else (hosts that do cause outages).

Things went pretty smoothly this time; I had several folks helping out, and that's really nice. I have done them all by myself before, but it takes a while. We also fixed a number of minor issues with hosts: serial consoles not working right, NBDE not running correctly, and Zabbix users not being set up correctly locally. There was also a hosted server where reverse DNS was wrong, causing Ansible to have the wrong FQDN and messing up our update/reboot playbook. Thanks James, Greg and Pedro!

I also used this outage to upgrade our proxies from Fedora 40 to Fedora 41.

After that our distribution of instances is:

number / ansible_distribution_version

252 / 41
105 / 9.5
21 / 8.10
8 / 40
2 / 9
1 / 43

It's interesting that we now have 2.5x as many Fedora instances as RHEL, although that's mostly the case due to all the builders being Fedora.
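
For the curious, a rough way to generate a count like this is an ad-hoc Ansible fact-gathering run piped through sort and uniq; the exact filtering below is only a sketch and assumes a working inventory:

$ ansible all -m setup -a 'filter=ansible_distribution_version' | grep -oP '"ansible_distribution_version": "\K[^"]+' | sort | uniq -c | sort -rn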

The Fedora 40 GA compose breakage

Last week we got very low on space on our main fedora_koji volume. This was mostly caused by the storage folks syncing all the content to the new datacenter, which meant that it kept snapshots as it was syncing.

In an effort to free space (before I found out there was nothing we could do but wait), I removed an old composes/40/ compose. This was the final compose for Fedora 40 before it was released, and the reason we kept it in the past was to allow us to make delta RPMs more easily. It's the same content as the base GA stuff, but it's in one place instead of split between the fedora and fedora-secondary trees. Unfortunately, there were some other folks using this: internally it was being used for some things, and IoT was also using it to make their daily image updates.

Fortunately, I didn't actually fully delete it; I just copied it to an archive volume, so I was able to point the old location to the archive, and everyone should be happy now.

Just goes to show that if you set up something for yourself, others often find it helpful as well, unknown to you, so retiring things is hard. :(

New pagure.io DDoS

For the most part we are handling load ok now on pagure.io. I think this is mostly due to us adding a bunch of resources, tuning things to handle higher load and blocking some larger abusers.

However, on Friday we got a new fun one: a number of IPs were crawling an old (large) git repo, grabbing git blame on every rev of every file. This wasn't causing a problem on the webserver or bandwidth side, but instead caused problems for the database/git workers. Since they had to query the db on every one of those requests and get a bunch of old historical data, it saturated the CPUs pretty handily. I blocked access to that old repo (that's not even used anymore) and that seemed to be that, but they may come back again doing the same thing. :(

We do have an investigation open for what we want to do long term. We are looking at anubis, rate limiting, mod_qos and other options.

I really suspect these folks are just gathering content which they plan to resell to AI companies for training. Then the AI company can just say they bought it from Bob's scraping service and 'openwash' the issues. No proof of course, just a suspicion.

Final freeze coming up

Finally, the final freeze for Fedora 42 starts next Tuesday, so we have been trying to land anything last minute. If you're a maintainer or contributor working on Fedora 42, do make sure you get everything lined up before the freeze!

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114247602988630824

Vangelis

Posted by Peter Czanik on 2025-03-29 09:14:51 UTC

On this day in 1943 Vangelis was born. The very first CD I bought over three decades ago was composed by him: Chariots of Fire. After so many years, I still love his music.

My Vangelis collection

As you can see, I do not have everything by him. I do not like his earliest and latest works that much, but almost everything in between. Unfortunately I could not find everything on CD. For example, I loved “Soil Festivities”, especially since I was a soil engineer during my college years. But not only is it not available on CD (even used), it is also missing from streaming services.

Several times I learned years later that the music I was listening to was actually a movie soundtrack. Chariots of Fire is one of them, as well as Blade Runner. Blade Runner became one of my favorite movies, and it's the only movie I have on 4K Blu-ray.

I’m listening to Vangelis right now and expect to listen to a few more of his albums today :-)

Or, on TIDAL: https://listen.tidal.com/album/103208768

How artifacts are signed in Fedora

Posted by Jeremy Cline on 2025-03-28 19:39:00 UTC

For the last few months, one of the things I’ve been working on in Fedora is adding support for SecureBoot on Arm64. The details of that work will be the subject of a later post, but as part of this work I’ve become somewhat familiar with the signing infrastructure in Fedora and how it works. This post introduces the various pieces of the current infrastructure, and how they fit together.

Signed?

Pretty much anything Fedora produces and distributes is digitally signed so users can verify it did, in fact, come from the Fedora project. Perhaps the most obvious example of this is the RPM packages Fedora produces. However, plenty of other artifacts are also signed, like OSTree commits.

Signing works using public-key cryptography. We have the private key that we need to keep secret, and we distribute the public keys to users so they can verify the artifact.
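
On the user side, this is what makes routine signature checks possible. A quick sketch, where the package file name is just an example:

$ rpm -K example-package-1.0-1.fc42.noarch.rpm
$ rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'

The first command verifies the package signature; the second lists the public keys currently imported into the RPM database.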

Robosignatory

Signing is (mostly) an automated process. A service, called robosignatory, connects to an AMQP message broker and subscribes to several message topics. When an artifact is created that needs signing, the creator of the artifact sends a message to the AMQP broker using one of these topics.

Robosignatory does not sign artifacts itself. It collects requests from various other services and submits them to the signing server on behalf of those systems. The signing server is the system that contains and protects the private keys.

Sigul

Sigul is the signing server that holds the private keys and performs signing operations. It is composed of three parts: the client, the bridge, and the server.

The Server

The Sigul server is designed to control access to signing keys and to provide a minimal attack surface. It does not behave like a traditional server in that it does not accept incoming network connections. Instead, it connects to the Sigul bridge, and the bridge forwards requests to it via that connection.

This allows the firewall to be configured to allow outgoing traffic to a single host, and to block all incoming connections. This does mean that the host cannot be managed via normal SSH operations. Instead this is done in Fedora via the host’s out-of-band management console.

Private keys are encrypted on disk. When users are granted access to keys, the key is encrypted for that particular user.

The Bridge

The Sigul bridge acts as a specialized proxy. Both the client and server connect to the bridge, and it forwards requests and responses between them.

Unlike the typical proxy, the Sigul bridge does a few interesting things. Firstly, it requires the client to authenticate via TLS certificates. The client then sends the bridge the request it wishes to make. The bridge forwards that request to the server. The bridge then expects the client and server to send an arbitrary amount of data to each other. This arbitrary data is framed as chunks and it forwards that data back and forth until an end of stream signal is sent by the server and client.

Finally, it forwards the server’s response to the client.

The Client

The client is a command-line interface that sends requests to the server via the bridge. This client can be used to create users and keys, grant and revoke access to keys, and request signatures in various formats. Robosignatory invokes the CLI to sign artifacts.
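
To give a flavor of what that looks like, here are a few illustrative client invocations; the key and file names are placeholders, and the exact subcommands and options vary by Sigul version, so treat this as a sketch rather than a reference:

$ sigul list-keys
$ sigul sign-data fedora-example-key some-artifact.bin
$ sigul sign-rpm fedora-example-key some-package-1.0-1.fc42.noarch.rpm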

The client connects to the bridge as described above. After authenticating with the bridge and sending the request, it begins a second TLS connection configured with the Sigul server’s hostname, and sends and receives data on this connection over the TLS connection it has with the Sigul bridge. It relies on this inner TLS connection to send a set of session keys to the server without the bridge being able to intercept them. These session keys are used to sign the Sigul server’s responses so the client can be sure the bridge did not tamper with them.

After it has established a set of secrets with the server, the remainder of the interaction occurs over the TLS connection with the bridge, but since the responses are signed both the client and server can be confident the bridge has not tampered with the conversation.

Conclusion

It’s an interesting setup, and has served Fedora for many years. There is plenty of code in Sigul which dates back to 2009, not long after Python 2.6 was released with exciting new features like the standard library json module. It was likely developed for RHEL 5 which means Python 2.4.

Unfortunately, at least one of the libraries it depends on to work (python-nss) is not maintained and is not in Fedora 42, so something will have to be done in the near future. Still, it's not ready to sign off just yet.

Modern Minimum Laptop

Posted by Avi Alkalay on 2025-03-28 10:49:49 UTC

Now that people have a more independent and cloud-based set of work tools, they can finally consider leaving Windows behind and getting a better and cheaper laptop, like a MacBook Air.

“What? A MacBook Air cheaper than a Windows laptop? Impossible!”

Well, if you look in the Wintel (Windows + Intel) universe for a laptop with the same features as Apple’s entry-level laptop — the MacBook Air — the Wintel version will be more expensive.

A basic laptop today should have a 3K or 4K display (Full HD is already obsolete), solid-state storage, high-quality camera, microphone, and audio, and be thin, lightweight, and stylish. Most importantly, the battery should last all day. This last feature is not possible in Intel-based laptops, where, in practice, the battery lasts no more than 90 minutes. A modern laptop is used like a smartphone: unplugged from the outlet, carried around all day, and recharged while you have lunch or sleep.

So yes, you can find cheaper laptops, but they will have worse features than the minimum standard. They also use lower-quality components (camera, microphone, display) and outdated technology (storage, CPU).

Also on LinkedIn and Facebook.

Infra and RelEng Update – Week 13 2025

Posted by Fedora Community Blog on 2025-03-28 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 24 Mar – 28 Mar 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on Matrix.

The post Infra and RelEng Update – Week 13 2025 appeared first on Fedora Community Blog.

Contribute to Fedora 42 KDE, Virtualization, and Upgrade Test Days

Posted by Fedora Magazine on 2025-03-28 08:00:00 UTC

Fedora test days are events where anyone can help make certain that changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are three test periods occurring in the coming days:

  • Monday, March 31 through Monday, April 7 is to test the KDE Desktop and Apps
  • Wednesday, April 2 through Sunday, April 6 is to test upgrades
  • Saturday, April 5 through Monday, April 7 is to test Virtualization

Come and test with us to make Fedora 42 even better. Read more below on how to do it.

KDE Plasma and Apps

The KDE SIG is working on final integration for Fedora 42. Some of the app versions were recently released and will soon arrive in Fedora Linux 42. As a result, the KDE SIG and QA teams have organized a test week from Monday, March 31, 2025, through Monday, April 07, 2025. The wiki page contains links to the test images you’ll need to participate.

Upgrade test day

As we approach the Fedora Linux 42 release date, it’s time to test upgrades. This release has many changes, and it becomes essential that we test the graphical upgrade methods as well as the command-line methods.

This test period will run from Wednesday, April 2 through Sunday, April 6. It will test upgrading from a fully updated F40 or F41 to F42 for all architectures (x86_64, ARM, aarch64) and variants (WS, cloud, server, silverblue, IoT). See this wiki page for information and details. For this test period, we also want to test DNF5 Plugins before and after upgrade. Recently noted regressions resulted in a Blocker Bug. The DNF5 Plugin details are available here.
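
For the command-line path, the rough flow is the usual dnf system-upgrade sequence; this is only a sketch of an upgrade to Fedora 42 (the system-upgrade plugin must be installed, and the wiki page above has the authoritative steps):

$ sudo dnf upgrade --refresh
$ sudo dnf system-upgrade download --releasever=42
$ sudo dnf system-upgrade reboot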

Virtualization test day

This test period will run from Saturday, April 5 through Monday, April 7 and will test all forms of virtualization possible in Fedora 42. The test period will focus on testing Fedora Linux, or your favorite distro, inside a bare metal implementation of Fedora Linux running Boxes, KVM, VirtualBox and whatever you have. The test cases outline the general features of installing the OS and working with it. These cases are available on the results page.

How do test days work?

A test period is an event where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not done so in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is on the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.

🎲 PHP version 8.3.20RC1 and 8.4.6RC1

Posted by Remi Collet on 2025-03-28 06:06:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections (for parallel installation, the perfect solution for such tests) and as base packages.

RPMs of PHP version 8.4.6RC1 are available

  • as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.20RC1 are available

  • as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.2 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.4.4RC2 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.0-beta and EPEL-10.0
  • EL-9 packages are built using RHEL-9.5
  • EL-8 packages are built using RHEL-8.10
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.7 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC version is usually the same as the final version (no change accepted after RC, exception for security fix).
  • versions 8.3.20 and 8.4.6 are planned for April 10th, in 2 weeks.

Software Collections (php83, php84)

Base packages (php)

Updates and Reboots

Posted by Fedora Infrastructure Status on 2025-03-26 21:00:00 UTC

We will be applying updates to all our servers and rebooting into newer kernels. Services may be up or down during the outage window.

End of OpenID authentication in Fedora Account System

Posted by Fedora Magazine on 2025-03-26 16:20:42 UTC

The Fedora Infrastructure Team is announcing the end of OpenID authentication in the Fedora Account System (FAS). This will occur on 20th May 2025.

Why the change?

OpenID is being replaced by OpenID Connect (OIDC) across most of the modern web, and most of the Fedora infrastructure is already using OIDC as the default authentication method. OIDC offers better security by handling both authentication and authorization. It also allows us to have more control over services that are using the Fedora Account System (FAS) for authentication.

What will change for you?

With the End Of Life of OpenID we will switch to OIDC for everything and no longer support authentication with OpenID.

If your website or service is already using OIDC for authentication, nothing will change for you. If you are still using OpenID, open a ticket on the Fedora Infrastructure issue tracker and we will help you with the migration to OIDC.

For users who use FAS as an authentication option, there should be no change at all.

How to check if a service you maintain is using OpenID?

You may quickly check if your service is using OpenID for FAS authentication by looking at where you are redirected when logging in with FAS.

If you are redirected to https://id.fedoraproject.org/openidc/Authorization you are already using OIDC and you can just ignore this announcement.

If you are being redirected to https://id.fedoraproject.org/openid you are still using the OpenID authentication method. You should open a ticket on Fedora Infrastructure issue tracker so we can help you with migration.
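
If you would rather check from a terminal than a browser, one rough way is to look at the Location header of the login redirect; the URL below is a placeholder for whatever address starts the FAS login on your service:

$ curl -sI 'https://your.service.example/login' | grep -i '^location:'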

What will happen now?

We will be reaching out directly to services we identify as using OpenID. But since we don’t have control over OpenID authentication, we can’t identify everyone.

If you are interested in following this work feel free to watch this ticket.

What hardware, software, and cloud services do we use?

Posted by Vedran Miletić on 2025-03-26 16:19:38 UTC




Photo source: Patrik Kernstock (@pkernstock) | Unsplash


Our everyday scientific and educational work relies heavily on hardware, software, and, in modern times, cloud services. The equipment that we will mention below is specific to our group; common services used by university and/or faculty employees will not be specifically mentioned here.

Publishing (Material for) MkDocs website to GitHub Pages using custom Actions workflow

Posted by Vedran Miletić on 2025-03-26 16:19:38 UTC




Photo source: Roman Synkevych (@synkevych) | Unsplash


As you can probably see, this website is built using the Material theme for MkDocs, which we have been happily using for over a year, after using Sphinx for many years prior to that. GitHub Pages offers built-in support for Jekyll, but not for MkDocs, which therefore requires manually building and deploying our website. However, it automates many other things, including HTTPS certificate provisioning on our domain via Let's Encrypt.

There are several somewhat related approaches using GitHub Actions for automating the deployment of MkDocs-generated sites, usually with the Material theme, to GitHub Pages. These guides are not only found on blogs written by enthusiasts; the official Getting started section of the Material for MkDocs documentation describes the usage of GitHub Actions for deployment and provides a generic YAML file for that purpose.
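
For comparison, the manual route that such a workflow automates boils down to something like the following sketch, assuming the Material theme is the only extra dependency:

$ pip install mkdocs-material
$ mkdocs build
$ mkdocs gh-deploy --force

The gh-deploy command builds the site and pushes it to the gh-pages branch, which is essentially what a custom Actions workflow does on every push.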

Obituary of my favorite CD shop: Stereo

Posted by Peter Czanik on 2025-03-26 06:53:23 UTC

Last December, the CD shop where I bought most of my collection closed its doors for good. I had seen it coming — the owner had been gradually winding down the business in preparation for retirement — but after nearly 30 years of shopping there, it was still a tough moment.

Stereo logo

This logo belongs to Periferic Records - Stereo Kft.. Back in the nineties, during my university years, I used to look for this logo at concerts, always hoping to spot a bearded man selling an incredible selection of CDs. Imagine my surprise when, in 2002, I attended a concert and discovered that the organizer was none other than that same bearded man — who also happened to be one of my second cousins!

From that moment on, I became a regular at the shop. The owner was a publisher of some of my favorite music, including Hungarian progrock and piano albums. Some standout names: After Crying, Vedres Csaba, and Solaris. While the shop specialized in progrock — with a selection unlike anywhere else — it also offered a wide variety of other genres.

When I received my first big paycheck, I went straight to the store and bought dozens of CDs. Today, streaming services like TIDAL and Spotify have recommendation engines, but back then, nothing could beat the personalized recommendations from the shop’s staff. More than once, I walked out with a free CD as a bonus, one of which became an all-time favorite: Townscream – Nagyvárosi Ikonok.

Unlike many music shops that play background music on low-quality systems, Stereo had StandArt speakers from Heed Audio. These speakers, almost as old as the shop itself, created an immersive listening experience. Though I often rushed in just to pick up an order, on the rare occasion that I had time, I would linger to listen — sometimes discovering new music to take home.

The website still exists, and you can get an ever shorter list of available CD titles by e-mail. In December, I spent most of my free time going through their list of albums, listening to samples on TIDAL and YouTube — nearly 1,500 albums in total. Through this process, I found some rare gems, including one CD I bought purely for its intriguing title: God-Sex-Money. Well, actually the description, “Recommended for Wakeman/Emerson fans,” sealed the deal :-)

Even now, whenever I’m near the old shop, I instinctively start walking toward it — only to remember that an important part of my life is gone forever. But it lives on in my CD collection and my memories.

Open Source Survival Guide

Posted by Chris Short on 2025-03-26 04:00:00 UTC
Open Source Survival Guide provides practical rules for navigating the open source ecosystem. Learn how to balance community collaboration with business goals, contribute effectively, build trust, and maintain your sanity in the complex world of open source software development.

Nightly arm64 syslog-ng container builds are now available

Posted by Peter Czanik on 2025-03-25 14:00:58 UTC

Recently we enabled nightly syslog-ng builds and container builds for arm64. It means that from now on, you can run the latest syslog-ng on 64-bit ARM platforms. For this test, I used a Raspberry Pi 3 running the latest Raspberry Pi OS. As I use Podman everywhere else (I am an openSUSE / Fedora guy), I also installed it here for container management.

Read more at https://www.syslog-ng.com/community/b/blog/posts/nightly-arm64-syslog-ng-container-builds-are-now-available

syslog-ng logo

Who pays the cost of progress in software?

Posted by Jonathan McDowell on 2025-03-24 21:11:54 UTC

I am told, by friends who have spent time at Google, about the reason Google Reader finally disappeared. Apparently it had become a 20% Project for those who still cared about it internally, and there was some major change happening to one of its upstream dependencies that was either going to cause a significant amount of work rearchitecting Reader to cope, or create additional ongoing maintenance burden. It was no longer viable to support it as a side project, so it had to go. This was a consequence of an internal culture at Google where service owners are able to make changes that can break downstream users, and the downstream users are the ones who have to adapt.

My experience at Meta goes the other way. If you own a service or other dependency and you want to make a change that will break things for the users, it’s on you to do the migration, or at the very least provide significant assistance to those who own the code. You don’t just get to drop your new release and expect others to clean up; doing that tends to lead to changes being reverted. The culture flows the other way; if you break it, you fix it (nothing is someone else’s problem).

There are pluses and minuses to both approaches. Users having to drive the changes to things they own stops them from blocking progress. Service/code owners having to drive the changes avoids the situation where a widely used component drops a new release that causes a lot of high priority work for folk in order to adapt.

I started thinking about this in the context of Debian a while back, and a few incidents since have resulted in my feeling that we’re closer to the Google model than the Meta model. Anyone can upload a new version of their package to unstable, and that might end up breaking all the users of it. It’s not quite as extreme as rolling out a new service, because it’s unstable that gets affected (the clue is in the name, I really wish more people would realise that), but it can still result in release critical bugs for lots of other Debian contributors.

A good example of this is toolchain changes. Major updates to GCC and friends regularly result in FTBFS issues in lots of packages. Now in this instance the maintainer is usually diligent about a heads up before the default changes, but it’s still a whole bunch of work for other maintainers to adapt (see the list of FTBFS bugs for GCC 15 for instance - these are important, but not serious yet). Worse is when a dependency changes and its maintainer hasn’t managed to catch everyone who might be affected, so by the time it’s discovered it’s release critical, because at least one package no longer builds in unstable.

Commercial organisations try to avoid this with a decent CI/CD setup that either vendors all dependencies, or tracks changes to them and tries rebuilds before allowing things to land. This is one of the instances where a monorepo can really shine; if everything you need is in there, it’s easier to track the interconnections between different components. Debian doesn’t have a CI/CD system that runs for every upload, allowing us to track exact causes of regressions. Instead we have Lucas, who does a tremendous job of running archive wide rebuilds to make sure we can still build everything. Unfortunately that means I am often unfairly grumpy at him; my heart sinks when I see a bug come in with his name attached, because it often means one of my packages has a new RC bug where I’m going to have to figure out what changed elsewhere to cause it. However he’s just (very usefully) surfacing an issue someone else created, rather than actually being the cause of the problem.

I don’t know if I have a point to this post. I think it’s probably that I wish folk in Free Software would try and be mindful of the incompatible changes they might be introducing, and the toil they create for other volunteer developers, often not directly visible to the person making the change. The approach taken by the Debian toolchain maintainers strikes me as a good balance; they do a bunch of work up front to try and flag all the places that might need to make changes, far enough in advance of the breaking change actually landing. However they don’t then allow a tardy developer to block progress.

Linux Fixed Release vs. Rolling Release Distributions: Which One is Right for You?

Posted by Piju 9M2PJU on 2025-03-23 17:23:29 UTC

Linux distributions come in two main release models: fixed release and rolling release. Each has its own advantages and drawbacks, making the choice between them dependent on user needs, preferences, and use cases. In this article, we’ll dive into the history of Linux releases, explain both models in detail, provide examples of each, and help you determine which one suits you best.


A Brief History of Linux Releases

The Linux operating system was first developed by Linus Torvalds in 1991, and soon after, various distributions (distros) began emerging to make Linux more accessible to users. Early distributions followed a fixed release cycle, similar to traditional commercial software, providing stable versions with long-term support.

As Linux usage grew, developers and power users sought an alternative release model that allowed them to receive continuous updates without waiting for major version upgrades. This led to the birth of the rolling release model, which delivers updates as soon as they are available, without the need for reinstalling or upgrading to a new version.


What is a Fixed Release Distribution?

A fixed release distribution follows a structured development cycle, with periodic major releases that bundle all updates, improvements, and new features into one package. These releases are well-tested before being distributed to users.

Examples of Fixed Release Distros:

  • Ubuntu – Releases a new version every six months, with Long-Term Support (LTS) versions every two years.
  • Debian – Has three main branches: Stable (fixed release), Testing, and Unstable.
  • Fedora – Releases a new version approximately every six months.
  • openSUSE Leap – A stable release that is synchronized with SUSE Linux Enterprise.
  • Linux Mint – Based on Ubuntu LTS releases, focusing on stability and user-friendliness.

Pros of Fixed Release Distros:

✅ Stable and reliable: Thoroughly tested before release.

✅ Long-term support (LTS versions): Security updates for many years.

✅ Predictable update cycles: Users know when a new version will be available.

✅ Ideal for production environments and enterprises.

Cons of Fixed Release Distros:

❌ Software can become outdated between releases.

❌ Requires major upgrades to move to a new version.

❌ May lack the latest features and improvements available in newer software.


What is a Rolling Release Distribution?

A rolling release distribution continuously updates packages as soon as they are available, rather than waiting for a scheduled release. This means that the operating system is always up to date without needing periodic major upgrades.

Examples of Rolling Release Distros:

  • Arch Linux – A minimalist and highly customizable distribution.
  • openSUSE Tumbleweed – A rolling release counterpart to openSUSE Leap.
  • Gentoo Linux – Source-based rolling release with maximum flexibility.
  • EndeavourOS – A user-friendly Arch-based distro.
  • Manjaro – Based on Arch but with added stability and ease of use.

Pros of Rolling Release Distros:

✅ Always up to date: No need to wait for major releases.

✅ Access to the latest software and kernel versions.

✅ No system reinstallation required to upgrade.

✅ Ideal for developers and enthusiasts who want cutting-edge software.

Cons of Rolling Release Distros:

❌ Can be less stable due to frequent updates.

❌ Updates may occasionally break the system if not managed carefully.

❌ Requires more maintenance and troubleshooting knowledge.


Fixed vs. Rolling Release: Which One Should You Choose?

Choosing between a fixed release and a rolling release distribution depends on your needs:

Criteria | Fixed Release | Rolling Release
Stability | More stable | Less stable (but up to date)
Software updates | Periodic major updates | Continuous updates
Ease of use | Easier, especially for beginners | Requires more maintenance
Security | Long-term security patches | Security updates arrive faster
Ideal for | Enterprises, production environments, beginners | Developers, power users, enthusiasts

If you prefer a stable and predictable system with fewer maintenance requirements, a fixed release distribution like Ubuntu LTS, Debian Stable, or Linux Mint is a great choice.

If you want cutting-edge software, continuous updates, and don’t mind occasional troubleshooting, a rolling release distribution like Arch Linux, Manjaro, or openSUSE Tumbleweed will suit you better.


Both fixed and rolling release distributions have their place in the Linux ecosystem. Understanding their differences allows you to make an informed choice based on your workflow, experience level, and expectations. Whether you prioritize stability or cutting-edge software, there’s a Linux distribution that fits your needs.

The post Linux Fixed Release vs. Rolling Release Distributions: Which One is Right for You? appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

Mid Late March infra bits 2025

Posted by Kevin Fenzi on 2025-03-22 17:14:02 UTC
Scrye into the crystal ball

Fedora 42 Beta released

Fedora 42 Beta was released on Tuesday. Thanks to everyone in the Fedora community who worked so hard on it. It looks to be a pretty nice release, with lots of things in it and working pretty reasonably already. Do take it for a spin if you like: https://fedoramagazine.org/announcing-fedora-linux-42-beta/

Of course, with the Beta out the door, our infrastructure freeze is lifted, and so I merged 11 PRs that were waiting for that on Wednesday. Also, next week we are going to get in a mass update/reboot cycle before the final freeze the week after.

Ansible galaxy / collections fun

One of the things I wanted to clean up was the ansible collections that were installed on our control host. We have a number that are installed via rpm (from EPEL). Those are fine: we know they are there, what version, etc. Then, we have some that are installed via ansible. We have a requirements.txt file, and running the playbook on the control host installs those exact versions of roles/collections from ansible galaxy. Finally, we had a few collections installed manually. I wanted to get those moved into ansible so we would always know what we have installed and what version it was. So, simple right? Just put them in requirements.txt. I added them in there and... it said they were just not found.

The problem turned out to be that we had roles in there, but no collections anymore so I had not added a 'collections:' section, and it was trying to find 'roles' with those collection names. The error "not found" was 100% right, but it took me a few to realize why they were not found. :)
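
In other words, the fix was just adding the missing section. A minimal sketch of the requirements file layout, with placeholder role, collection and version names:

roles:
  - name: some.role
    version: 1.2.3

collections:
  - name: community.general
    version: 9.0.0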

More A.I. Scrapers

AI scrapers hitting open source projects is getting a lot of buzz. I hope that some of these scraper folks will realize it's counterproductive to scrape things at a rate that makes them not work, but I'm not holding my breath.

We ran into some very heavy traffic, and I ended up blocking Brazil's access to pagure.io for a while. We also added some CPUs and adjusted things to handle higher load. So far we are handling things ok now, and I removed the Brazil block. But no telling when they will be back. We may well have to look at something like anubis, but I fear the scrapers would just adjust to not be something it can catch. Time will tell.

That's it for this week, folks...

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114207270082200302

GNOME 48 Core Apps Update

Posted by Michael Catanzaro on 2025-03-21 16:28:01 UTC

It has been a year and a half since my previous GNOME core apps update. Last time, for GNOME 45, GNOME Photos was removed from GNOME core without replacement, and Loupe and Snapshot (user-facing names: Image Viewer and Camera) entered, replacing Eye of GNOME and Cheese, respectively. There were no core app changes in GNOME 46 or 47.

Now for GNOME 48, Decibels (Audio Player) enters GNOME core. Decibels is intended to close a longstanding flaw in GNOME’s core app set: ever since Totem (Videos) hid its support for opening audio files, there has been no easy way to open an audio file using GNOME core apps. Totem could technically still do so, but you would have to know to attempt it manually. Decibels fixes this problem. Decibels is a simple app that will play your audio file and do nothing else, so it will complement GNOME Music, the music library application. Decibels is maintained by Shema Angelo Verlain and David Keller (thank you!) and is notably the only GNOME core app that is written in TypeScript.

Looking to the future, the GNOME Incubator project tracks future core apps to ensure there is sufficient consensus among GNOME distributors before an app enters core. Currently Papers (future Document Viewer, replacing Evince) and Showtime (future Videos or possibly Video Player, replacing Totem) are still incubating. Applications in Incubator are not yet approved to enter core, so it’s not a done deal yet, but I would expect to see these apps enter core sooner rather than later, hopefully for GNOME 49. Now is the right time for GNOME distributors to provide feedback on these applications: please don’t delay!

On a personal note, I have recently left the GNOME release team to reduce my workload, so I no longer have any direct role in managing the GNOME core apps or Incubation process. But I think it makes sense to continue my tradition of reporting on core app changes anyway!

Infra and RelEng Update – Week 12

Posted by Fedora Community Blog on 2025-03-21 13:41:11 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 17 Mar – 21 Mar 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on Matrix.

The post Infra and RelEng Update – Week 12 appeared first on Fedora Community Blog.

Contribute at the Fedora 42 CoreOS Test Week

Posted by Fedora Magazine on 2025-03-21 08:00:00 UTC

The Fedora 42 CoreOS Test Week focuses on testing FCOS based on Fedora 42. The FCOS next stream has been rebased on Fedora 42 content. This will be coming soon to testing and stable. To prepare for the content being promoted to other streams, the Fedora CoreOS and QA teams have organized test days from Monday, 24 March through Friday, 28 March. Refer to the wiki page for links to the test cases and materials you’ll need to participate. The FCOS and QA team will meet and communicate with the community asynchronously over multiple matrix/element channels. The announcement has other details covered!

How does a test day work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

End of OpenID authentication in Fedora Account System

Posted by Fedora Community Blog on 2025-03-20 10:00:00 UTC

At the latest Fedora Infrastructure weekly meeting, we decided on a date for the OpenID authentication sunset. The date is 20th May 2025.

Why the change?

OpenID is being replaced by OpenID Connect (OIDC) across most of the modern web, and most of the Fedora infrastructure is already using OIDC as the default authentication method. OIDC offers us better security by handling both authentication and authorization. It also allows us to have more control over services that are using the Fedora Account System (FAS) for authentication.

What will change for you?

With the End Of Life of OpenID, we will switch to OIDC for everything and no longer support authentication with OpenID. If your website or service is already using OIDC for authentication, nothing will change for you. If you are still using OpenID, open a ticket on the Fedora Infrastructure issue tracker and we will help you with the migration to OIDC. For users who use FAS as an authentication option, there should be no change at all.

What will happen now?

We will be reaching out directly to services we have identified as using OpenID, but as we don’t have control over OpenID authentication, we can’t identify everyone.

If you are interested in following this work feel free to watch this ticket.

The post End of OpenID authentication in Fedora Account System appeared first on Fedora Community Blog.