/rss20.xml">

Fedora People

GNOME is Sponsoring an Outreachy Internship Project for GNOME Crosswords!

Posted by Felipe Borges on 2025-05-12 08:40:17 UTC

We are excited to announce that the GNOME Foundation is sponsoring an Outreachy internship project for the June 2025 to August 2025 internship round!

Outreachy provides internships to people subject to systemic bias and impacted by underrepresentation in the technical industry where they are living.

The intern will work with mentors Jonathan Blandford, Federico Mena Quintero, and Tanmay Patil on the project Add Wordlist Scoring to the GNOME Crosswords Editor.

The intern’s blog will soon be added to Planet GNOME, where you can follow their project updates and learn more about them. Stay tuned!

Render a Guitar Pro Score in Real Time

Posted by Fedora Magazine on 2025-05-12 08:00:21 UTC

We will use Tuxguitar to render the audio of a Guitar Pro score [5]. Guitar Pro scores are files containing a complete transcribed band score (guitars, bass, drums, synths and more).

Introduction

Tuxguitar is a quite powerful application written in a mixture of Java and C. It is able to render a score in real time, either via Fluidsynth [6] or via pure MIDI. Development of Tuxguitar started in 2008 on SourceForge and, after a halt in 2022, the project restarted on GitHub, where it is still actively developed.

The goal of this article is to render a score via Tuxguitar and various other applications connected to it, through Jack or Pipewire-Jack. The score used throughout this article is The Pursuit Of Vikings by the band Amon Amarth [7]. It has two guitars, a bass and a drum track.

First step, configuration

For this audio rendering, we will use some tools from the Audinux Fedora COPR repository [11] [12]. The COPR repository can be enabled via:

$ dnf copr enable ycollet/audinux

You may activate real-time behavior via the GRUB menu by adding the following options to the kernel you want to boot (hit 'e' in GRUB on the entry you want to boot):

preempt=full threadirqs nopti

Then hit F10 to start the kernel.
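After booting, you can verify that the options were applied:

$ cat /proc/cmdline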

Using a real-time kernel is not always a requirement for audio; it really depends on how you use it. For standard audio rendering, a standard kernel is usually enough. However, if you want real-time processing of the audio of a guitar connected to a USB audio interface via Guitarix, for example, then a real-time kernel is required, especially if you want really low latency.

Last step: ensure that /etc/security/limits.d/25-pw-rlimits.conf has the following settings:

@pipewire   - rtprio  80
@pipewire   - nice    -15
@pipewire   - memlock unlimited

Add your username to the pipewire group and disconnect / reconnect to your session.
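One common way to do this (assuming your current login user; the group name comes from the limits file above):

$ sudo usermod -a -G pipewire $USER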

These settings are required to allow audio processes to run at high priority, avoiding xruns (audible cracks caused by a buffer that is not filled fast enough).

On Pipewire, you can now adjust the global audio latency via the following command:

$ pw-metadata -n settings 0 clock.force-quantum 256
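With the 44.1 kHz sample rate set below, a quantum of 256 frames corresponds to roughly 5.8 ms of latency (256 / 44100 ≈ 0.0058 s).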

The global sample rate can be adjusted via the following command:

$ pw-metadata -n settings 0 clock.force-rate 44100
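To check the current values, or to undo a forced setting, something like the following should work (setting a forced value back to 0 releases it):

$ pw-metadata -n settings
$ pw-metadata -n settings 0 clock.force-quantum 0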

Using Tuxguitar

The first step is to activate the Fluidsynth plugin of Tuxguitar.

Fluidsynth [6] is a software MIDI synthesizer based on the SoundFont 2 specification [28]. It can read sound fonts (sets of samples used to mimic instruments) either in SF2 format (samples based on WAV files) or SF3 format (samples based on OGG files) [29]. Fluidsynth comes as a library and an executable tool, but is often integrated into applications like Tuxguitar or Qsynth (a GUI to control Fluidsynth).

To enable the Fluidsynth plugin in Tuxguitar, open Tools → Plugins and check Fluidsynth output plugin. After that, select Fluidsynth output plugin and click Configure. You will then have access to three tabs (Soundfonts, Audio, and Synthesizer) which allow you to select a better sound font and control its audio rendering.


Soundfont selection

Audio configuration

Synthesizer tuning

The Musical Artifacts website [1] hosts a lot of interesting artifacts for audio. There you can find the Xioad Bank sound font, which has a lot of interesting sounds for rock and metal.

Once the sound font is set up in Tuxguitar, you need to select it as the sound source via Tools → Settings → Sound.

First select a MIDI Sequencer (I use the Jack one) and the MIDI Port (TG Fluidsynth – The Xioad Bank).


Sound configuration dialog

After those selections, you can open the score [4], [7].


The main Tuxguitar window

Connecting Tuxguitar to Audio Output

Under Jack or Pipewire-Jack, we will need to connect Tuxguitar to the audio output of the computer. This can be done via Ray-Session [10] or qpwgraph [30].

Ray Session is an audio session manager. It can load a user-defined set of applications, remember the audio connections made between applications, and rewire those connections automatically. With such a tool, you can restart and rewire a complex session in seconds.


Ray Session connection window

qpwgraph is a Pipewire graph manager and can be used to perform connections between various applications.

In Ray Session you will need to draw wires from the fluidsynth block to the audio output (a block with blue connectors). When that is complete, select Play in Tuxguitar to render the score.

Fine tuning the drum sound

Some sound fonts provide various drum kits and, via Tuxguitar, it is relatively easy to select a new kit.

First, inspect the sound font using Polyphone [8], [9]. Polyphone is a sound font editor: you can easily create new sound fonts with it by assembling samples into instruments and presets.

Polyphone sections menu

List of available ensembles for Xioad sound font


In the Presets subsection, you can see all the instruments available in the sound font. Each instrument is identified by an index such as 006:125, where the first number is the bank (6) and the second is the instrument (125). In the General MIDI specification [21], bank 128 is dedicated to drums. As you can see, the Xioad sound font gives you access to 15 drum kits. We will now switch the default drum kit used by Tuxguitar from 128:001 – Custom Standard to 128:016 – Custom Power.


The instrument list of the current score

Switch the drum kit by selecting View → Show Instruments and then, in the DrumKit 1 area, open the drop-down menu and select ‘Program #16‘ to get the 128:016 – Custom Power drum kit. The ‘Program #’ number corresponds to the instrument number within bank 128.

You can now hit Play in Tuxguitar and hear the difference in sound between the default drum kit and the one just selected.

Using Tuxguitar and Hydrogen / QSynth

We will now set up Tuxguitar as a MIDI sequencer using:

  • Qsynth for the audio rendering of everything but the drums [15], [16]
  • Hydrogen for the drum audio rendering [13], [14]

To turn Tuxguitar into a MIDI sequencer, perform the following:

  • via Tools → Plugins, ensure that the Jack Audio Connection Kit output plugin is checked;
  • via Tools → Settings → Sound, select Jack Midi Port in the MIDI Port field.

Tuxguitar will no longer generate sound itself; instead, it will generate MIDI events.

Start Qsynth and load the Xioad sound font into Qsynth via the Setup button.

For Qsynth, we need to take the following steps:

  • in the MIDI tab, select Jack as MIDI Driver and be sure to check ‘Enable MIDI Input’. Rename the ‘Engine Name’ with an informative name of your choice;
  • in the Audio tab, select Jack as Audio Driver and rename the ‘JACK Client Name ID’;
  • in the Soundfonts tab, load the sound font you want (the Xioad sound font in our example) and remove the default one.

MIDI configuration for Qsynth

Qsynth Audio configuration

Qsynth sound font selection

Now open View → Show Instruments and, in the Drum Kit area, select the wrench tool. A dialog shows up in which you select Exclusive Jack Port. This adds a dedicated output for the drums, and a new output appears that gathers all the other instruments (the two guitars and the bass).


MIDI exclusive port selection

Via Ray Session, you will have to create some audio connections (blue ports / blue wires) and some MIDI connections (pink ports / pink wires).


Ray Session connection view

We have now connected all the guitars and the bass to Qsynth and need to focus on the drums.

Now start Hydrogen

Hydrogen [13], [14] is a standalone drum sequencer with a lot of interesting drum kits. If you want to use Hydrogen as an audio plugin, some plugins exist and will be presented later in this article.


Main window for Hydrogen

Audio configuration for Hydrogen

Hydrogen MIDI configuration

Under Ray Session, Hydrogen has two boxes: one for audio output and one for MIDI input/output.


Ray Session connection view

You can now install a Hydrogen drum kit from the annex and select it via View → Instrument rack; then click on Sound Library in the instrument view to see all the installed drum kits.

Right click on a drum kit and select Load.

After that you can render the score via Tuxguitar → Play.

If you want to use Hydrogen as an LV2 plugin, you can use the Drmr plugin [18] or select other plugins available in Audinux.

Using Tuxguitar and Hydrogen / DrumGizmo

DrumGizmo is a professional drum sequencer [16]. It comes as an executable or as an LV2 plugin; we will use the executable version. Once it is installed, you will need to install a drum kit from DrumGizmo [17].

These drum kits are professionally recorded. From DrumGizmo [17], the description of the recording setup of the DRS drum kit: each microphone is connected to its own channel when loading the kit in DrumGizmo, for a total of 13 channels. Remember to pan the relevant channels to get a better stereo effect.


DRS drum kits recording setup:
  • Channel 1: Ambiance left
  • Channel 2: Ambiance right
  • Channel 3: Kick drum back
  • Channel 4: Kick drum front
  • Channel 5: Hihat
  • Channel 6: Overhead left
  • Channel 7: Overhead right
  • Channel 8: Ride cymbal
  • Channel 9: Snare drum bottom
  • Channel 10: Snare drum top
  • Channel 11: Tom1
  • Channel 12: Tom2 (Floor tom)
  • Channel 13: Tom3 (Floor tom)

Using the DRS drum kit

For the demonstration, we will use the DRS drum kit.

The command line used to start the DrumGizmo sequencer (run in the DRS kit directory):

$ drumgizmo -i jackmidi -I midimap=Midimap_minimal.xml -o jackaudio DRSKit_minimal.xml

There are several MIDI map files available for DrumGizmo:


Midimap_basic.xml
Midimap_full.xml
Midimap_minimal.xml
Midimap_no_whiskers.xml
Midimap_whiskers_only.xml

Each file activates certain options in the drum rendering. We will use the minimal MIDI map.

The minimal MIDI map has the following MIDI events:

MIDI note: 44 -> Hihat_foot sample
MIDI note: 37 -> Snare_rim sample
MIDI note: 49 -> Crash_left_shank sample
MIDI note: 57 -> Crash_right_shank sample
MIDI note: 42 -> Hihat_closed_shank sample
MIDI note: 46 -> Hihat_semi_open sample
MIDI note: 35 -> Kdrum_without_contact sample
MIDI note: 36 -> Kdrum_with_contact sample
MIDI note: 51 -> Ride_tip sample
MIDI note: 53 -> Ride_shank_bell sample
MIDI note: 38 -> Snare sample
MIDI note: 40 -> Snare_rest sample
MIDI note: 47 -> Tom1 sample
MIDI note: 43 -> Tom2 sample
MIDI note: 41 -> Tom3 sample

The first number is the MIDI note that triggers the sample: note 44 triggers the Hihat_foot sample.

The full MIDI map has the following MIDI events:

MIDI note 54 -> Crash_left_tip sample
MIDI note 60 -> Crash_left_whisker sample
MIDI note 55 -> Crash_right_tip sample
MIDI note 62 -> Crash_right_whisker sample
MIDI note 56 -> Hihat_closed sample
MIDI note 64 -> Hihat_closed_whisker sample
MIDI note 44 -> Hihat_foot sample
MIDI note 58 -> Hihat_open sample
MIDI note 61 -> Hihat_open_tip sample
MIDI note 65 -> Hihat_open_whisker sample
MIDI note 63 -> Ride_rest sample
MIDI note 66 -> Ride_shank sample
MIDI note 68 -> Ride_tip_bell sample
MIDI note 70 -> Ride_tip_bell_chain sample
MIDI note 73 -> Ride_tip_chain sample
MIDI note 67 -> Ride_whisker sample
MIDI note 69 -> Snare_circle_whisker sample
MIDI note 37 -> Snare_rim sample
MIDI note 71 -> Snare_whisker sample
MIDI note 72 -> Tom1_whisker sample
MIDI note 74 -> Tom2_whisker sample
MIDI note 76 -> Tom3_whisker sample
MIDI note 49 -> Crash_left_shank sample
MIDI note 57 -> Crash_right_shank sample
MIDI note 42 -> Hihat_closed_shank sample
MIDI note 46 -> Hihat_semi_open sample
MIDI note 35 -> Kdrum_without_contact sample
MIDI note 36 -> Kdrum_with_contact sample
MIDI note 51 -> Ride_tip sample
MIDI note 53 -> Ride_shank_bell sample
MIDI note 38 -> Snare sample
MIDI note 40 -> Snare_rest sample
MIDI note 47 -> Tom1 sample
MIDI note 43 -> Tom2 sample
MIDI note 41 -> Tom3 sample

Various options are available in DrumGizmo to enhance the audio rendering, such as:

  • -t, --timing-humanizer: Enable moving of notes in time. NOTE: adds latency to the output, so do not use this with a real-time drum kit;
  • -T, --timing-humanizerparms <x>: Timing humanizer options;
  • -x, --velocity-humanizer: Enables adapting input velocities to make the sound more realistic;
  • -X, --velocity-humanizerparms <x>: Velocity humanizer options.

We also select Jack for audio output and Jack for MIDI input.

Connecting all the Inputs

Now we have to connect all the audio and MIDI inputs. We have the following connections in Ray Session:


Tuxguitar, Qsynth and DrumGizmo connections on Ray Session

As you can see, the DrumGizmo drums come with many outputs.

There is now a huge difference in the rendering of the drums. We had a good result with a specific sound font, a slightly better one with Hydrogen, but now we have a nearly perfect drum rendering.

Some other solutions for the drum

We will now use some plugins as drum sequencers.

Audinux Plugins

The following plugins are available in Audinux:

  • drumkv1 : an old-school drum-kit sampler
  • drumlabooh : LV2/VSTi drum machine that can use Hydrogen, SFZ, and other drumkit formats
  • drumrox : a Hydrogen-compatible drum LV2 plugin
  • lv2-avldrums-x42-plugin : simple Drum Sample Player LV2 Plugin
  • ChowKick : kick synthesizer based on old-school drum machine circuits
  • boomer : a drum synth
  • drmr : a drum LV2 plugin
  • geonkick : a drum software synthesizer
  • kickmess : a kick drum synthesizer plugin
  • lv2-fabla : an LV2 drum sequencer
  • onetrick-cryptid : an FM drum synth with the cold clanging heart of a DX7 in the fearsome frame of an 808
  • onetrick-simian2 : an open source drum synth inspired by hexagonal classics like the Simmons SDS-V
  • onetrick-urchin : a hybrid drum synth that models the gritty lo-fi sound of beats from vintage records without sampling.
  • sickbeatbetty : an open source MIDI drum machine / generator VST and standalone application
  • stegosaurus : drum synthesizer

Using Drumrox

We will perform our test using the Drumrox LV2 plugin. To be able to use this sequencer, we will need to use the Carla rack.

Don’t forget to install either a Hydrogen drum kit or a Drumrox kit via dnf before using this plugin (see the sketch below).

We can install either the Carla or the Carla-mao RPM package; the latter provides tools to allow loading Windows VST plugins via Wine. We also need to install the Drumrox package.
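As a sketch (the package names come from the text above; use dnf search to confirm the exact names in your repositories):

$ dnf install carla drumrox
$ dnf search drumkit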

Once the carla-rack executable is started, we load the Drumrox plugin via the Add Plugin button (you may have to click the Refresh button to update the list of installed plugins).


The carla-rack main window with the Drumrox plugin loaded

Investigating Drumrox

Now display the Drumrox GUI by clicking the nut button near the ON/OFF button, and select an installed drum kit in the Kit field. This plugin can load a Hydrogen drum kit, but it can also load a Drumrox drum kit.


The drumrox LV2 plugin with the GMRockKit drum kit loaded

The Drumrox plugin comes in two flavors:

  • Drumrox : a plugin with 2 output channels
  • Drumrox-Multi : a plugin with 36 output channels

We have the following audio/MIDI connections under Ray Session:


Carla / Qsynth / Tuxguitar audio / MIDI connections

Using a sound font by instrument

First, we download some SF2 files from the Musical Artifacts website:

  1. Overdriven Guitar [19] as an SF2 guitar
  2. Rock Bass [20] as an SF2 bass

We will need to launch these applications:

  • 4 x carla-rack. One will host the Drumrox plugin for the drums, and the other 3 will each host a MIDI Event Filter to filter out the bank change events sent by Tuxguitar, which would reset the channel settings we made in Qsynth;
  • Qsynth, with one Fluidsynth engine per instrument;
  • Tuxguitar.

We will need these plugins:

  • Drumrox : the drum plugin compatible with Hydrogen drum kits;
  • lv2-x42-plugins [26], [27] : a set of various plugins dedicated to MIDI management; from this set, we will use MIDI Event Filter.

Ray Session view of the connections made between all the applications

An Issue with Tuxguitar … and a workaround

The problem with Tuxguitar: each time we hit the play button, a “bank change” MIDI event is sent, and this event erases all the channel assignment setup made in Qsynth. To avoid this, we use the MIDI Event Filter plugin to filter out the bank change event. The content of the MIDI Event Filter in Carla rack is:


Carla Rack view

MIDI Event Filter settings

For Qsynth, we need to take the following steps:

  • in the MIDI tab:
    • rename the Engine Name (Guitar1 here);
    • select ‘jack’ as MIDI Driver;
    • uncheck Auto Connect MIDI Inputs.
  • in the Audio tab:
    • select ‘jack’ as Audio Driver;
    • rename the Jack Client Name ID as ‘Guitar1’;
    • uncheck Auto Connect Jack Outputs.
  • in the Soundfonts tab:
    • load the sound font we want;
    • remove the default sound font.

We need to repeat these steps for each instrument we want to add.


MIDI settings for Qsynth

Audio settings Qsynth

Soundfonts settings for Qsynth

Last step: setting the channel assignment for each instrument.

Select instruments by channel

Depending on the sound font you have loaded, you will need to select instruments by channel.

When we selected ‘Exclusive Jack Port’ in Tuxguitar, the MIDI program numbering switched from General MIDI (0 to 127) to only two channels (one for the instrument and one for the instrument effects). In the Qsynth channel assignment GUI, we need to map these two Tuxguitar channels to the bank of the requested instrument (29 for the guitar sound font used in this example).


Guitar Qsynth channel assignment

Bass Qsynth channel assignment

Now, in Tuxguitar, you need to select the Jack MIDI Port in Tools → Settings → Sound → Port MIDI. Open the Instrument view (View → Show Instruments) and click the wrench icon for each guitar and the bass to check Exclusive Jack Port. You will have to connect every box in Ray Session as in the figure above.

The audio rendering will depend on the SF2/SF3 sound fonts you loaded, but with such a setup, everything is possible.

Conclusion

One of the main strengths of making music under Linux is the ability to connect several applications together and have the sound rendered in real time with low latency.

With a tool like Ray Session, managing complex application connections is really easy. This tool can start a set of applications and reconnect them all very quickly.


Ray Session main window with a set of applications launched

The ergonomics and efficiency of audio applications have greatly improved and will continue to improve in the coming years. Several pro audio tools are now available on Linux, such as Ardour [22] / Mixbus [23], Bitwig [24], Renoise [25] and many others. This allows us to use audio under Linux the way people use audio on Apple and Windows systems.

So now it’s time to make some music on Linux. And be careful: Linux can be really addictive.

Links

[17] – https://www.drumgizmo.org/wiki/doku.php?id=kits

Loadouts For Genshin Impact v0.1.8 Released

Posted by Akashdeep Dhar on 2025-05-10 18:30:33 UTC

Hello travelers!

Loadouts for Genshin Impact v0.1.8 is OUT NOW with the addition of support for recently released characters like Escoffier and Ifa and for recently released weapons like Symphonist of Scents and Sequence of Solitude from Genshin Impact v5.6 Phase 1. Take this FREE and OPEN SOURCE application for a spin using the links below to manage the custom equipment of artifacts and weapons for the playable characters.

Resources

Changelog

Characters

Escoffier

Escoffier is a polearm-wielding Cryo character of five-star quality.

Ifa

Ifa is a catalyst-wielding Anemo character of four-star quality.

Weapons

Symphonist of Scents

Seasoned Symphony - Scales on Crit DMG.

Symphonist of Scents - Workspace

Sequence of Solitude

Silent Trigger - Scales on HP.

Sequence of Solitude - Workspace

Appeal

While allowing you to experiment with various builds and share them for later, Loadouts for Genshin Impact lets you take calculated risks by showing you the potential of your characters with certain artifacts and weapons equipped that you might not even own. Loadouts for Genshin Impact has been and always will be a free and open source software project, and we are committed to delivering a quality experience with every release we make.

Disclaimer

With an extensive suite of over 1420 diverse functionality tests and impeccable 100% source code coverage, we proudly invite auditors and analysts from MiHoYo and other organizations to review our free and open source codebase. This thorough transparency underscores our unwavering commitment to maintaining the fairness and integrity of the game.

The users of this ecosystem application can have complete confidence that their accounts are safe from warnings, suspensions or terminations when using this project. The ecosystem application ensures complete compliance with the terms of services and the regulations regarding third-party software established by MiHoYo for Genshin Impact.

All rights to Genshin Impact assets used in this project are reserved by miHoYo Ltd. and Cognosphere Pte., Ltd. Other properties belong to their respective owners.

First full week of May infra bits 2025

Posted by Kevin Fenzi on 2025-05-10 16:37:30 UTC
Scrye into the crystal ball

This week was a lot of heads down playing with firmware settings and doing some benchmarking on new hardware. Also, the usual fires and meetings and such.

Datacenter Move

Spent a fair bit of time this week configuring and looking at the new servers we have in our new datacenter. We only have management access to them, but I still (somewhat painfully) installed a few with RHEL9 to do some testing and benchmarking.

One question I was asked a while back was about our use of Linux software RAID over hardware RAID. Historically, there were a few reasons we chose mdadm RAID over hardware RAID:

  • It's possible/easy to move disks to a different machine in the event of a controller failure and recover the data, or to replace a failed controller with a new one and have things transparently work. With hardware RAID you need the same exact controller and the same firmware version.

  • Reporting and tools are all open source for mdadm. You can tell when a drive fails, and you can easily re-add one, reshape, etc. With hardware RAID you are stuck with a binary-only vendor tool, and every vendor's is different.

  • In the distant past, being able to offload to a separate CPU was nice, but nowadays servers have vastly faster/better CPUs, so software RAID should actually perform better than hardware RAID (barring different settings).

So, I installed one machine with mdadm RAID and another with hardware RAID and did some fio benchmarking. The software RAID won overall. Hardware was actually somewhat faster on writes, but software RAID murdered it on reads. It turns out the default cache settings were write-through for software and write-back for hardware, so the difference in writes seemed attributable to that.
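For reference, a minimal fio invocation in the same spirit (the job parameters here are illustrative, not the exact ones from the benchmark; point --filename at the array under test):

$ fio --name=randread --filename=/dev/md0 --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 \
    --time_based --group_reporting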

We will hopefully finish configuring firmware on all the machines early next week; then the next milestone should be getting them on the network so we can start bootstrapping the services there.

Builders with >32bit inodes again

We had a few builders hit the 'larger than 32-bit inode' problem again. Basically, btrfs starts allocating inode numbers at install time, and builders go through a lot of them by creating and deleting piles of files during builds. When the inode number exceeds 2^32, i686 builds start to fail because they cannot get a 32-bit inode. I reinstalled those builders, so hopefully we will be OK for a while again. I really am looking forward to i686 builds completely going away.
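A quick way to see how far inode numbers have advanced on a builder is to create a file on the btrfs volume and check its inode number (a rough check; anything approaching 2^32 means i686 builds are in trouble):

$ touch /var/tmp/inode-check && stat -c %i /var/tmp/inode-check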

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114484593787412504

GNOME Welcomes Its Google Summer of Code 2025 Contributors!

Posted by Felipe Borges on 2025-05-09 10:18:45 UTC

We are happy to announce that five contributors are joining the GNOME community as part of GSoC 2025!

This year’s contributors will work on backend isolation in GNOME Papers, adding eBPF profiling to Sysprof, adding printing support in GNOME Crosswords, and Vala’s XML/JSON/YAML integration improvements. Let’s give them a warm welcome!

In the coming days, our new contributors will begin onboarding in our community channels and services. Stay tuned to Planet GNOME to read their introduction blog posts and learn more about their projects.

If you want to learn more about Google Summer of Code internships with GNOME, visit gsoc.gnome.org.

🎲 PHP version 8.3.21RC1 and 8.4.7RC1

Posted by Remi Collet on 2025-04-25 05:29:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections (for parallel installation, the perfect solution for such tests) and as base packages.

RPMs of PHP version 8.4.7RC1 are available

  • as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.3.21RC1 are available

  • as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.2 is now in security mode only, so no more RC will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.4.4RC2 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.0-beta and EPEL-10.0
  • EL-9 packages are built using RHEL-9.5
  • EL-8 packages are built using RHEL-8.10
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.7 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • The RC version is usually the same as the final version (no changes accepted after the RC, except for security fixes).
  • versions 8.3.21 and 8.4.7 are planned for May 8th, in two weeks.

Software Collections (php83, php84)

Base packages (php)

⚙️ PHP version 8.3.20 and 8.4.6

Posted by Remi Collet on 2025-04-11 05:41:00 UTC

RPMs of PHP version 8.4.6 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.20 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so there are no updates for versions 8.1.32 and 8.2.28.

⚠️ PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.0-beta
  • EL-9 RPMs are built using RHEL-9.5
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.7 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

⚙️ PHP version 8.3.21 and 8.4.7

Posted by Remi Collet on 2025-05-09 05:24:00 UTC

RPMs of PHP version 8.4.7 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.21 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so there are no updates for versions 8.1.32 and 8.2.28.

⚠️ PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.0-beta
  • EL-9 RPMs are built using RHEL-9.5
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.7 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84)

syslog-ng 4.8.2 is now available

Posted by Peter Czanik on 2025-05-08 10:56:47 UTC

Finally, a new syslog-ng release! As you can see from its version number, this is a bug fix release. It took a bit longer than expected, as we wanted to release it in sync with syslog-ng PE, the commercial variant of syslog-ng. 4.8.2 serves not just as the foundation of the new syslog-ng PE release, but also provides fixes to 4.8.1, which is included in major Linux distributions. This update ensures that all our recent bug fixes reach the majority of our users.

Read more at https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-4-8-2-is-now-available

syslog-ng logo

Nominate Your Fedora Heroes: Mentor and Contributor Recognition 2025

Posted by Fedora Magazine on 2025-05-07 23:41:42 UTC

The Fedora Project is built on the dedication, mentorship, and relentless efforts of contributors who continuously go above and beyond. From reviewing pull requests to onboarding new community members, from writing documentation to organizing events — it’s these quiet champions who make Fedora thrive. As part of the Fedora Mentor Summit, we will announce the results. The wiki underscores the sentiment and the thought that went into this recognition programme.

As we gear up to recognize outstanding mentors and contributors in our community, we invite you to nominate those individuals who’ve made a lasting impact — the ones who’ve guided, inspired, or stood out through their unwavering contributions. Whether it’s a long-time mentor who helped you take your first steps, or a contributor whose work has left a mark across Fedora’s landscape — now is the time to celebrate them! Read more about the nomination process and submit your nomination at the link below:

👉 Submit your nominations here: https://forms.gle/xB8ng7GH9niT2Sza8

🗓 Deadline: 16 May 2025

Let’s spotlight the amazing humans who power Fedora. Your nomination could be the recognition someone has long deserved — and a moment of pride for our whole community.

Use reserved domains and IPs in examples

Posted by Ben Cotton on 2025-05-07 12:00:00 UTC

A while back I posted in frustration on various social media platforms — I was reading software documentation and it used some made-up domain as example text. This is bad! But in the replies to my post, some people weren’t aware of reserved domains and IP addresses, so this seems like a good opportunity to share what I know.

Why reserve domains and IPs?

The most important answer is to protect the users. Imagine I was writing documentation or building an example configuration file for some software. I might think “duckalignment.academy” is a fun domain name to use as a placeholder. It’s unregistered, so there’s no harm.

Until someone registers it. Then it could be whatever the registrant wants, including a malicious service. If someone forgets to update the example configuration before launching the software, they’re at the mercy of the domain owner.

The other reason to use reserved domains and IPs is that it makes placeholders more obvious. If a configuration file or documentation contains “duckalignment.academy”, it’s less obvious that you need to replace it than using “example.com.” Example values that are unambiguously examples are much friendlier to your users.

Which domains and IPs are reserved?

Several standards define reserved domains and IPs. RFC 2606 defines several reserved top-level domains, including .example for use in examples and documentation. It also reserves example.com, example.net, and example.org. RFC 6761 gives instructions on how those domains should be treated.

RFC 5737 reserves three IP address blocks for documentation: 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24. Using IPv6? RFC 3849 reserves 2001:DB8::/32. RFC 9637 added a reservation for 3fff::/20 last year in order to preserve a range big enough to encompass modern real-world networks.
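For instance, a sample configuration that sticks to the reserved values might look like this (the setting names are illustrative):

# Placeholder values from RFC 2606, RFC 5737, and RFC 3849
server_name = app.example.com
ipv4_address = 192.0.2.10
ipv6_address = 2001:db8::10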

Using example domains and IPs

Please don’t use domains like “foo” or “bar” in your documentation and sample configuration files. They’re not helpful and can actually prove harmful to your users. The reserved domains and IP blocks are almost always what you need. If they aren’t for whatever reason, ensure that you own the domain you’re using and commit to owning it for at least the life of your project (and ideally far beyond that).

This post’s featured photo by mk. s on Unsplash.

The post Use reserved domains and IPs in examples appeared first on Duck Alignment Academy.

Start Planning Fedora 43 Test Days!

Posted by Fedora Magazine on 2025-05-07 08:00:00 UTC

Each Fedora release is only possible thanks to the dedication of many contributors. One of the most important ways you can get involved is by participating in Test Days! This article describes the steps in proposing and scheduling test days.

As Fedora 43 development moves ahead, it’s time to start planning and proposing Test Days. A Test Day is a focused event where contributors and users come together to test a specific feature, component, or area of the Fedora distribution. These events usually happen around a test-day Matrix channel for live interaction. The results are coordinated through a Fedora Wiki page and Test Days App. Test Days play a critical role in ensuring Fedora Linux continues to deliver a stable and high-quality experience.

Test Days can focus on many things — not just code! We regularly host Test Days for localization (l10n), internationalization (i18n), desktop environments like GNOME, and major system components like the Linux kernel. You can learn more about Fedora QA Test Days here!

How to propose a Test Day

Anyone can propose and host a Test Day! Whether you want to lead it yourself, collaborate with the Fedora QA team, or just need a little help getting started, you’re welcome to participate.

To propose a Test Day, simply file a ticket in the fedora-qa pagure and tag it with test days. You can see an example here.

If you’re new to organizing, we have a full guide to help you set up and run a successful event. The information at SOP: Test Day Management will go a long way to help you.

The current schedule of Test Days and available slots are available here. When selecting a date, please keep in mind the Fedora 43 development milestones, such as the Beta Freeze and Final Freeze.

Scheduling notes

We traditionally schedule Test Days on Thursdays. However, if you are organizing a series of related Test Days (for example, the Kernel or GNOME Test Weeks), we often schedule them over Tuesday, Wednesday, and Thursday. If Thursday slots are full, or special timing is needed for your topic, don’t worry — we can open up additional days.

Just note your preferred dates when filing your ticket, and we’ll work with you!

Help with ongoing Test Days

If you don’t want to host your own Test Day but would still like to help, you can participate in ongoing events, including:

  • GNOME Test Day
  • i18n Test Day
  • Kernel Test Week(s)
  • Upgrade Test Day
  • IoT Test Week
  • Cloud Test Day
  • Fedora CoreOS Test Week

These recurring Test Days help ensure that major areas of Fedora are working well across the release cycle.

Questions?

If you have any questions about Test Days — whether proposing, organizing, or participating — please don’t hesitate to contact the Fedora QA team via Matrix!
Or via email 📧 test@lists.fedoraproject.org, or IRC 💬 #fedora-qa on Libera Chat: Join here

We look forward to seeing you at Fedora 43 Test Days!

It’s alive! Welcome to the new Planet GNOME!

Posted by Felipe Borges on 2025-05-06 15:46:33 UTC

A few months ago, I announced that I was working on a new implementation of Planet GNOME, powered by GitLab Pages. This work has reached a point where we’re ready to flip the switch and replace the old Planet website.

You can check it out at planet.gnome.org

This was only possible thanks to various other contributors, such as Jakub Steiner, who did a fantastic job with the design and style, and Alexandre Franke, who helped with various papercuts, ideas, and improvements.

As with any software, there might be regressions and issues. It would be a great help if you report any problems you find at https://gitlab.gnome.org/Teams/Websites/planet.gnome.org/-/issues

If you are subscribed to the old Planet’s RSS feed, you don’t need to do anything. But if you are subscribed to the Atom feed at https://planet.gnome.org/atom.xml, you will have to switch to the RSS address at https://planet.gnome.org/rss20.xml

Here’s to blogs, RSS feeds, and the open web!

Building your own Atomic (bootc) Desktop

Posted by Fedora Magazine on 2025-05-05 08:00:00 UTC

Bootc and associated tools provide the basis for building a personalised desktop. This article will describe the process to build your own custom installation.

Disclaimer

Building and using a custom installation is “at your own risk”. It may be harder to find support for a custom installation than for a mainstream solution.

Motivation

There has been an increasing interest in atomic distros, which offer significant benefits in terms of stability and security.

These distros apply updates as a single transaction, known as atomic upgrades, which means that if an update doesn’t work as expected, the system can instantly roll back to its last stable state, saving users from potential issues. The immutable nature of the filesystem reduces the risk of system corruption and unauthorised modification, as the core system files are read-only.

If you are planning to spin off various instances from the same image (e.g. setting up computers for family members or coworkers), atomic distros provide a reliable desktop experience where every instance of the desktop is consistent with the others, reducing discrepancies in software versions and behaviour.

Mainstream sources like Fedora and Universal Blue offer various atomic desktops with curated configurations and package selections for the average user. But what if you’re ready to take control of your desktop and customise it entirely, from packages and configurations to firewall, DNS, and update schedules?

Thanks to bootc and the associated tools, building a personalised desktop experience is no longer difficult.

What is bootc?

Using existing container-building techniques, bootc allows you to build your own OS. The container images adhere to the OCI specification, so you can use standard container tools for building and transporting them. Once installed on a node, the container functions as a regular OS.

The filesystem structure follows ostree specifications:

  • The /usr directory is read-only, with all changes managed by the container image.
  • The /etc directory is editable, but any changes applied in the container image will be transferred to the node unless the file was modified locally.
  • Changes to /var (including /var/home) are made during the first boot. Afterwards, /var remains untouched.

You can find the full documentation for bootc here: https://bootc-dev.github.io/bootc/

Creating your own bootc desktop

The approach described in this article uses quay.io/fedora/fedora-bootc as a base image to create a customizable container for building your personalised Fedora KDE atomic desktop.

Although tailored to KDE Plasma, most of the concepts and methodologies described here also apply to other desktop environments.

The kde-bootc repository

I published kde-bootc as a repository available on GitHub, and I will use it as a reference. It provides additional details for this explanation and a source to clone and experiment with. You may wish to clone kde-bootc to follow along.

Folder structure:

  • scripts/
  • system/
  • systemd/
  • Containerfile

scripts: Scripts to be run from the Containerfile during the build
system: Files to be copied to /usr and /etc
systemd: Systemd unit files to be copied to /usr/lib/systemd

Each file follows a specific naming convention. For instance, the file /usr/lib/credstore/home.create.admin is named usr__lib__credstore__home.create.admin.

Explaining the Containerfile

The following describes, step by step, the contents of the example Containerfile.

Image base

The fedora-bootc project is part of the Cloud Native Computing Foundation (CNCF) Sandbox projects and generates reference “base images” of bootable containers designed for use with the bootc project.

In this example, I’m using quay.io/fedora/fedora-bootc as the base image. The Containerfile starts with:

FROM quay.io/fedora/fedora-bootc

Setup filesystem

If you plan to install software on day 2 (i.e., after the kde-bootc installation is complete), you may need to link /opt to /var/opt. Otherwise, /opt will remain an immutable directory that you can only populate during the container build.

RUN rmdir /opt
RUN ln -s -T /var/opt /opt

In some cases, the /var/roothome directory must exist for packages to install successfully. If this directory is missing, the container build may fail, so it is advisable to create it before installing the packages.

RUN mkdir /var/roothome

Prepare packages

To simplify the installation, and to keep a record of installed and removed packages for future reference, I found it useful to keep the package lists as a resource under /usr/local/share.

  • All additional packages to be installed on top of fedora-bootc and the KDE environment are documented in packages-added.

COPY --chmod=0644 ./system/usr__local__share__kde-bootc__packages-added /usr/local/share/kde-bootc/packages-added

  • Packages to be removed from fedora-bootc and the KDE environment are documented in packages-removed.

COPY --chmod=0644 ./system/usr__local__share__kde-bootc__packages-removed /usr/local/share/kde-bootc/packages-removed

  • For convenience, the packages included in the base fedora-bootc are documented in packages-fedora-bootc.

RUN jq -r .packages[] /usr/share/rpm-ostree/treefile.json > /usr/local/share/kde-bootc/packages-fedora-bootc

Install repositories

This section handles adding extra repositories needed before installing packages.

In this example, I’m adding Tailscale, but the same principle applies to any other source you may add to your repositories.

Adding repositories uses the config-manager verb, available as a DNF5 plugin. This plugin is not pre-installed in fedora-bootc, so it needs to be installed first.

RUN dnf -y install dnf5-plugins
RUN dnf config-manager addrepo --from-repofile=https://pkgs.tailscale.com/stable/fedora/tailscale.repo

Install packages

For clarity and task separation, I divided the installation into two steps:

Installation of environment and groups.

RUN dnf -y install @kde-desktop-environment

And the installation of all other individual packages. The command selects all lines not starting with # and passes them as arguments to dnf -y install. The --allowerasing option is necessary for cases like installing vim-default-editor, which conflicts with nano-default-editor; the latter must be removed first.

RUN grep -vE '^#' /usr/local/share/kde-bootc/packages-added | xargs dnf -y install --allowerasing

PACKAGES-ADDED
# LibreOffice
libreoffice
libreoffice-help-en
# Utilities
vim-default-editor
git
....

Remove packages

Some of the standard packages included in @kde-desktop-environment don’t behave well and sometimes conflict with an immutable desktop, so we will remove them.

This is also an opportunity to remove software you may never use, saving resources and storage.

RUN grep -vE '^#' /usr/local/share/kde-bootc/packages-removed | xargs dnf -y remove
RUN dnf -y autoremove
RUN dnf clean all

The criteria used to remove packages are listed below:

Packages that conflict with bootc and its immutable nature:
plasma-discover-offline-updates
plasma-discover-packagekit
PackageKit-command-not-found

Packages that bring unwanted dependencies:
tracker
tracker-miners
mariadb-server-utils
abrt
at
dnf-data

Packages providing deprecated services:
iptables-services
iptables-utils

Packages that are resource-heavy or bring unnecessary services:
rsyslog
dracut-config-rescue

Configuration

This section copies all necessary configuration files to /usr and /etc. As recommended by the bootc project, prefer /usr and use /etc as a fallback if needed.

Bash scripts that will be used by systemd services are stored in /usr/local/bin:

COPY --chmod=0755 ./system/usr__local__bin/* /usr/local/bin/

Custom configuration for new users’ home directories is added to /etc/skel/. As an example, you can customise bash.

COPY --chmod=0644 ./system/etc__skel__kde-bootc /etc/skel/.bashrc.d/kde-bootc

If you’re building your container image on GitHub and keeping it private, you’ll need to create a GITHUB_TOKEN to download the image. Further information is available at GitHub container registry.

COPY --chmod=0600 ./system/usr__lib__ostree__auth.json /usr/lib/ostree/auth.json

Users

I opted for systemd-homed users because they are better suited to immutable desktops than regular users, preventing potential drift from local modifications in /etc/passwd. Additionally, each user’s home benefits from a LUKS-encrypted volume.

The process begins when firstboot-setup runs, triggered by firstboot-setup.service during boot. It executes homectl firstboot, which checks whether any regular home areas exist. If none are found, it searches for service credentials starting with home.create. and creates the corresponding users at boot.

The parameter below imports service credentials into the systemd service:

FIRSTBOOT-SETUP.SERVICE
...
ImportCredential=home.create.*

For more details, refer to the homectl and systemd.exec manual pages.

The homed identity file (usr__lib__credstore__home.create.admin) sets the user’s parameters, including username, real name, storage type, etc.

Common systemd-homed parameters:

  • userName: A single word for your username and home directory. In this example, it is admin.
  • realName: Full name for the user
  • diskSize: The size of the LUKS storage volume, in bytes. For instance, 1 GiB equals 1024x1024x1024 bytes, which is 1073741824 bytes.
  • rebalanceWeight: Relevant only when multiple user accounts share the available storage. If diskSize is defined, this parameter can be set to false.
  • uid/gid: User and group ID. The default range for regular users is 1000-60000, and for systemd-homed users it is 60001-60513. However, you can assign uid/gid for systemd-homed users from both ranges.
  • memberOf: The groups the user belongs to. As a power user, it should be part of the wheel group.
  • hashedPassword: The hashed version of the password stored under secret. Setting up an initial password allows homectl firstboot to create the user without prompting. This password should be changed afterwards (homectl passwd admin). The hashed password can be created using the mkpasswd utility (see the sketch after this list).
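A minimal sketch of generating such a hash (this assumes the mkpasswd utility from the whois package; yescrypt is one commonly supported method):

mkpasswd -m yescrypt

Paste the resulting string into the hashedPassword field of the identity file.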

We are storing the identity file in one of the directories where systemd-homed expects to find credentials.

COPY --chmod=0644 ./system/usr__lib__credstore__home.create.admin /usr/lib/credstore/home.create.admin

For more information on user records, visit: https://systemd.io/USER_RECORD/

This section also creates a temporary password for the root user. As I will explain later, having a root user as an alternative login is important.

echo "Temp#SsaP" | passwd root -s

Subuid and Subgid:

Another key parameter to set up is the range in /etc/subuid and /etc/subgid for the admin user. This range is necessary for running rootless containers, since each uid inside the container is mapped to a uid outside the container within this range. Systemd-homed predefines ranges for uid/gid.

The available range is 524288…1879048191. Choosing 1000001 makes it easy to identify the service running in the container. For instance, if the container runs Apache with uid=48, the volume or folder bound to it will have uid=1000048.

echo "admin:1000001:65536">/etc/subuid
echo "admin:1000001:65536">/etc/subgid

For more information on available ranges, visit: https://systemd.io/UIDS-GIDS/

The next step sets up authselect to enable authentication of the admin user on the login page. To achieve this, we enable the with-systemd-homed and with-fingerprint features (the latter if your computer has a fingerprint reader) for the local profile.

authselect enable-feature with-systemd-homed
authselect enable-feature with-fingerprint

Systemd services

I decided to install at least two services: one to complete the configuration during boot and run commands that require systemd (firstboot-setup.service), and another to automate updates (bootc-fetch.service).

We are enabling, by default, the first systemd service firstboot-setup:

COPY --chmod=0644 ./systemd/usr__lib__systemd__system__firstboot-setup.service /usr/lib/systemd/system/firstboot-setup.service
RUN systemctl enable firstboot-setup.service

USR__LIB__SYSTEMD__SYSTEM__FIRSTBOOT-SETUP.SERVICE
[Unit]
Description=Setup USERS and /VAR at boot
After=multi-user.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/firstboot-setup
ImportCredential=home.create.*
[Install]
WantedBy=multi-user.target

And it runs the script below:

FIRSTBOOT-SETUP
# Setup hostname
HOST_NAME=kde-bootc
hostnamectl hostname $HOST_NAME
# Create user(s)
homectl firstboot
# Setup firewall to allow kdeconnect to function
firewall-cmd --set-default-zone=public
firewall-cmd --add-service=kdeconnect --permanent

The second systemd service, bootc-fetch, is triggered daily by a timer:

COPY --chmod=0644 ./systemd/usr__lib__systemd__system__bootc-fetch.service /usr/lib/systemd/system/bootc-fetch.service
COPY --chmod=0644 ./systemd/usr__lib__systemd__system__bootc-fetch.timer /usr/lib/systemd/system/bootc-fetch.timer


USR__LIB__SYSTEMD__SYSTEM__BOOTC-FETCH.TIMER
[Unit]
Description=Fetch bootc image daily
[Timer]
OnCalendar=*-*-* 12:30:00
Persistent=true
[Install]
WantedBy=timers.target

USR__LIB__SYSTEMD__SYSTEM__BOOTC-FETCH.SERVICE
[Unit]
Description=Fetch bootc image
After=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/bin/bootc update --quiet

This service replaces bootc-fetch-apply-updates, which would download and apply updates as soon as they are available. That approach is problematic because it can shut your computer down without warning, so it is better disabled by masking its timer:

RUN systemctl mask bootc-fetch-apply-updates.timer

How to create an ISO?

The instructions that follow build the container locally. You need to run them as root so bootc-image-builder can use the image to make the ISO.

cd /path-to-your-repo
sudo podman build -t kde-bootc .

Then, in a different directory outside the repository, create a folder named output for the ISO image. You also need to create the configuration file config.toml to feed the installer.

CONFIG.TOML
[customizations.installer.kickstart]
contents = "graphical"

[customizations.installer.modules]
disable = [
"org.fedoraproject.Anaconda.Modules.Users"
]

It instructs the installer to use the graphical interface and disables the module for user creation. We do not need to set up a user during installation, as this is already taken care of.

Within the directory where ./output/ and ./config.toml exist, run the bootc-image-builder utility, which is available as a container. It must be run as root.

sudo podman run --rm -it --privileged --pull=newer \
--security-opt label=type:unconfined_t \
-v ./output:/output \
-v /var/lib/containers/storage:/var/lib/containers/storage \
-v ./config.toml:/config.toml:ro \
quay.io/centos-bootc/bootc-image-builder:latest \
--type iso \
--chown 1000:1000 \
localhost/kde-bootc

If everything goes well, the ISO image will be available in the ./output directory. You can use Fedora Media Writer to write the image to a portable drive such as a USB flash disk.

At the time of writing, the installer uses Anaconda and functions like any other Fedora flavor installation.

For more information on bootc-image-builder, visit: https://github.com/osbuild/bootc-image-builder

Post installation

The first step is to restore the SELinux context of the systemd-homed home directory. Without this, you may not be able to log in as admin. To complete this task, log in as root, activate the admin home area, and then run restorecon to restore the SELinux context.

homectl activate admin
<< enter password for admin
restorecon -R /home/admin
homectl deactivate admin

At this point, you can change the passwords for root and admin:

passwd root
homectl passwd admin

After completing these steps, you can log out from root and log in as admin.

If your computer has a fingerprint reader, setting it up is not possible from Plasma’s user settings, as systemd-homed is not yet recognised by the desktop. However, you can manually enroll your fingerprint by running fprintd-enroll and placing your finger on the reader as you normally would.

sudo fprintd-enroll admin

As above, you cannot set the avatar from Plasma’s user settings, but you can copy an available avatar (PNG file) from Plasma’s avatar directory to the accounts service directory. The file name needs to be the same as the username:

/usr/share/plasma/avatars/<avatar.png> -> /var/lib/AccountsService/icons/admin

Finally, enable the service to keep your system updated and any other desired services:

systemctl enable --now bootc-fetch.timer
systemctl enable --now tailscaled

Troubleshooting

Drifts on /etc

Please note that a configuration file in /etc drifts when it is modified locally. Once that happens, bootc no longer manages the file, and new releases of it won't be transferred to your installation. While this might be desired in some cases, it can also lead to issues.

For instance, if /etc/passwd is locally modified, uid or gid allocations for services may not get updated, resulting in service failures.

Use ostree admin config-diff to list the files in your local /etc that are no longer managed by bootc because they were modified or added.

If a particular configuration file needs to be managed by bootc, you can revert it by copying the version created by the container build from /usr/etc to /etc.
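A minimal sketch of that recovery, assuming the drifted file is /etc/chrony.conf (a hypothetical example):

# List files in /etc that were locally modified or added
ostree admin config-diff

# Restore the pristine copy from the container build so bootc manages it again
sudo cp /usr/etc/chrony.conf /etc/chrony.conf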

Adding packages after first installation

The /var directory is populated in the container image and transferred to your OS during initial installation. Subsequent updates to the container image will not affect /var. This is the expected behavior of bootc and generally works fine. However, some RPM packages execute scriptlets after installation, resulting in changes to /var that will not be transferred to your OS.

Instead of trying to identify and update the missing bits in /var, I found it easier to overlay /usr (bootc usr-overlay) and reinstall the affected packages (dnf reinstall ...) after updating and rebooting bootc.
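A sketch of that workaround, with a hypothetical package name:

# Put a temporary writable overlay on /usr (it disappears on the next reboot)
sudo bootc usr-overlay

# Re-run the scriptlets that populate /var; 'libvirt' is just an example
sudo dnf reinstall libvirt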

References

GitHub – kde-bootc: https://github.com/sigulete/kde-bootc
GitHub – bootc: https://github.com/bootc-dev/bootc
GitLab – fedora-bootc: https://gitlab.com/fedora/bootc

OpenWRT 24.10 derrière une Freebox: IPv6, DMZ et Bridge

Posted by Guillaume Kulakowski on 2025-05-04 08:12:39 UTC

Although I am the very recent and happy owner of a Freebox Pop, I have chosen to keep delegating the management of my network and my Wi-Fi sharing not to the Pop, but to OpenWRT. For me the advantages are the following: I will come back to quite a few of these […]

The article OpenWRT 24.10 derrière une Freebox: IPv6, DMZ et Bridge first appeared on Guillaume Kulakowski's blog.

Expecting Accountability In Open Source

Posted by Akashdeep Dhar on 2025-05-03 18:30:39 UTC

For the longest time in my professional career, and while contributing to free and open source software communities, I have struggled with expecting accountability from others. Much of this came from the anxiety I experienced during the process of holding someone accountable. It often stemmed from concerns about potential conflicts, fears of being perceived negatively, or doubts about self-worth. The situation only worsened when it was my friends whom I was seeking to hold responsible for their decisions. It made me wonder whether it was worth risking relationships at all just to get things done, or whether I should settle for compromise instead.

That is, of course, a rhetorical question. I do not want to look like a surgeon who amputates an entire arm just because of a papercut on the little finger. I cannot expect situations to change if I give up on individuals entirely just to avoid potential friction. After all, letting folks know how uncomfortable the situation feels is often the best way to prevent the dangerous precedents created by instances of irresponsibility. Since accountability is a two-way street, I want to use this reflective (and possibly therapeutic) post to share some grounded strategies that I rely on – and maybe they will be useful to you too someday.

Assume best intentions

Photo by Danielle-Claude Bélanger on Unsplash

Last-minute meeting cancellations are frustrating – especially when someone gives you the same excuse for the forty-second time. But still, assume (not necessarily believe) that they missed the one-on-one meeting because their truck indeed broke down again. It is of paramount importance to reduce defensiveness by assuming that they are doing their best and figuring out what could be improved. Maybe the meeting time just doesn't work for them – but that is not worth straining the relationship over. It is crucial to create an environment that is safe for owning mistakes, not one focussed on determining who is right or wrong in an argument.

Elucidate your objective

Photo by israel palacio on Unsplash

If you are anything like me, the very thought of being misinterpreted must fill you with excruciating pain. Clarifying where you are coming from helps communicate that you are asking for responsibility (not justification) and improvement (not punishment). This metaphorical white flag prevents others from feeling blindsided or attacked while establishing that both sides are trying to achieve the same goal in different subjective ways. Of course, I have been guilty of over-explaining myself, and that has only opened doors to weakened conviction and unnecessary debate. So, definitely stick to being clear while staying concise in these conversations.

Assemble your facts

Photo by Mr Cup / Fabien Barral on Unsplash

Keeping track of what actually happened is perhaps the best way to ensure that the conversation stays grounded and objective. As someone who is good at crucial conversations but avoids unnecessary conflicts, this silver bullet has increased the likelihood of my concerns being taken seriously. I have also noted that people are more open to feedback when I emphasize observable behaviours over subjective opinions. By focusing on what requires changing and respecting their dignity, the energy in the conversation is directed more toward fixing things and personal improvements, and less toward venting problems or emotional escalation.

Discomfort is expected

Photo by Олег Мороз on Unsplash

While it might sound silly until put into practice, deep breaths truly help calm the fight-or-flight system. During synchronous conversations like calls, a pause creates a mental buffer that can help you craft effective responses and avoid regrettable reactions. Heck, with asynchronous conversations like emails, sleep on it - even though it can be difficult to shake off the uncomfortable feeling - so you can return with an emotional state that you are in control of. For what it's worth, the energy spent steadying your nerves under pressure builds your resilience for future exchanges, which might be just as uncomfortable but are just as necessary.

Consider the conditions

Photo by Zulfa Nazer on Unsplash

If time travel were possible, I would tell my younger self that conversations are inherently difficult. There are many ways things can backfire, so one must choose their tone and timing intentionally to ensure the message is respected and requests are acted upon. While you should approach people in good faith, it is important to be explicit about that to avoid assumptions of conflict. If someone is stressed or unprepared, wait it out - you need to ensure that your message lands effectively while respecting their state of mind. Of course, don't wait forever - but definitely establish a professional standard for emotional protection in conversations.

Rehearse with friends

Photo by Felix Rostig on Unsplash

Or contributors. Or associates. Or managers. Basically, anyone you feel safe with. Rely on them to vent your problems while preparing your messaging. Once they help you avoid unclear language, emotional tone, or unintended blame, your message can become more refined in purpose, and you can become more confident in your stance. Practicing crucial conversations with your safe people helps build your (and arguably their) resilience, so those become more natural going forward. Also, if you are like me who overthinks their problems, these safe conversations help you cut to the chase without spiralling into perfectionism and agitation.

Recognize your safety

Photo by Nick Fewings on Unsplash

I wrote this, but I know it will take me at least a decade more to fully internalize this idea in practice. I often find myself checking my messages - every now and then - after sending a request expecting accountability because, somehow, my imposter syndrome leads me to believe that it is not my place to ask questions. Of course, I could not be more mistaken in believing so when outcomes, relationships and commitments are on the line. My mental trick is evaluating which is worse - my being misinterpreted or them breaking commitments - and suddenly, my doubts start clearing away, and I find myself composing an email or message to folks.

Believe me, as someone who has been misunderstood many times, I know just how tricky it can be to resist the temptation to let things slide. But casting problems aside would only mean that I do not care enough about the professional career and community circles I contribute to. That is not me, and I am pretty sure you feel the same way too. Our perspectives are valid, and we do not need permission to raise concerns. As building confidence is a continuous journey, we should set a healthy precedent of shared responsibility and an open culture that rewards commitment, fairness, and the willingness of the people involved to raise concerns.

Embrace imperfect outcomes

Photo by Matt L on Unsplash

This one, like the previous point, is a mindset shift, and it will take a considerable period before it comes naturally. Conversations expecting accountability often end up splitting the difference to ensure both sides are comfortable with what was agreed upon. I might be willing to deliver 150% of my potential, but it would be criminally wrong on my part to expect the same from others. Acknowledging that growth and fixing things can be complicated, accountability should be viewed as a long-term goal to work toward, rather than an immediate remedy. Such exchanges need both parties to be flexible and focused, with emphasis on potential over perfection.

Being an evolving process, accountability requires a flywheel effect, where the role of driving it shifts among folks to maintain momentum. Since accountability is a two-way street, questions will be raised about your commitment too - but holding the same standards you expect from others will resolve that situation. In a culture where growth is rewarded, your pursuit of accountability can become sustainable, with people joining your efforts and personal relationships getting better. You will have more success making an influential change with most (if not all) hands on deck and taking it gradually, rather than expecting things to transform overnight.

review of the SLZB-06M

Posted by Kevin Fenzi on 2025-05-03 17:55:40 UTC

I've been playing with Homeassistant a fair bit of late and I've collected a bunch of interesting gadgets. Today I'd like to talk about / review the SLZB-06M.

So the first obvious question: what is a SLZB-06M?

It is a small, Ukrainian-designed device that is a "Zigbee 3.0 to Ethernet, USB, and WiFi Adapter". So, basically, you connect it to your wired network, over USB, or via wifi, and it gateways that to a Zigbee network. It's really just an ESP32 with a shell and ethernet/wifi/bluetooth/zigbee, but all assembled for you and ready to go.

I'm not sure if my use case is typical for this device, but it worked out for me pretty nicely. I have a pumphouse that is down a hill and completely out of line-of-sight of the main house/my wifi. I used some network over power/powerline adapters to extend a segment of my wired network over the power lines that run from the house to it, and that worked great. But then I needed some way to gateway the zigbee devices I wanted to put there back to my homeassistant server.

The device came promptly and was nicely made. It has a pretty big antenna and everything is pretty well labeled. On powering it up, Home Assistant detected it no problem and added it. However, then I was a bit confused. I already have a USB zigbee adapter on my Home Assistant box, and the integration was just showing things like the temperature and firmware. I had to resort to actually reading the documentation! :)

Turns out the way the zigbee integration works is via zigbee2mqtt. You add the repo for that, install the add-on and then configure a user. Then you configure the device via its web interface on the network to match that. Then the device shows up in a zigbee2mqtt panel. Joining devices to it is a bit different from a normal wifi setup: you need to tell it to 'permit join', either anything, or specific devices. Then you press the pair button or whatever on the device and it joins right up. Note that devices can only be joined to one zigbee network, so you have to make sure you do not add them to other zigbee adapters you have. You can set a separate queue for each one of these adapters, so you can have as many networks as you have coordinator devices for.
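For reference, a minimal sketch of what the matching zigbee2mqtt configuration can look like. The addresses, port, and credentials here are assumptions for illustration; the device's web interface generates the real values for you:

# zigbee2mqtt configuration.yaml (placeholder values)
mqtt:
  base_topic: zigbee2mqtt-pumphouse   # use a separate topic per coordinator
  server: mqtt://homeassistant.local:1883
  user: z2m
  password: changeme
serial:
  port: tcp://192.168.1.50:6638       # the SLZB-06M is reached over the network
  adapter: zstack                     # assumption: CC2652-based coordinator firmware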

You can also have the SLZB-06M act as a bluetooth gateway. I may need to do that if I ever add any bluetooth devices down there.

The web interface lets you set various network configuration options. You can set it as a zigbee coordinator or just a router in another network. You can enable/disable bluetooth, do firmware updates (though Home Assistant will do these directly via the normal integration), and adjust the LEDs on the device (off, night mode, etc.). It even gives you a sample zigbee2mqtt config to start with.

After that it's been working great. I now have a temp sensor and a smart plug (on a heater we keep down there to keep things from freezing when it gets really cold). I'm pondering adding a sensor for our water holding tank and possibly some flow meters for the pipes from the well and to the house from the holding tank.

Overall this is a great device and I recommend it if you have a use case for it.

Slava Ukraini!

Beginning of May infra bits 2025

Posted by Kevin Fenzi on 2025-05-03 16:52:02 UTC
Scrye into the crystal ball

Wow, it's already May now. Time races by sometimes. Here's a few things I found notable in the last week:

Datacenter Move

Actual progress to report this week! I managed to get access to the management interfaces on all our new hardware in the new datacenter. Almost everything is configured right in the dhcp config now (the aarch64 and power10 machines still need some tweaking there).

This next week will be updating firmware, tweaking firmware config, setting up access, etc on all those interfaces. I want to try and do some testing on various raid configs for storage and standardize the firmware configs. We are going to need to learn how to configure the lpars on the power10 machines next week as well.

Then, the following week hopefully we will have at least some normal network for those hosts and can start doing installs on them.

The week after that I hope to start moving some 'early' things: possibly openqa and coreos and some of our more isolated openshift applications. That will continue the week after that, then it's time for flock, some more moving and then finally the big 'switcharoo' week on the 16th.

Also some work on moving some of our soon-to-be-replaced power9 hardware into a place where it can be added to copr for more/better/faster copr builders.

OpenShift cluster upgrades

Our openshift clusters (prod and stg) were upgraded from 4.17 to 4.18. OpenShift upgrades are really pretty nice. There was not much in the way of issues (although a staging compute node got stuck on boot and had to be power cycled).

One interesting thing with this upgrade was that support for cgroups v1 was listed as going away in 4.19. It's not been the default in a while, but our clusters were installed so long ago that they were still using it as a default.

I like that the upgrade is basically to edit one map and change a 1 to a 2 and then openshift reboots nodes and it's done. Very slick. I've still not done the prod cluster, but likely next week.
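For the curious, a sketch of what that one-line change looks like, assuming the map in question is the cluster-scoped Node configuration resource (edited with oc edit nodes.config.openshift.io cluster):

apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v2"   # was "v1"; OpenShift then reboots the nodes for you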

Proxy upgrades

There's been some instability with our proxies, in particular in EU and APAC. Over the coming weeks we are going to be rolling out newer/bigger/faster instances, which should hopefully reduce or eliminate the problems folks have sometimes been seeing.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/114445144640282791

Docker vs Virtual Machines: What Every Ham Should Know

Posted by Piju 9M2PJU on 2025-05-03 03:16:36 UTC

Before container technologies like Docker came into play, applications were typically run directly on the host operating system—either on bare metal hardware or inside virtual machines (VMs). While this method works, it often leads to frustrating issues, especially when trying to reproduce setups across different environments.

This becomes even more relevant in the amateur radio world, where we often experiment with digital tools, servers, logging software, APRS gateways, SDR applications, and more. Having a consistent and lightweight deployment method is key when tinkering with limited hardware like Raspberry Pi, small form factor PCs, or cloud VPS systems.


The Problem with Traditional Software Deployment

Let’s say you’ve set up an APRS iGate, or maybe you’re experimenting with WSJT-X for FT8, and everything runs flawlessly on your laptop. But the moment you try deploying the same setup on a Raspberry Pi or a remote server—suddenly things break.

Why?

Common culprits include:

  • Different versions of the operating system
  • Mismatched library versions
  • Varying configurations
  • Conflicting dependencies

These issues can be particularly painful in amateur radio projects, where specific software dependencies are critical, and stability matters for long-term operation.

You could solve this by running each setup inside a virtual machine, but VMs are often overkill—especially for ham radio gear with limited resources.


Enter Docker: The Ham’s Best Friend for Lightweight Deployment

Docker is an open-source platform that allows you to package applications along with everything they need—libraries, configurations, runtimes—into one neat, portable unit called a container.

Think of it like packaging up your entire ham radio setup (SDR software, packet tools, logging apps, etc.) into a container, then being able to deploy that same exact setup on:

  • A Raspberry Pi
  • A cloud server
  • A homelab NUC
  • Another ham’s machine

Why It’s Great for Hams:

  • 🧊 Lightweight – great for Raspberry Pi or low-power servers
  • 🚀 Fast startup – ideal for services that need to restart quickly
  • 🔁 Reproducible environments – makes sharing setups with fellow hams easier
  • 🔒 Isolation – keeps different radio tools from interfering with each other

Many amateur radio tools like Direwolf, Xastir, Pat (Winlink), and even JS8Call can be containerized, making experimentation safer and more efficient.


Virtual Machines: Still Relevant in the Shack

Virtual Machines (VMs) have been around much longer and still play a crucial role. Each VM acts like a complete computer, with its own OS and kernel, running on a hypervisor like:

  • VirtualBox
  • VMware
  • KVM
  • Hyper-V

With VMs, you can spin up an entire Windows or Linux machine, perfect for:

  • Running legacy ham radio software (e.g., old Windows-only apps)
  • Simulating different operating systems for testing
  • Isolating potentially unstable setups from your main system

However, VMs require more horsepower. They’re heavy, boot slowly, and take up more disk space—often not ideal for small ham radio PCs or low-powered nodes deployed in the field.


Quick Comparison: Docker vs Virtual Machines for Hams

Feature       | Docker                                 | Virtual Machine
--------------|----------------------------------------|---------------------------
OS            | Shares host kernel                     | Full OS per VM
Boot Time     | Seconds                                | Minutes
Resource Use  | Low                                    | High
Size          | Lightweight                            | Heavy (GBs)
Ideal For     | Modern ham tools, APRS bots, SDR apps  | Legacy systems, OS testing
Portability   | High                                   | Moderate

Ham Radio Use Cases for Docker

Here’s how Docker fits into amateur radio workflows:

  • 🚀 Run an APRS iGate with Direwolf and YAAC in isolated containers.
  • 📡 Deploy SDR receivers like rtl_433, OpenWebRX, or CubicSDR as containerized services.
  • 📨 Set up a Winlink gateway using Pat + ax25 tools, all in one container.
  • 🔄 Automate and scale your APRS bot, or APRS gateway using Docker + cron + scripts.

Docker makes it easier to test and share these setups with other hams—just export your Docker Compose file or image.
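As a sketch of what a shared setup might look like, here is a hypothetical Docker Compose file for a Direwolf-based iGate; the image name and paths are illustrative placeholders, not an official image:

# docker-compose.yml (illustrative only; adjust the image, devices, and config)
services:
  direwolf:
    image: example/direwolf:latest            # hypothetical image name
    devices:
      - /dev/snd                              # sound card wired to the radio
    volumes:
      - ./direwolf.conf:/etc/direwolf.conf:ro # callsign, APRS-IS login, audio setup
    network_mode: host                        # lets Direwolf reach APRS-IS and local KISS clients
    restart: unless-stopped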


When to Use Docker, When to Use a VM

Use Docker if:

  • You’re building or experimenting with modern ham radio apps
  • You want to deploy quickly and repeatably
  • You’re using Raspberry Pi, VPS, or low-power hardware
  • You’re setting up CI/CD pipelines for your scripts or bots

Use VMs if:

  • You need to run legacy apps (e.g., old Windows logging software)
  • You want to simulate full system environments
  • You’re working on something that could crash your main system

Final Thoughts

Both Docker and VMs are powerful tools that have a place in the modern ham shack. Docker offers speed, portability, and resource-efficiency—making it ideal for deploying SDR setups, APRS bots, or automation scripts. VMs, on the other hand, still shine when you need full system emulation or deeper isolation.

At the end of the day, being a ham means being an experimenter. And tools like Docker just give us more ways to explore, automate, and share our radio projects with the world.

The post Docker vs Virtual Machines: What Every Ham Should Know appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

Infra and RelEng Update – Week 18

Posted by Fedora Community Blog on 2025-05-02 14:31:27 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 28th April – 2nd May 2025

Infra&Releng Infographic

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work.
It's responsible for services running in Fedora and CentOS infrastructure and for preparing things for new Fedora releases (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us on the #redhat-cpe channel on Matrix.

The post Infra and RelEng Update – Week 18 appeared first on Fedora Community Blog.

Local Voice Assistant Step 2: Speech to Text and back

Posted by Jonathan McDowell on 2025-05-01 18:05:51 UTC

Having set up an ATOM Echo Voice Satellite and hooked it up to Home Assistant, we now need to actually do something with the captured audio. Home Assistant largely deals with voice assistants using the Wyoming Protocol, which describes itself as essentially JSONL + PCM audio. It works nicely, in that everything can exist as separate modules that just communicate over network sockets, and there are a whole bunch of Python implementations of the necessary pieces.

The first bit I looked at was speech to text; how do I get what I say to the voice satellite into something that Home Assistant can try and parse? There is a nice self contained speech recognition tool called whisper.cpp, which is a low dependency implementation of inference using OpenAI’s Whisper model. This is wrapped up for Wyoming as part of wyoming-whisper-cpp. Here we get into something that unfortunately seems common in this space; the repo contains a forked copy of whisper.cpp with enough differences that I couldn’t trivially make it work with regular whisper.cpp. That means missing out on new development, and potential improvements (the fork appears to be at v1.5.4, upstream is up to v1.7.5 at the time of writing). However it was possible to get up and running easily enough.

[I note there is a Wyoming Whisper API client that can use the whisper.cpp server, and that might be a cleaner way to go in the future, especially if whisper.cpp ends up in Debian.]

I stated previously that I wanted all of this to be as clean an install on Debian stable as possible. Given most of this isn't packaged, that's meant I've packaged things up as I go. I'm not at the stage where anything is suitable for upload to Debian proper, but equally I've tried to make them a reasonable starting point. No pre-built binaries available, just Salsa git repos: https://salsa.debian.org/noodles/wyoming-whisper-cpp in this case. You need python3-wyoming from trixie if you're building for bookworm, but it doesn't need to be rebuilt.

You need a Whisper model that's been converted to ggml format; they can be found on Hugging Face. I've ended up using the base.en model. In random testing I found small.en gave more accurate results but took a little longer; it doesn't seem to make much of a difference for voice control rather than plain transcribing.
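For example, one way to fetch it, assuming the ggerganov/whisper.cpp repository on Hugging Face (which hosts pre-converted ggml files):

# Download the pre-converted base.en model
curl -L -o ggml-base.en.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin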

[One of the open questions about uploading this to Debian is around the use of a prebuilt AI model. I don’t know what the right answer is here, and whether the voice infrastructure could ever be part of Debian proper, but the current discussion on the interpretation of the DFSG on AI models is very relevant.]

I run this in the same container as my Home Assistant install, using a systemd unit file dropped in /etc/systemd/system/wyoming-whisper-cpp.service:

[Unit]
Description=Wyoming whisper.cpp server
After=network.target

[Service]
Type=simple
DynamicUser=yes
ExecStart=wyoming-whisper-cpp --uri tcp://localhost:10030 --model base.en

MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

It needs the Wyoming Protocol integration enabled in Home Assistant; you can “Add Entry” and enter localhost + 10030 for host + port and it’ll get added. Then in the Voice Assistant configuration there’ll be a whisper.cpp option available.

Text to speech turns out to be weirdly harder. The right answer is something like Wyoming Piper, but that turns out to be hard on bookworm. I’ll come back to that in a future post. For now I took the easy option and used the built in “Google Translate” option in Home Assistant. That needed an extra stanza in configuration.yaml that wasn’t entirely obvious:

media_source:

With this, and the ATOM voice satellite, I could now do basic voice control of my Home Assistant setup, with everything except the text-to-speech piece happening locally! Things such as “Hey Jarvis, turn on the study light” work out of the box. I haven’t yet got into defining my own phrases, partly because I know some of the things I want (“What time is it?”) are already added in later Home Assistant versions than the one I’m running.

Overall I found this initially complicated to setup given my self-imposed constraints about actually understanding the building blocks and compiling them myself, but I’ve been pretty impressed with the work that’s gone into it all. Next step, running a voice satellite on a Debian box.

Restaurer la géolocalisation sous Linux après l’arrêt du service Mozilla

Posted by Guillaume Kulakowski on 2025-04-29 06:44:30 UTC

I had noticed several times that geolocation on my personal PC no longer worked...
Today, with five minutes to spare, I finally decided to dig into it. After some quick research, I discovered that as of June 12, 2024, Mozilla's geolocation service is simply no longer available.
Let's see how to replace it!

The article Restaurer la géolocalisation sous Linux après l’arrêt du service Mozilla first appeared on Guillaume Kulakowski's blog.

MxOS: soberanía tecnológica en marcha

Posted by Rénich Bon Ćirić on 2025-04-29 06:00:00 UTC

The proposal that Rubert Riemann made in Brussels, Belgium, strikes me as excellent. His initiative, EU OS, is very valuable and has great potential. It is not yet an official European Union initiative, but that is the intention: for the EU to adopt it.

There is a lot of interest in Europe in this kind of initiative. Some may fall into anarchy or apathy in the face of it. But honestly, that is not the way. On the contrary, we can collaborate in meaningful ways to strengthen our technological sovereignty without having to isolate ourselves completely; rather, by contributing to the existing ecosystem and building on it for our own benefit and that of others.

Opportunity

This initiative is very valuable from a technological perspective. It is essential that we learn to do things ourselves, and that we know how to create, modify, maintain, and distribute our own operating system.

Getting started today is very accessible. We have a great many tools that let us reuse what has already been built. Abundant documentation is available. Everything needed exists so that even a single person can start working on this in some capacity.

But this should not be a solitary project. It should be a national project. It requires collaboration, funding, infrastructure, documentation in Spanish, training, and active promotion.

It would let us collaborate across nations by sharing patches, packaging, and development. It puts more eyes on the code to detect vulnerabilities or abuse. It creates fertile ground where we Mexicans can sow and grow our own technology.

I believe this initiative should also be promoted in Mexico. We should explore how to collaborate with other proposals and take full advantage of the true potential of free software, which lies in:

  • Being able to reuse what already exists so we do not have to start from scratch.
  • Learning from what others have done in order to develop our own solutions.
  • Sharing our work so that others can benefit from it.
  • Taking advantage of the contributions that others make to the ecosystem.

That is feasible today. Right now. With the resources and knowledge available, we can enter that virtuous circle of technological development and adoption. We only need the push.

Around all of this there is a business ecosystem. There are also opportunities for those who look for them. It is, to a large extent, a matter of knowledge and of the will to build it. A Mexican GNU/Linux distribution, well supported and with free software adapted for the business sector, government, and the community at large, would be immensely beneficial for the country.

My perspective after two years as a research and teaching assistant at FIDIT

Posted by Vedran Miletić on 2025-04-28 11:15:13 UTC




Photo source: Darran Shen (@darranshen) | Unsplash


My employment as a research and teaching assistant at the Faculty of Informatics and Digital Technologies (FIDIT for short), University of Rijeka (UniRi), ended last month with the expiration of the time-limited contract I had. This marked almost two full years spent at the institution, and I think this is a good time to look back at everything that happened during that time. Inspired by the recent posts by the PI of my group, I decided to write my perspective on a period that I hope is just the beginning of my academic career.