
Fedora People

Matrix server maintenance

Posted by Fedora Infrastructure Status on 2026-03-26 11:50:00 UTC

Element Matrix Services is performing scheduled maintenance on our matrix server (fedora.im).

Affected Services:

  • chat.fedoraproject.org
  • fedora.im
  • matrix services

Updates and reboots on Fedora infrastructure

Posted by Fedora Infrastructure Status on 2026-03-25 22:00:00 UTC

Fedora Infrastructure team will be applying updates to servers and rebooting them.

Many services will be affected. Most should only be down for a short time as their particular resources are rebooted; however, some may be down for a non-trivial amount of time due to RHEL 9 to RHEL 10 upgrades.

Compiling syslog-ng on an old Mac

Posted by Peter Czanik on 2026-03-25 14:43:49 UTC

I have an aging but fully functional MacBook. I bought it for syslog-ng testing, but I also use it for watching movies. Homebrew no longer fully supports old, Intel-based Macs. This blog shows how to compile the latest syslog-ng release on these old, but otherwise functional machines.

Read more at https://www.syslog-ng.com/community/b/blog/posts/compiling-syslog-ng-on-an-old-mac


My new toy: first steps with AI on Linux

Posted by Peter Czanik on 2026-03-25 11:48:34 UTC

Ever since I bought my AI mini workstation from HP, my goal was to run hardware accelerated artificial intelligence workloads in a Linux environment. Read more to learn how things turned out on Ubuntu and Fedora!

I have been using various AI tools for a while now: generating pictures of impossible situations, like a dinosaur climbing the Hungarian parliament building, finding information where a simple web search is useless, or having syslog-ng code explained to me. All of these are nice, sometimes even useful; however, I prefer to know what is behind the magic. Well, at least part of it :-) I want a bottom-up view of the various components and processes, and to get my hands dirty. Hopefully this miniature but powerful box will help me get to know AI better.

AI in a miniature box :-)

Testing AI on Ubuntu

As mentioned in my blog about installing Ubuntu, the 24.04 LTS installer did not work on this machine. I found a nice tutorial about AI on the Ryzen AI Max+ 395 that mentioned using 25.10, so I installed that version instead of the LTS. It installed without any trouble, and 3D graphics worked out of the box.

However, AI is a different story. ROCm, AMD's stack for hardware-accelerated AI workloads, is only packaged for Ubuntu LTS releases. The workaround described in the tutorial was to use distrobox. Unfortunately, the steps described in the tutorial did not work for me: containerization brought various problems with permissions, software availability, and so on. Most likely an experienced distrobox user could resolve these. In my case, after reading the distrobox documentation for hours, I just gave up.

Getting started with hardware accelerated AI on Fedora

Next, I turned to Fedora Linux 43. The wiki page of the Fedora Heterogeneous Computing Special Interest Group proved to be a good starting point: Fedora has ROCm packaged as part of the distro, and the wiki page gives clear instructions on how to get started.

Once I had set up user rights and installed the necessary packages, I was able to get some information about my hardware. You can see the output of rocminfo and rocm-clinfo at the bottom of this blog. I did not want to shorten those, but given the many lines of output, I was not sure anyone would read the rest of my blog if I included them here :-)
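For the record, my setup boiled down to something like the following. This is a hedged sketch: the exact group and package names come from the SIG wiki and may differ between Fedora releases, so check the wiki before copying these commands.

```shell
# Allow your user to talk to the GPU devices
# (log out and back in for the new groups to take effect)
sudo usermod -a -G video,render "$USER"

# Install the ROCm diagnostic tools used below
sudo dnf install rocminfo rocm-clinfo
```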

Testing with llama

Of course, seeing information about the hardware is nice, but it’s even better to see it in action. The Ubuntu ROCm tutorial mentioned llama, so I started with that. Luckily Fedora includes it as a ready-to-install package, so I did not have to compile it from source. I also installed huggingface-hub, likewise from a package:

dnf install python3-huggingface-hub llama-cpp

This allowed me to download the model mentioned in the tutorial and ask the downloaded LLM a few questions. For now I just used the sample command line, but based on the output, llama found the hardware and used it. Next up: learning more about the available models.
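For reference, a model file like the one used below can be fetched with the huggingface-cli tool from the python3-huggingface-hub package. The repository name here is an assumption based on the file name; substitute whichever repository your tutorial points at.

```shell
# Download the quantized GGUF model into ~/models
# (TheBloke/Llama-2-7B-GGUF is an assumed repository name matching the file)
huggingface-cli download TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf \
    --local-dir ~/models
```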

You can find the output of the following command at the end of this blog:

llama-cli   -m ~/models/llama-2-7b.Q4_K_M.gguf   --no-mmap   -ngl 99   -p "Explain quantum computing in simple terms:"   -n 256

Testing with pytorch

When I mentioned to a friend that hardware-accelerated AI seemed to work on my Linux box, he suggested trying it with PyTorch. Luckily this was also available as a ready-to-install package for Fedora:

dnf install python3-torch

I was quite surprised, as the above command installed 8 GB worth of RPM packages (texlive accounting for a good part of it). I do not know much about PyTorch, but I did a quick test anyway. Here is the really complex Python code I built based on the documentation:

import torch

# Create a random 5x3 tensor
x = torch.rand(5, 3)
print(x)

# ROCm builds of PyTorch report the AMD GPU through the torch.cuda API
print('Is hw AI accel available')
print(torch.cuda.is_available())

And here is the output from the above code:

tensor([[0.1034, 0.0183, 0.1233],
        [0.1787, 0.0097, 0.8426],
        [0.2872, 0.6351, 0.8468],
        [0.8226, 0.2991, 0.8539],
        [0.2061, 0.6422, 0.8146]])
Is hw AI accel available
True

It’s simple, but looks promising :-)
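A slightly extended sketch of the same idea: pick whatever accelerator PyTorch can see and run a small matrix multiplication on it. ROCm builds of PyTorch reuse the torch.cuda API, which is why torch.cuda.is_available() reported True above; this sketch falls back to the CPU when no GPU is present.

```python
import torch

# ROCm builds of PyTorch expose the AMD GPU through the torch.cuda API,
# so this check covers both NVIDIA and AMD hardware acceleration.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small matrix multiplication on the chosen device
a = torch.rand(512, 512, device=device)
b = torch.rand(512, 512, device=device)
c = a @ b

print(f"device: {device}, result shape: {tuple(c.shape)}")
```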

Outputs

Output of rocminfo and rocm-clinfo

czanik@fedora:~$ rocminfo 
ROCk module is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.1
Runtime Ext Version:     1.7
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
XNACK enabled:           NO
DMAbuf Support:          YES
VMM Support:             YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
  Cache Info:              
    L1:                      49152(0xc000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   5187                               
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            32                                 
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:1                                  
  Memory Properties:       
  Features:                None
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 4                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx1151                            
  Uuid:                    GPU-XX                             
  Marketing Name:          Radeon 8060S Graphics              
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      32(0x20) KB                        
    L2:                      2048(0x800) KB                     
    L3:                      32768(0x8000) KB                   
  Chip ID:                 5510(0x1586)                       
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          128(0x80)                          
  Max Clock Freq. (MHz):   2900                               
  BDFID:                   50432                              
  Internal Node ID:        1                                  
  Compute Unit:            40                                 
  SIMDs per CU:            2                                  
  Shader Engines:          2                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Memory Properties:       APU
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 34                                 
  SDMA engine uCode::      18                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65568416(0x3e87ea0) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    65568416(0x3e87ea0) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1151         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
    ISA 2                    
      Name:                    amdgcn-amd-amdhsa--gfx11-generic   
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*******                  
Agent 3                  
*******                  
  Name:                    aie2                               
  Uuid:                    AIE-XX                             
  Marketing Name:          AIE-ML                             
  Vendor Name:             AMD                                
  Feature:                 AGENT_DISPATCH                     
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        1(0x1)                             
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          64(0x40)                           
  Queue Type:              SINGLE                             
  Node:                    0                                  
  Device Type:             DSP                                
  Cache Info:              
    L2:                      2048(0x800) KB                     
    L3:                      32768(0x8000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          0(0x0)                             
  Max Clock Freq. (MHz):   0                                  
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            0                                  
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:0                                  
  Memory Properties:       
  Features:                AGENT_DISPATCH
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: KERNARG, COARSE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65536(0x10000) KB                  
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*** Done ***             

and

czanik@fedora:~$ rocm-clinfo 
Number of platforms:				 1
  Platform Profile:				 FULL_PROFILE
  Platform Version:				 OpenCL 2.1 AMD-APP (3649.0)
  Platform Name:				 AMD Accelerated Parallel Processing
  Platform Vendor:				 Advanced Micro Devices, Inc.
  Platform Extensions:				 cl_khr_icd cl_amd_event_callback 


  Platform Name:				 AMD Accelerated Parallel Processing
Number of devices:				 1
  Device Type:					 CL_DEVICE_TYPE_GPU
  Vendor ID:					 1002h
  Board name:					 Radeon 8060S Graphics
  Device Topology:				 PCI[ B#197, D#0, F#0 ]
  Max compute units:				 20
  Max work items dimensions:			 3
    Max work items[0]:				 1024
    Max work items[1]:				 1024
    Max work items[2]:				 1024
  Max work group size:				 256
  Preferred vector width char:			 4
  Preferred vector width short:			 2
  Preferred vector width int:			 1
  Preferred vector width long:			 1
  Preferred vector width float:			 1
  Preferred vector width double:		 1
  Native vector width char:			 4
  Native vector width short:			 2
  Native vector width int:			 1
  Native vector width long:			 1
  Native vector width float:			 1
  Native vector width double:			 1
  Max clock frequency:				 2900Mhz
  Address bits:					 64
  Max memory allocation:			 57070749280
  Image support:				 Yes
  Max number of images read arguments:		 128
  Max number of images write arguments:		 8
  Max image 2D width:				 16384
  Max image 2D height:				 16384
  Max image 3D width:				 16384
  Max image 3D height:				 16384
  Max image 3D depth:				 8192
  Max samplers within kernel:			 16
  Max size of kernel argument:			 1024
  Alignment (bits) of base address:		 2048
  Minimum alignment (bytes) for any datatype:	 128
  Single precision floating point capability
    Denorms:					 Yes
    Quiet NaNs:					 Yes
    Round to nearest even:			 Yes
    Round to zero:				 Yes
    Round to +ve and infinity:			 Yes
    IEEE754-2008 fused multiply-add:		 Yes
  Cache type:					 Read/Write
  Cache line size:				 128
  Cache size:					 32768
  Global memory size:				 67142057984
  Constant buffer size:				 57070749280
  Max number of constant args:			 8
  Local memory type:				 Local
  Local memory size:				 65536
  Max pipe arguments:				 16
  Max pipe active reservations:			 16
  Max pipe packet size:				 1236174432
  Max global variable size:			 57070749280
  Max global variable preferred total size:	 67142057984
  Max read/write image args:			 64
  Max on device events:				 1024
  Queue on device max size:			 8388608
  Max on device queues:				 1
  Queue on device preferred size:		 262144
  SVM capabilities:				 
    Coarse grain buffer:			 Yes
    Fine grain buffer:				 Yes
    Fine grain system:				 No
    Atomics:					 No
  Preferred platform atomic alignment:		 0
  Preferred global atomic alignment:		 0
  Preferred local atomic alignment:		 0
  Kernel Preferred work group size multiple:	 32
  Error correction support:			 0
  Unified memory for Host and Device:		 1
  Profiling timer resolution:			 1
  Device endianess:				 Little
  Available:					 Yes
  Compiler available:				 Yes
  Execution capabilities:				 
    Execute OpenCL kernels:			 Yes
    Execute native function:			 No
  Queue on Host properties:				 
    Out-of-Order:				 No
    Profiling :					 Yes
  Queue on Device properties:				 
    Out-of-Order:				 Yes
    Profiling :					 Yes
  Platform ID:					 0x7ffb97d11d80
  Name:						 gfx1151
  Vendor:					 Advanced Micro Devices, Inc.
  Device OpenCL C version:			 OpenCL C 2.0 
  Driver version:				 3649.0 (HSA1.1,LC)
  Profile:					 FULL_PROFILE
  Version:					 OpenCL 2.0 
  Extensions:					 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program 

Output from llama

root@fedora:~# llama-cli   -m ~/models/llama-2-7b.Q4_K_M.gguf   --no-mmap   -ngl 99   -p "Explain quantum computing in simple terms:"   -n 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
build: 0 (unknown) with HIP version: 6.4.43484-9999 for x86_64-redhat-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device ROCm0 (Radeon 8060S Graphics) - 64031 MiB free
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /root/models/llama-2-7b.Q4_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V2
print_info: file type   = Q4_K - Medium
print_info: file size   = 3.80 GiB (4.84 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 4096
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 32
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 4096
print_info: n_embd_v_gqa     = 4096
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 11008
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: model type       = 7B
print_info: model params     = 6.74 B
print_info: general.name     = LLaMA v2
print_info: vocab type       = SPM
print_info: n_vocab          = 32000
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: LF token         = 13 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        ROCm0 model buffer size =  3820.94 MiB
load_tensors:          CPU model buffer size =    70.31 MiB
..................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context:  ROCm_Host  output buffer size =     0.12 MiB
llama_kv_cache_unified:      ROCm0 KV buffer size =  2048.00 MiB
llama_kv_cache_unified: size = 2048.00 MiB (  4096 cells,  32 layers,  1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context:      ROCm0 compute buffer size =   288.00 MiB
llama_context:  ROCm_Host compute buffer size =    16.01 MiB
llama_context: graph nodes  = 1158
llama_context: graph splits = 2
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 16

system_info: n_threads = 16 (n_threads_batch = 16) / 32 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | REPACK = 1 | 

sampler seed: 2232334333
sampler params: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
	top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist 
generate: n_ctx = 4096, n_batch = 2048, n_predict = 256, n_keep = 1

 Explain quantum computing in simple terms: what is it, how does it work, and what are its potential benefits?
This is a difficult question to answer because quantum computing is not yet a well-defined field of study, and many of the potential applications are still being researched. However, we can say that quantum computing is a type of computation that relies on the principles of quantum mechanics (the branch of physics that describes the behaviour of particles such as electrons and photons).
These particles obey a set of rules that are different from those obeyed by classical computers, which rely on the principles of classical mechanics. Quantum computing uses a particle’s quantum state (such as its spin) to store information. This means that quantum computers can perform computations that are not possible on classical computers.
In the simplest terms, quantum computing is a type of computation that takes advantage of the unique properties of quantum mechanics. These properties include superposition, entanglement, and non-locality. Superposition is the ability of a quantum system to exist in multiple states simultaneously.
This means that a quantum system can be in two different places at the same time, or have two different properties at the same time. Entanglement is the ability of two quantum systems to be inter

llama_perf_sampler_print:    sampling time =       4.27 ms /   265 runs   (    0.02 ms per token, 62075.43 tokens per second)
llama_perf_context_print:        load time =     631.46 ms
llama_perf_context_print: prompt eval time =      63.57 ms /     9 tokens (    7.06 ms per token,   141.57 tokens per second)
llama_perf_context_print:        eval time =    7110.09 ms /   255 runs   (   27.88 ms per token,    35.86 tokens per second)
llama_perf_context_print:       total time =    7184.25 ms /   264 tokens

Closing words

These are just my first steps. Most of the time I was not even fully aware of what I was doing; I just reused some sample command lines and code. Still, these experiments were good enough to see that AI works on Linux as well, not just on Windows.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

OSPO Notes: How to find your community

Posted by Chris Short on 2026-03-25 04:00:00 UTC
How to find your open source community beyond git commits using repo data, GitHub Insights, GrimoireLab, and social listening to discover who's really out there.

The forge is our new home.

Posted by Fedora Community Blog on 2026-03-24 16:03:24 UTC

After a full year of preparation, the Community Linux Engineering (CLE) team is excited to announce that Fedora Forge, powered by Forgejo, is ready for use! We are proud of this modern Open Source platform and what it means for the future of Fedora Infrastructure. While pagure.io has been a vital part of our community for many years, the time has come to retire our homegrown forge and transition to this powerful new tool.

The final cutover is planned for Flock to Fedora 2026. We strongly encourage teams to migrate their projects well before the conference to ensure a smooth transition. The pagure.io migration is only the first step in a broader infrastructure modernization effort. By the 2027 Fedora 46 release, we plan to retire all remaining Pagure instances across the project, including the package source repositories on src.fedoraproject.org. Getting familiar with Fedora Forge now will help ensure your team is ready as the rest of the Fedora ecosystem transitions.

pagure.io users, it is time to migrate!

If you own a project at pagure.io, you must migrate out of it before June 2026. We’ve prepared a Migration Guide. If you’re unsure about what’s happening, please keep reading.

A Focused Scope for Fedora Forge

Historically, the Fedora Project utilized pagure.io, which operated as a general-use public forge where Fedora repositories coexisted alongside personal projects, unrelated upstream software, and individual portfolios.

The Fedora Forge (powered by Forgejo) intentionally adopts a narrower scope. It is an internal piece of project infrastructure, explicitly provisioned to host the code, documentation, and tooling that directly build, manage, and govern the Fedora Project.

What belongs on Fedora Forge:

  • Infrastructure and Operations: Configuration management, deployment scripts, or tooling used by the Fedora Infrastructure team.
  • Release Engineering and Packaging: Tools, scripts, and templates used to build, compose, and distribute Fedora releases.
  • Governance and Team Organization: Trackers, documentation, and collaborative spaces for official Fedora Teams, Special Interest Groups (SIGs), and Working Groups.
  • Fedora-Specific Software: Software projects conceptualized and developed primarily to serve the Fedora community (e.g., Fedora Badges, Bodhi, fedmsg).

What does NOT belong:

  • Personal Projects: Personal portfolios, dotfiles, or hobby projects not directly tied to Fedora are prohibited.
  • General Upstream Development: If you are developing a general-purpose open-source application, its primary upstream development should be hosted elsewhere (e.g., GitHub, GitLab, Codeberg), even if it is packaged in Fedora. (Note: Foundational ecosystem tools like Koji or FreeIPA may qualify for exceptions via a ticket request).

Why Migrate Early?

Migrating now avoids the “last-minute bottleneck” and gives your team time to adapt to the new resource limits outlined in the Usage Policy:

  • Rewrite Automation: Refactor scripts and webhooks to use the new Forgejo API. Automated tools must respect rate limits and include descriptive user-agent strings.
  • Test Your CI/CD: Native Forgejo Actions are available, but runners are a shared community resource. Builds are subject to a maximum timeout of 10 minutes per job.
  • Clean Up Repositories: Repositories should ideally remain under 500MB. If your project requires large assets, you must use Git LFS (Large File Storage).

Feature Parity & Transparency

We are aware that Forgejo is not a 1:1 clone of Pagure. Most notably, private issues within public repositories are not currently supported in the same way. The CLE team is actively working with the upstream Forgejo community to bridge these functional gaps.

The Migration Roadmap

  • Now – Pre-Flock: Proactive migrations. Please note that the Infrastructure team reserves the right to automatically archive repositories that have seen no activity for 6 months.
  • Flock 2026: The final cutover.
  • Post-Flock: pagure.io becomes a static, read-only historical archive.

The Fedora Council currently has a draft usage policy under consideration, aimed at filling in the details of usage of the new forge instances inside the Fedora Project. Please watch for an additional article here on the Fedora Community Blog that starts the formal feedback process ahead of a Council vote on the policy.

Need help? For technical issues, please open a ticket on the Fedora Infrastructure Tracker or ask in the #fedora-admin Matrix channel.

Technical FAQ

How do authentication and team management work?
Authentication is fully integrated with the Fedora Account System (FAS) via OIDC. Team membership is directly mapped to FAS groups; if you are in a group, your permissions will automatically map to the corresponding Organization/Team on the Forge.

What happens to my API tokens and automation scripts?
Pagure API tokens will not migrate. You must generate new tokens within your account or organization settings on the new Forge and update your scripts to point to the Forgejo API.
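When updating automation, the new token is passed in an Authorization header. Here is a hedged Python sketch for verifying a token: the instance URL and token value are placeholders, and while /api/v1/user is the standard Forgejo endpoint for the current authenticated user, check your instance’s API documentation to be sure:

```python
import urllib.request

FORGE = "https://forge.fedoraproject.org"  # placeholder instance URL
TOKEN = "REPLACE_WITH_YOUR_TOKEN"          # generated in your Forge settings

req = urllib.request.Request(
    f"{FORGE}/api/v1/user",  # Forgejo "current authenticated user" endpoint
    headers={
        "Authorization": f"token {TOKEN}",
        # The usage policy asks automated tools to send a descriptive user agent.
        "User-Agent": "example-migration-check/1.0",
    },
)
# Uncomment to actually query the API once you have a real token:
# import json
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["login"])
```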

Will my local git remote URLs break?
Yes. Once your repository is migrated, pushes to pagure.io will be rejected. Update your remotes to the new instance:

git remote set-url origin https://forge.fedoraproject.org/<organization>/<your-project>.git

Are Issues and PRs migrating with full fidelity?
Yes. As outlined in the documentation, our tools port Pull Requests, Issues, and Issue Dependencies/Assignments. Pagure-specific tags will be mapped to Forgejo Labels.

Where do I go if my project’s migration fails?
The CLE team is monitoring the #fedora-forge Matrix channel. Reach out there for help with permission desyncs, missing refs, or pipeline breakages.

The post The forge is our new home. appeared first on Fedora Community Blog.

Reflecting on “Warranty Void If Regenerated”

Posted by Brian (bex) Exelbierd on 2026-03-23 10:50:00 UTC

I’ve seen “Warranty Void if Regenerated” going around, particularly among the subset of my friends who believe “LLMs are slop generators”. They typically characterize it as overly optimistic - hopeful, if not downright fantasy.

The “slop generator” position is, in my opinion, demonstrably false, as countless successful code generation outcomes contradict such a sweeping generalization. The dogged pursuit of this position clouds the issue of the real concerns with LLMs as built and used today. I believe there are legitimate company ethics, environmental, and license/copyright concerns worthy of consideration in this space. I also believe that we are still in a highly emotional place where those concerns tend to be both understated and overstated depending on who is talking.

The story consists of three vignettes told from the perspective of Tom, a post-transition specification repair person who works with farmers. In this universe, all code is generated from specs and average humans are making custom software constantly. Domain experts are needed to refine, debug, and in some cases wholesale write the specifications.

There is also a great discussion of the human impact of this post-transition existence. I encourage you to read it, but I’m not addressing that below - not because it isn’t important, but because I want to preserve focus on the “slop generator” drumbeat that feels so misguided.

All in all, I think the piece is well written and that Scott Werner did a great job. This isn’t a critique of the writing or the story itself. I also don’t know what Scott’s perspective is on LLMs, though their public pages and site lead me to believe they are not anti-generative AI.

I’d been harboring a delusion in the back of my mind about trying to write a story about a “machine whisperer”. Scott’s piece reminded me that I am likely still not a creative writer, and I’m glad for their work here.

My thesis here is simple: this story reads like a set of specification and contract failures. It does not read like evidence that code generation inherently produces “slop” or that opaque code from code generation is inherently a failed concept. To be clear here, this is not a critique of Scott’s view, but instead of the “slop generator” view point.

Margaret

Margaret has generated software that pulls in various data sets, from both their farm and external sources, to predict the best time to harvest. Their latest crop was harvested before it should have been, and Tom realizes that the specification failed to include a requirement that it raise an error if a data source’s structure or methodology changed. Instead, the system absorbed the data from an updated methodology and didn’t change how it used that data.

This is shown to be a specification problem. The spec as written didn’t suggest that changes were possible or that they should be monitored for, so the generated system didn’t do that.

While this happens, I suspect, with regularity in hand-coded systems, my point isn’t that this is normal. When it happens in a hand-coded system, it is wrong too. And, importantly, it is also a specification error.

There may never have been a specification in the first place and the developer was just expected to figure this out. Depending on their experience and other conditions, they either did … or they didn’t. A clearer spec or set of standards (a/k/a a system prompt) would have fixed this in both cases.

Pit Crew

Scott introduces pit crews in this anecdote. These are people who monitor ongoing quality and concerns.

Today we often approximate this with monitoring systems that we hope are checking the right things, perhaps even with real end-to-end live tests running on a regular basis. We don’t generally dedicate human teams to it.

Whether we ever hit post-transition or not, this begs for a conversation: is QE/QA solely a pre-ship function, or should we be leveraging that knowledge to monitor delivered software in ways that go deeper than what we typically monitor today? What does the SRE practice in this space look like?

Framed that way, the pit crew in the story is less a bandage for sloppy generated code and more the missing extension of our specifications and contracts into how we watch systems evolve over time.

Ethan

Ethan has generated a multitude of tools and they are all communicating with each other. Ethan is a microservice machine.

Ethan, much like Margaret, has a data feed problem. This time one of his own tools made a change in the methodology and calculated a value per-hundredweight instead of per-head. While not stated in the story, this unit for output was chosen at generation because it wasn’t in the specification and the specification also didn’t have a way (or likely even a requirement) to flag changes. The downstream tool didn’t get a read failure but began using this new data value as though it was still per-head. This resulted in poor market price prediction.

The story is similar to Margaret’s except it is more like when Team A breaks Team B in your own company.

For me it raises the interesting point that while we tend to believe otherwise, in many cases our APIs and data formats are our only true contracts. They operate only at the level where they exist. The internals of our dependencies, or the work of other teams, are opaque, and you could say that they may “regenerate” their code every day of the week and you just have to hope it still works for your consumption and use. You have to rely on them not breaking the contract and ensure the contract provides the guarantees you need.

Choreographer

A choreographer is a post-transition architect. It is, in my opinion, the thing we should all be if we are going to use LLMs to generate code.

Here a choreographer goes through Ethan’s systems and defines their interface contracts and layers. They also notice that some tools are unnecessary, while others have formed a sub-network that has no effect. The output of this person’s work is a cleaned up system that functions as a whole and not a set of discrete parts.

This is something we already have to do in large systems, and it’s something that people generating code still have to do. I suspect that some concepts like Gastown try to push parts of this work into a different layer of tooling. And it may even work.

LLM generation and reasoning capacity is getting higher, but none of this eliminates the need for this role or for specification correctness. This is something which we’ve basically never had. Even waterfall failed here.

In this sense, the story reads less like an indictment of generation and more like a warning about what happens when we refuse to name, own, and maintain those contracts across a growing system.

Carol

Carol’s farm illustrates the ugly mess of things we give automation and then complain about.

In this specific case there is a new irrigation system that uses all of the sensors it has to maintain a 60% moisture level across the farm. This results in under- and over-irrigation in some places, because the moisture level in those places is influenced by external factors. The system is doing exactly what it was asked to do. The problem is that the target it was given is a bad fit for the actual farm, not that the generated system is inherently bad.

Note: I am not a farmer, so I am taking this example at face value.

The short version is that drainage is funny in some places, other places are getting more wind, and still others need slightly differing levels based on the actual crop in that spot. None of this data has been provided to the system, and the story makes it clear that most of it is not in any system.

The farmer just understands their land and can look at it and tell you what is going to happen based on 30 years of real history and 30 years of experience. This is also not new. This is the art and practice of both coding and system administration, and we have failed to codify it usefully to date. We shouldn’t hold our new system accountable for that, but we also shouldn’t pretend that “just write a better spec” is an easy button when so much of the domain is still tacitly known and not shared beyond tribal means.

This is perhaps the one vignette that gives me pause. Even if we can find code generation (it doesn’t have to be LLMs) that writes to a specification, we may still be unsuccessful when our measurements, abstractions, and language can’t yet capture the thing we actually care about.

Right now we make surgical tweaks to the code to encode these lessons as we learn them. Specifying them in human language is often difficult, and maybe that is the core problem. The boundary here isn’t really “hand-written vs generated code”, it is between where, as technologists, we have experience stating precisely enough and where we don’t have a history of doing that well.

But we work in a precise space. In the case of Carol’s farm, Carol and Tom are able to describe the core problems pretty quickly, and I suspect, given time, could come up with data feeds, additional sensors, or equations that describe the issues sufficiently to fix the irrigation system.

It would be hyper-customized to Carol’s farm, but in many ways that is what she wants and needs - and it’s something we fail to deliver, in general, today. Even here, though, calling the outcome “slop” feels like a category error: the system is faithfully pursuing the narrow, naive target we gave it, not spewing random garbage.

The Real Conversation

I wrote this piece in part because the anti-LLM rhetoric of “they are slop generators” gets under my skin. There are a lot of valid reasons to be anti-LLM today. This is not one.

Reading the story reinforced that for me: what fails in these vignettes are specs, contracts, and incentives, not some inherent “slop” property of generated code. The story isn’t an indictment of generated code, it’s a parable about the timeless need for human wisdom, clear communication, and rigorous oversight, no matter how the code comes to be.

I’d like to see our LLM conversations stick closer to the concrete and demonstrably true. Let’s focus on what these systems do, where they fail, and how our specs and contracts are part of that story, instead of getting pulled into slogans like “slop generator” that, by being false, derail the conversation. This creates space for us to have the real conversations that matter around ethics, the environment, and training data usage.

ASN.1 for legacy apps: Synta

Posted by Alexander Bokovoy on 2026-03-23 08:33:00 UTC

Pretty much everything I deal with requires parsing ASN.1 encodings. ASN.1 definitions are published as part of internet RFCs: certificates are encoded using DER, LDAP exchanges use BER, and Kerberos packets use DER as well. ASN.1 handling is a never-ending source of security issues in pretty much all applications, so safer ASN.1 processing is important to any application developer.

In FreeIPA we are using three separate ASN.1 libraries: pyasn1 and x509 (part of PyCA) for Python code, and asn1c code generator for C code. In fact, we use more: LDAP server plugins also use OpenLDAP’s lber library, while Kerberos KDC plugins also use internal MIT Kerberos parsers.

The PyCA developers noted in their State of OpenSSL statement:

[…] when pyca/cryptography migrated X.509 certificate parsing from OpenSSL to our own Rust code, we got a 10x performance improvement relative to OpenSSL 3 (n.b., some of this improvement is attributable to advantages in our own code, but much is explainable by the OpenSSL 3 regressions). Later, moving public key parsing to our own Rust code made end-to-end X.509 path validation 60% faster — just improving key loading led to a 60% end-to-end improvement, that’s how extreme the overhead of key parsing in OpenSSL was.

That’s a 16× performance improvement over OpenSSL 3 in total. OpenSSL has improved its performance since then, but it still pays an overhead for a very flexible design that allows loading cryptographic implementations from dynamic modules (providers). Support for externally-provided modules is essential for adding new primitives and for government-enforced standards (such as FIPS 140), where implementations have to be validated in advance and code changes cannot happen without an expensive and slow re-validation process.

Nevertheless, in FreeIPA we focus on integrating with Linux distributions. Fedora, CentOS Stream, and RHEL enforce crypto consolidation rules, where all packaged applications must use the same crypto primitives provided by the operating system. We can process metadata ourselves, but all cryptographic operations still have to go through OpenSSL and NSS. And paying large performance costs during metadata processing would hurt infrastructure components such as FreeIPA.

FreeIPA is a large beast. Aside from its management component, written in Python, it has more than a dozen plugins for the 389-ds LDAP server, plugins for the MIT Kerberos KDC, plugins for Samba, and tight integration with SSSD, all written in C. Its default certificate authority software, Dogtag PKI, is written in Java and relies on its own stack of Java and C dependencies. We use PyCA’s x509 module for certificate processing in Python code, but we cannot use it or its underlying ASN.1 libraries from C: those libraries either aren’t exposed to C applications or are intentionally limited in their functionality to PKI-related tasks.

For 2026-2028, I’m focusing on enabling FreeIPA to handle post-quantum cryptography (PQC), as part of the Quantum-Resistant Cryptography in Practice (QARC) project. The project is funded by the European Union under the Horizon Europe framework programme (Grant Agreement No. 101225691) and supported by the European Cybersecurity Competence Centre. One of the well-publicized aspects of moving to PQC certificates is their size. The following table (table 5 from the Post-Quantum Cryptography for Engineers IETF draft) summarizes it well:

PQ Security Level Algorithm Public key size (bytes) Private key size (bytes) Signature size (bytes)
Traditional RSA2048 256 256 256
Traditional ECDSA-P256 64 32 64
1 FN-DSA-512 897 1281 666
2 ML-DSA-44 1312 2560 2420
3 ML-DSA-65 1952 4032 3309
5 FN-DSA-1024 1793 2305 1280
5 ML-DSA-87 2592 4896 4627

Public keys for ML-DSA-65 certificates are 7.6× bigger than RSA-2048 ones. You need to handle public keys in multiple situations: when verifying certificates against known certificate authorities (CAs), when matching their properties for validation and identity derivation during authorization, and when storing them. FreeIPA uses LDAP as a backend, so storing 7.6 times more data directly affects your scalability as the number of users or machines (or Kerberos services) grows. And since certificates are all ASN.1 encoded, I naturally wanted to establish a performance baseline for ASN.1 parsing.
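The size ratios quoted here fall straight out of the table; a quick sanity check in Python (sizes copied from the IETF draft table above):

```python
# Public key sizes in bytes, copied from the table above.
pubkey_bytes = {
    "RSA2048": 256,
    "ML-DSA-44": 1312,
    "ML-DSA-65": 1952,
    "ML-DSA-87": 2592,
}

for alg in ("ML-DSA-44", "ML-DSA-65", "ML-DSA-87"):
    ratio = pubkey_bytes[alg] / pubkey_bytes["RSA2048"]
    print(f"{alg}: {ratio:.1f}x the RSA-2048 public key size")
# ML-DSA-65 comes out to about 7.6x, the figure used above.
```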

Synta, ASN.1 library

I started with a small task: I created a Rust library, synta, to decode and encode ASN.1 with the help of AI tooling. It quickly grew to have its own ASN.1 schema parser and code generation tool. With those in place, I started generating more code, this time to process X.509 certificates, handle Kerberos packet structures, and so on. Throwing different tasks at Claude Code led to iterative improvements. Over a couple of months the project grew to more than 60K lines of Rust code.

Language files blank comment code
Rust 207 9993 17492 67284
Markdown 52 5619 153 18059
Python 41 2383 2742 7679
C 17 852 889 4333
Bourne Shell 8 319 482 1640
C/C++ Header 4 319 1957 1138
TOML 20 196 97 896
YAML 1 20 46 561
make 4 166 256 493
CMake 3 36 25 150
JSON 6 0 0 38
diff 1 6 13 29
SUM 364 19909 24152 102300

I published some of the synta crates on crates.io yesterday; the whole project is available at codeberg.org/abbra/synta. In total, there are 11 crates, though only seven are published (synta-python is also available on PyPI):

Crate Lines (src/ only)
synta 10572
synta-derive 2549
synta-codegen 17578
synta-certificate 4549
synta-python 8953
synta-ffi 7843
synta-krb5 2765
synta-mtc 7876
synta-tools 707
synta-bench 0
synta-fuzz 3551

The benchmarking, fuzzer, and tools crates aren’t published; they are only needed for development purposes.

Performance

The numbers below were obtained on a Lenovo ThinkPad P1 Gen 5 (12th Gen Intel(R) Core(TM) i7-12800H, 64 GB RAM) running Fedora 42. This is 3-4 year old hardware.

Benchmarking is what brought this project to life, so let’s look at the numbers. When dealing with certificates, ASN.1 encoding can be parsed in different ways: you can visit every structure, or stop at the outer shells and only visit the remaining nested structures when you really need them. The former is “parse+fields” and the latter is “parse-only” in the following table, which compares synta with various Rust crates (and with OpenSSL/NSS, which were accessible through their Rust FFI bindings):

Library Parse-only Parse+fields vs synta (parse-only) vs synta (parse+fields)
synta 0.48 µs 1.32 µs
cryptography-x509 1.45 µs 1.43 µs 3.0× slower 1.1× slower
x509-parser 2.01 µs 1.99 µs 4.2× slower 1.5× slower
x509-cert 3.16 µs 3.15 µs 6.6× slower 2.4× slower
NSS 7.90 µs 7.99 µs 16× slower 6.1× slower
rust-openssl 15.4 µs 15.1 µs 32× slower 11× slower
ossl 16.1 µs 15.8 µs 33× slower 12× slower

“Parse+fields” tests access every named field: serial number, issuer/subject DNs, signature algorithm OID, signature bytes, validity period, public key algorithm OID, public key bytes, and version. The “parse+fields” speedup is the fair end-to-end comparison: synta’s parse-only advantage is large because most fields are stored as zero-copy slices deferred until access, while other libraries must materialise all fields eagerly at parse time.

The dominant cost in X.509 parsing is Distinguished Name traversal: a certificate’s issuer and subject each contain a SEQUENCE OF SET OF SEQUENCE with per-attribute OID lookup. synta defers this entirely by storing the Name as a RawDer<'a> — a pointer+length into the original input with no decoding. cryptography-x509 takes a similar deferred approach. The nom-based and RustCrypto libraries decode Names eagerly. NSS goes further and formats them into C strings, which is the dominant fraction of its 16× parse overhead.

For benchmarking I used certificates from the PyCA test vectors. There are a few certificates with different properties, so we parse each of them multiple times and then average the numbers:

Certificate synta cryptography-x509 x509-parser x509-cert NSS
cert_00 (NoPolicies) 1333.7 ns 1386.7 ns 1815.9 ns 2990.6 ns 7940.3 ns
cert_01 (SamePolicies-1) 1348.8 ns 1441.0 ns 2033.4 ns 3174.3 ns 7963.8 ns
cert_02 (SamePolicies-2) 1338.6 ns 1440.1 ns 2120.1 ns 3205.6 ns 8206.8 ns
cert_03 (anyPolicy) 1362.4 ns 1468.3 ns 2006.2 ns 3194.5 ns 7902.4 ns
cert_04 (AnyPolicyEE) 1232.9 ns 1424.7 ns 1968.6 ns 3168.1 ns 7913.1 ns
Average 1323 ns 1432 ns 1989 ns 3147 ns 7985 ns

The gap between synta (1.32 µs) and cryptography-x509 (1.43 µs) is tighter here than in parse-only (3.0×) because synta’s field access includes two format_dn() calls (~800 ns combined) that cryptography-x509 gets effectively for free (its offsets were computed at parse time). synta leads by ~8% overall.

Now, when parsing PQC certificates, an interesting thing happens: it is faster to parse ML-DSA certificates than traditional ones.

Certificate synta cryptography-x509 x509-parser x509-cert NSS
ML-DSA-44 1030.9 ns 1256.4 ns 1732.2 ns 2666.0 ns 7286.9 ns
ML-DSA-65 1124.9 ns 1237.5 ns 1690.5 ns 2664.2 ns 7222.1 ns
ML-DSA-87 1102.6 ns 1226.5 ns 1727.2 ns 2696.6 ns 7284.6 ns
Average 1086 ns 1240 ns 1717 ns 2675 ns 7265 ns

synta’s ML-DSA parse+fields (1.09 µs) is faster than its traditional parse+fields (1.32 µs) because ML-DSA test certificates have shorter Distinguished Names (one attribute each in issuer and subject vs multiple attributes in traditional certificates in the test above). The signature BIT STRING — which is 2,420–4,627 bytes for ML-DSA — is accessed as a zero-copy slice with no size-dependent cost.

Processing CA databases

Imagine your app needs to check whether the certificate presented by a client is known to you (e.g. belongs to a trusted CA set). A library like OpenSSL looks at the client’s certificate, extracts identifiers of the certificate issuer, and looks up whether that issuer is known in the CA database. That requires looking up properties of the certificates in the database. The faster we can do that, the better.
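A minimal sketch of that lookup, assuming (hypothetically) that each parsed certificate exposes its raw issuer/subject DN bytes the way a zero-copy parser would; the record layout and names here are illustrative, not any library’s API:

```python
# Hypothetical certificate records; a real parser would expose raw DER DN bytes.
trusted_cas = [
    {"subject_raw": b"CN=Example Root CA", "name": "Example Root CA"},
    {"subject_raw": b"CN=Other Root CA", "name": "Other Root CA"},
]
client_cert = {"issuer_raw": b"CN=Example Root CA"}

# Index the CA database once by raw subject DN; each client lookup is then O(1).
ca_index = {ca["subject_raw"]: ca for ca in trusted_cas}
issuer = ca_index.get(client_cert["issuer_raw"])
assert issuer is not None and issuer["name"] == "Example Root CA"
```

Real code would key on the full DER-encoded issuer/subject bytes (and possibly key identifiers); the point is that cheap raw-bytes access makes building and querying such an index fast.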

All those numbers in the previous section are for a single certificate being parsed millions of times. In a real app we often need to validate the certificate against a system-wide database of certificate authorities. The database used by Fedora and other Linux distributions comes from Firefox. It contains 180 self-signed root CA certificates for all public CAs with diverse key types (RSA 2048/4096, ECDSA P-256/P-384) and DN structures. The median cert by DER size is “Entrust.net Premium 2048 Secure Server CA” (1,070 bytes); the benchmark uses this cert for single-certificate and field-access sub-benchmarks to get stable results that are not sensitive to certificate-size outliers.

Another dataset I benchmarked against is 9,898 certificates from the Common CA Database (CCADB), covering the full multi-level hierarchy used by Mozilla, Chrome, Apple, and Microsoft:

Depth Count Description
0 919 Root CAs (self-signed)
1 6,627 Intermediates issued directly by roots
2 2,212 Two levels deep
3 137 Three levels deep
4 3 Four levels deep

Intermediate CA certificates tend to have more complex DNs and more extensions than the root CAs in the Mozilla store. The CCADB median cert is “Bayerische SSL-CA-2014-01” (10,432 bytes). These CCADB certificates cover the past 30 years of certificate issuance on the internet.

To see how those benchmarks would behave if the CA root database were built with post-quantum cryptography, I rebuilt the CCADB corpus as ML-DSA certificates. Nine CCADB certificates were skipped: OpenSSL’s x509 -x509toreq -copy_extensions copy step failed to convert them to CSR form, typically because those certs use non-standard DER encodings or critical extensions that the x509toreq pipeline cannot copy into a PKCS#10 request. (The failures are in OpenSSL’s cert→CSR conversion; synta parses all 9,898 original CCADB certs without error.) This leaves 9,889 of the original 9,898 certs in the synthetic database.

The median cert by DER size is “TrustCor Basic Secure Site (CA1)” (6,705 bytes). ML-DSA certs range from 5,530 B to 16,866 B; the distribution is shifted left relative to the CCADB RSA/ECDSA median (10,432 B) because the smallest CCADB certs (compact root CAs with few extensions) become the new median position after ML-DSA key replacement enlarges all certs uniformly.

Benchmark                    Library       Dataset                        Time      Throughput
synta_parse_all              synta         Mozilla (180 certs)            87.8 µs   2.0 M/sec
nss_parse_all                NSS           Mozilla (180 certs)            1.577 ms  114 K/sec
openssl_parse_all            rust-openssl  Mozilla (180 certs)            3.552 ms  50.7 K/sec
ossl_parse_all               ossl          Mozilla (180 certs)            3.617 ms  49.8 K/sec
synta_parse_and_access       synta         Mozilla (180 certs)            261 µs    690 K/sec
synta_build_trust_chain      synta         Mozilla (180 certs)            11.6 µs
synta_parse_all              synta         CCADB (9,898 certs)            5.10 ms   1.94 M/sec
nss_parse_all                NSS           CCADB (9,898 certs)            106 ms    93 K/sec
openssl_parse_all            rust-openssl  CCADB (9,898 certs)            203 ms    48.8 K/sec
ossl_parse_all               ossl          CCADB (9,898 certs)            214 ms    46.3 K/sec
synta_parse_and_access       synta         CCADB (9,898 certs)            16.1 ms   615 K/sec
synta_parse_roots            synta         CCADB (919 roots)              457.7 µs  2.01 M/sec
synta_parse_intermediates    synta         CCADB (8,979 intermediates)    4.735 ms  1.90 M/sec
synta_build_dependency_tree  synta         CCADB (9,898 certs)            559 µs
synta_parse_all              synta         ML-DSA synth (9,889 certs)     5.78 ms   1.71 M/sec
nss_parse_all                NSS           ML-DSA synth (9,889 certs)     103 ms    96.4 K/sec
openssl_parse_all            rust-openssl  ML-DSA synth (9,889 certs)     239 ms    41.4 K/sec
ossl_parse_all               ossl          ML-DSA synth (9,889 certs)     256 ms    38.6 K/sec
synta_parse_and_access       synta         ML-DSA synth (9,889 certs)     17.5 ms   566 K/sec
synta_parse_roots            synta         ML-DSA synth (919 roots)       463 µs    1.98 M/sec
synta_parse_intermediates    synta         ML-DSA synth (8,970 ints.)     5.10 ms   1.76 M/sec
synta_build_dependency_tree  synta         ML-DSA synth (9,889 certs)     549 µs

NSS is 18–21× slower than synta across all three datasets; rust-openssl is 40–41× slower and ossl is 41–44× slower. All three C-backed libraries successfully parse ML-DSA certificates (NSS 3.120+ and OpenSSL 3.4+ support ML-DSA natively). NSS’s absolute parse time is nearly identical across CCADB traditional certs (106 ms) and ML-DSA synthetic certs (103 ms) — confirming that NSS’s dominant cost is eager DN formatting at parse time, which depends on DN attribute count rather than the signature algorithm. The slightly lower relative slowdown for NSS on ML-DSA (18× vs 21×) is entirely because synta is slower on ML-DSA (5.78 ms vs 5.10 ms), not because NSS is faster.

synta’s throughput is consistent at ~1.7–2.0 M certs/sec across all three datasets, confirming linear O(n) scaling. Parse rate is slightly lower for the ML-DSA synthetic hierarchy (1.71 M/sec) than for the CCADB traditional hierarchy (1.94 M/sec) because the larger ML-DSA SubjectPublicKeyInfo and signature BIT STRING fields add bytes to the tag+length-header scan that synta performs at parse time. The intermediates-only sub-benchmark is slightly lower than roots-only in each dataset (1.76 M/sec vs 1.98 M/sec for ML-DSA; 1.90 M/sec vs 2.01 M/sec for CCADB) because intermediate CAs tend to have more complex DNs and extension lists.
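The throughput figures are simply cert count divided by wall time. A quick sanity check of the two parse_all rows:

```shell
# Reported throughputs are certs/second: 9,898 certs in 5.10 ms (CCADB)
# and 9,889 certs in 5.78 ms (ML-DSA synthetic).
awk 'BEGIN {
    printf "CCADB:  %.2f M certs/sec\n", 9898 / 5.10e-3 / 1e6;  # 1.94
    printf "ML-DSA: %.2f M certs/sec\n", 9889 / 5.78e-3 / 1e6;  # 1.71
}'
```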

Finally, individual property access for a pre-parsed certificate: a single field read, with no allocation unless noted:

Field                                       Mozilla (1,070 B)  CCADB (10,432 B)  ML-DSA (6,705 B)  Notes
issuer_raw / subject_raw                    4.1 / 4.1 ns       4.2 / 4.1 ns      4.5 / 4.4 ns      Zero-copy slice
public_key_bytes / signature_bytes          4.1 / 4.1 ns       4.2 / 4.2 ns      4.6 / 4.4 ns      Zero-copy slice
signature_algorithm / public_key_algorithm  5.9 / 5.4 ns       5.9 / 5.5 ns      6.3 / 6.4 ns      OID → &'static str
serial_number                               10.9 ns            6.8 ns            7.5 ns            Integer → i64, length-dependent
validity                                    180 ns             206 ns            231 ns            Two time-string allocations
issuer_dn                                   401 ns             224 ns            246 ns            format_dn() → String
subject_dn                                  404 ns             292 ns            324 ns            format_dn() → String

Zero-copy fields (issuer_raw, subject_raw, public_key_bytes, signature_bytes) cost ~4–5 ns — the price of reading a pointer and length from a struct field. The slightly higher cost for CCADB and ML-DSA fields vs Mozilla is within measurement noise.

identify_signature_algorithm() and identify_public_key_algorithm() match the OID component array against a static table and return &'static str — no allocation, no string formatting. The ~5–6 ns cost is a few comparisons and a pointer return.

serial_number cost depends on the integer’s byte length: the Entrust Mozilla cert carries a 16-byte serial number (parsed via SmallVec<[u8; 16]>), while the CCADB and ML-DSA synthetic medians have shorter serials. At 10.9, 6.8, and 7.5 ns respectively, all are negligible.

validity (~180–231 ns) allocates two strings: UTCTime and GeneralizedTime are formatted from their raw DER bytes into owned Strings. The two calls account for essentially all of the cost; the YYMMDDHHMMSSZ to RFC 3339 formatting is the dominant work.
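As an illustration of that formatting step (not synta’s actual code), an ASN.1 UTCTime string can be expanded to RFC 3339 with plain string slicing. A real implementation also applies the UTCTime century rule (two-digit years below 50 mean 20xx, the rest 19xx); this sketch hard-codes the "20" century for brevity:

```shell
# Expand a DER UTCTime value (YYMMDDHHMMSSZ) to RFC 3339.
echo '260316131349Z' | awk '{
    printf "20%s-%s-%sT%s:%s:%sZ\n",
        substr($0,1,2), substr($0,3,2), substr($0,5,2),
        substr($0,7,2), substr($0,9,2), substr($0,11,2)
}'
# → 2026-03-16T13:13:49Z
```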

format_dn() is the most variable field: it walks the Name DER bytes, decodes each SEQUENCE OF SET OF SEQUENCE, looks up each attribute OID by name, and formats the result into an owned String. The Mozilla cert’s issuer DN is more complex (multiple attributes, longer values: 401 ns) than the CCADB median (224 ns) or the ML-DSA synthetic median (246 ns). The ML-DSA synthetic median’s subject DN (324 ns) is slightly more expensive than the CCADB median (292 ns) because a different cert occupies the median position after key replacement. format_dn() cost is proportional to the DN’s attribute count and string lengths.

Why C Libraries Are Slower

CERT_NewTempCertificate (NSS) and OpenSSL’s d2i_X509 perform significantly more work per certificate than synta:

  1. Eager DN formatting — NSS formats the issuer and subject Distinguished Names into internal C strings during CERT_NewTempCertificate, even when the caller never reads them. Distinguished Name formatting is the single most expensive operation in certificate parsing; doing it unconditionally at parse time accounts for roughly 80% of NSS’s total parse cost. OpenSSL decodes DN structure eagerly as well.

  2. Arena and heap allocation — each NSS certificate allocates a PLArena block and copies the full DER buffer into it (copyDER = 1). OpenSSL allocates from the C heap. These allocations are additional work beyond decoding.

  3. Library state and locking — NSS acquires internal locks on every CERT_NewTempCertificate call to update the certificate cache, even when the resulting certificate is marked as temporary. This serialises concurrent parsing in multi-threaded applications.

  4. FFI boundary costs — the rust-openssl and ossl measurements include the overhead of crossing from Rust into the C library via extern "C" calls and pointer marshalling.

synta defers all of (1): issuer and subject are stored as RawDer<'a> (borrowed byte spans) and decoded only when the caller calls format_dn(). There is no locking, no arena, and no FFI boundary.

During these tests I also found that PyCA’s cryptography x509 module does not optimize repeated accesses to the same fields. That is typically not a problem if you load a certificate and use it once, but if you return to the same fields multiple times, the cost becomes visible and hurts your performance. So I submitted a pull request to apply some of the optimizations I found with synta. The pull request had to be split into smaller ones, and a few of them have already been merged, improving access to the issuer, subject, and public key of certificates, and to some CSR attributes, by 100x. The rest waits on improvements in PyO3 to reduce memory use.

SSH certificates and git signing

Posted by Matthew Garrett on 2026-03-21 19:38:07 UTC

When you’re looking at source code it can be helpful to have some evidence indicating who wrote it. Author tags give a surface level indication, but it turns out you can just lie and if someone isn’t paying attention when merging stuff there’s certainly a risk that a commit could be merged with an author field that doesn’t represent reality. Account compromise can make this even worse - a PR being opened by a compromised user is going to be hard to distinguish from the authentic user. In a world where supply chain security is an increasing concern, it’s easy to understand why people would want more evidence that code was actually written by the person it’s attributed to.

git has support for cryptographically signing commits and tags. Because git is about choice even if Linux isn’t, you can do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You’re probably going to be unsurprised about my feelings around OpenPGP and the web of trust, and X.509 certificates are an absolute nightmare. That leaves SSH keys, but bare cryptographic keys aren’t terribly helpful in isolation - you need some way to make a determination about which keys you trust. If you’re using something like GitHub you can extract that information from the set of keys associated with a user account[1], but that means that a compromised GitHub account is now also a way to alter the set of trusted keys - and also, when was the last time you audited your keys, and how certain are you that every trusted key there is still 100% under your control? Surely there’s a better way.

SSH Certificates

And, thankfully, there is. OpenSSH supports certificates: an SSH public key that’s been signed by some trusted party, so you can now assert that it’s trustworthy in some form. SSH certificates also contain metadata in the form of Principals, a list of identities that the trusted party included in the certificate. These might simply be usernames, but they might also provide information about group membership. There’s also, unsurprisingly, native support in SSH for forwarding them (using the agent forwarding protocol), so you can keep your keys on your local system, ssh into your actual dev system, and have access to them without any additional complexity.
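Issuing such a certificate is a one-liner once you have a CA key. A minimal sketch — the identity, principals, and one-year validity are arbitrary illustrative choices, and the file names are placeholders:

```shell
# Create a CA key and a user key (empty passphrases for the demo).
ssh-keygen -q -t ed25519 -N '' -f ca_key
ssh-keygen -q -t ed25519 -N '' -f id_ed25519

# Sign the user's public key into a certificate.
# -s: CA key, -I: certificate identity, -n: principals, -V: validity window
ssh-keygen -s ca_key -I alice -n alice,developers -V +52w id_ed25519.pub

# Produces id_ed25519-cert.pub; inspect the embedded metadata:
ssh-keygen -L -f id_ed25519-cert.pub
```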

And, wonderfully, you can use them in git! Let’s find out how.

Local config

There are two main parameters you need to set. First,

git config set gpg.format ssh

because unfortunately for historical reasons all the git signing config is under the gpg namespace even if you’re not using OpenPGP. Yes, this makes me sad. But you’re also going to need something else. Either user.signingkey needs to be set to the path of your certificate, or you need to set gpg.ssh.defaultKeyCommand to a command that will talk to an SSH agent and find the certificate for you (this can be helpful if it’s stored on a smartcard or something rather than on disk). Thankfully for you, I’ve written one. It will talk to an SSH agent (either whatever’s pointed at by the SSH_AUTH_SOCK environment variable or with the -agent argument), find a certificate signed with the key provided with the -ca argument, and then pass that back to git. Now you can simply pass -S to git commit and various other commands, and you’ll have a signature.

Validating signatures

This is a bit more annoying. Using native git tooling ends up calling out to ssh-keygen[2], which validates signatures against a file in a format that looks somewhat like authorized_keys. This lets you add something like:

* cert-authority ssh-rsa AAAA…

which will match all principals (the wildcard) and succeed if the signature is made with a certificate that’s signed by the key following cert-authority. I recommend you don’t read the code that does this in git because I made that mistake myself, but it does work. Unfortunately it doesn’t provide a lot of granularity around things like “Does the certificate need to be valid at this specific time” and “Should the user only be able to modify specific files” and that kind of thing, but also if you’re using GitHub or GitLab you wouldn’t need to do this at all because they’ll just do this magically and put a “verified” tag against anything with a valid signature, right?

Haha. No.

Unfortunately while both GitHub and GitLab support using SSH certificates for authentication (so a user can’t push to a repo unless they have a certificate signed by the configured CA), there’s currently no way to say “Trust all commits with an SSH certificate signed by this CA”. I am unclear on why. So, I wrote my own. It takes a range of commits, and verifies that each one is signed with either a certificate signed by the key in CA_PUB_KEY or (optionally) an OpenPGP key provided in ALLOWED_PGP_KEYS. Why OpenPGP? Because even if you sign all of your own commits with an SSH certificate, anyone using the API or web interface will end up with their commits signed by an OpenPGP key, and if you want to have those commits validate you’ll need to handle that.

In any case, this should be easy enough to integrate into whatever CI pipeline you have. This is currently very much a proof of concept and I wouldn’t recommend deploying it anywhere, but I am interested in merging support for additional policy around things like expiry dates or group membership.

Doing it in hardware

Of course, certificates don’t buy you any additional security if an attacker is able to steal your private key material - they can steal the certificate at the same time. This can be avoided on almost all modern hardware by storing the private key in a separate cryptographic coprocessor - a Trusted Platform Module on PCs, or the Secure Enclave on Macs. If you’re on a Mac then Secretive has been around for some time, but things are a little harder on Windows and Linux - there’s various things you can do with PKCS#11 but you’ll hate yourself even more than you’ll hate me for suggesting it in the first place, and there’s ssh-tpm-agent except it’s Linux only and quite tied to Linux.

So, obviously, I wrote my own. This makes use of the go-attestation library my team at Google wrote, and is able to generate TPM-backed keys and export them over the SSH agent protocol. It’s also able to proxy requests back to an existing agent, so you can just have it take care of your TPM-backed keys and continue using your existing agent for everything else. In theory it should also work on Windows[3] but this is all in preparation for a talk I only found out I was giving about two weeks beforehand, so I haven’t actually had time to test anything other than that it builds.

And, delightfully, because the agent protocol doesn’t care about where the keys are actually stored, this still works just fine with forwarding - you can ssh into a remote system and sign something using a private key that’s stored in your local TPM or Secure Enclave. Remote use can be as transparent as local use.

Wait, attestation?

Ah yes you may be wondering why I’m using go-attestation and why the term “attestation” is in my agent’s name. It’s because when I’m generating the key I’m also generating all the artifacts required to prove that the key was generated on a particular TPM. I haven’t actually implemented the other end of that yet, but if implemented this would allow you to verify that a key was generated in hardware before you issue it with an SSH certificate - and in an age of agentic bots accidentally exfiltrating whatever they find on disk, that gives you a lot more confidence that a commit was signed on hardware you own.

Conclusion

Using SSH certificates for git commit signing is great - the tooling is a bit rough but otherwise they’re basically better than every other alternative, and also if you already have infrastructure for issuing SSH certificates then you can just reuse it[4] and everyone wins.


  1. Did you know you can just download people’s SSH pubkeys from github from https://github.com/<username>.keys? Now you do ↩︎

  2. Yes it is somewhat confusing that the keygen command does things other than generate keys ↩︎

  3. This is more difficult than it sounds ↩︎

  4. And if you don’t, by implementing this you now have infrastructure for issuing SSH certificates and can use that for SSH authentication as well. ↩︎

misc fedora bits third week of march 2026

Posted by Kevin Fenzi on 2026-03-21 17:25:07 UTC
Scrye into the crystal ball

Things are just flying by and it seems to be Saturday again, so here's another weekly recap.

Secureboot signing

Most of my week was consumed with work on our secure boot signing infrastructure. The old setup was using smart cards in specific builders. This had a lot of disadvantages, including:

  • space on the smart cards was pretty much full, preventing adding more certs

  • Those machines were 'special' and if they went down/broke things would be bad.

  • The smart cards in them are no longer made or supported, so we couldn't get more for adding more builders.

So, thanks to a bunch of work from Jeremy Cline we finally have things moved over to the new setup. This setup is:

  • Using our normal signing infrastructure (sigul, soon to be replaced by a rust re-write). We can easily decide in config which machines are used.

  • Using new hardware on the vault end that has more space for more certs.

  • Allows us to easily add an aarch64 path to sign there.

The signed aarch64 grub2 build is in rawhide now, but for whatever reason it's not working on my slim7x. It is, however, working in VMs, cloud providers, and other hardware, so I suspect it might just be a problem with this laptop. It also doesn't work with my Radxa Orion O6, but again, something else could be going on there. I think it's at least good enough to get more widespread testing.

We should hopefully have a signed kernel next week, but in the meantime, if you have an ARM device that supports secureboot, you can update to the latest grub2 and give it a try.

Openh264 builds

We seem to have dropped the ball on f44/f45 openh264 builds. :(

So, I looked at doing some this week. I ran into a linker issue on the i686 builds, but managed to work around that and get builds.

Now we just need to wait for Cisco to publish them. I am hoping this process will go much quicker than it has in the past, since we have a better way to upload things for them now.

Time will tell.

Openshift cluster upgrades

I moved all our openshift clusters to 4.21.5 this week (from 4.20.15).

I really love how easy openshift upgrades are. Press button and wait, usually. I did have to upgrade to the latest 4.20 first before it would let me move to 4.21, but both steps went fine.

Mass update / reboots next week

Next week we will be catching up on updates all around and rebooting things. The week after we start Fedora 44 Final freeze so we want to have things all updated before that. No special stuff this time, just updates/reboots so I expect it to go smoothly.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116268414239551452

Contribute at the Fedora CoreOS 44 Test Week

Posted by Fedora Magazine on 2026-03-21 08:00:00 UTC

The Fedora CoreOS and QA teams are gearing up for Fedora 44, and we need your help! We are organizing a Test Week running from March 23 to March 27, 2026.

This event is a nice opportunity for the community to test Fedora CoreOS (FCOS) based on Fedora 44 content before it officially reaches the testing and stable streams. By participating, you help us ensure a smooth and reliable experience for all users.

How does a Test Week work?

A Test Week is an event where anyone can help verify that the upcoming release works as expected. If you’ve been looking for a way to get started with Fedora contribution, this is the perfect entry point.

To participate, you simply need to:

  • Download the FCOS test images.
  • Follow the step-by-step test cases provided.
  • Report whether the tests passed or failed on your hardware or VM.

The Wiki Page is your primary source of information for this event. Once you have completed your tests, please log your results here! Your contribution, big or small, makes a huge difference. Let’s work together to make this release a great one. Happy testing!

Join the Live Sync Session

Want to chat with the team? We are hosting a live virtual session on Tuesday, March 24, from 3:00 PM – 4:30 PM UTC. Drop in to ask questions and get help with testing!

Video Meeting: meet.google.com/ufp-bwsb-zwh

Community Update – Week 12 2026

Posted by Fedora Community Blog on 2026-03-20 12:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. The team also moves some initiatives forward inside the Fedora project.

Week: 16 – 20 March 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

  • Fedora 44 Final Freeze will begin on March 31st.
  • Otherwise business as usual.

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

  • Executive summary: Continued chipping away at closing the gap between primary and RISC-V Koji.  Resumed stalled work to debug LLVM test-suite failures on Fedora; wrote a draft patch for LLVM to deal with some RISC-V specific failures.  Real F44 rebuild work starts this week — finally, the F43 Koji task queue is empty (making way for F44 mass-rebuild).  Discussed a rough plan for the rebuild in Fedora RISC-V SIG meeting.  We plan to replace board-specific kernels for F44 and use the omni kernel.  RISC-V EU Summit Program Committee work.
  • F43 images can be obtained from here
  • RISC-V omni kernels (formerly “unified”) are now available here
  • Shared Fedora notes for rebuilding OpenJDK to the fine folks doing the CentOS part.

AI

This is the summary of the work done regarding AI in Fedora.

QE

This team is taking care of quality of Fedora. Maintaining CI, organizing test days
and keeping an eye on overall quality of Fedora releases.

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

  • Pagure.io sunset messages now live on website and CLI on git push.
  • Demo of Pagure static pages backup at ryanlerch.fedorapeople.org. Next step is to get a full demo of this so we can share it with people.
  • New org requests ramping up as more Fedora repos get moved to forge.
  • Private Issues: Make issue query helpers understand public and private issues

EPEL

This team is working on keeping Epel running and helping package things.

  • Held RPM Packaging Workshop at SCALE
  • Helped staff Fedora+CentOS booth at SCALE, with lots of EPEL discussion with conference attendees

UX

This team is working on improving User experience. Providing artwork, user experience,
usability, and general design services to the Fedora project

  • Greetings from Fedora postcard ticket
  • Initial Flock branding sketches posted ticket
  • New repo on Design team Forgejo to store Gemini meeting notes so they’re more accessible.

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update – Week 12 2026 appeared first on Fedora Community Blog.

New badge: Chemnitzer Linux-Tage 2026 !

Posted by Fedora Badges on 2026-03-20 04:21:49 UTC
Chemnitzer Linux-Tage 2026
Thanks for stopping by the Fedora booth at Chemnitzer Linux-Tage

Friday Links 26-10

Posted by Christof Damian on 2026-03-19 23:00:00 UTC

The podcast about culture is great, so is the one with the creator of Kotlin.

Tanith’s techno set is pretty great too.

Leadership

The more you talk about culture, the less people believe you [Podcast] - I have seen some of the stuff they mention.

My preferred product management techniques and frameworks (for the internal platform product… - “Talk to everyone”

My new toy: FreeBSD on the HP Z2 mini revisited

Posted by Peter Czanik on 2026-03-19 09:43:33 UTC

Last week, I wrote about my initial FreeBSD experiences on my new toy, an AI workstation from HP. FreeBSD runs lightning fast on it, but the desktop was somewhat problematic. Well, I made lots of improvements this week!

A bit of debugging

While there are still some rough edges, there have been tons of improvements since last week. I do not have plans to use FreeBSD on the desktop in the long term, but still, I just could not believe that the FreeBSD GUI is this problematic on this device. I did some experimentation though and it helped a lot… :-)

The initial problem I realized while browsing the output of dmesg was that desktop-installer enabled the wrong kernel modules repository for me. The line leading there was this:

KLD amdgpu.ko: depends on kernel - not available or version mismatch

The next problem occurred when I fixed this problem: there was a kernel panic on boot, when amdgpu.ko was loaded.

I did a fresh FreeBSD install and instead of using the latest packages, I decided to go with the quarterly packages. This way, the desktop installer configured the right kmod repo – however, loading amdgpu.ko still caused a kernel panic. Another experiment I made was using the ATI driver instead of AMD. The installer says that AMD is for modern cards, and ATI is for older ones. Well, as it turned out, even if the chip is barely half a year old, it counts as “old”… :-)

I am still not convinced that proper hardware-based acceleration works: both X.org logs and the GNOME “About” page showed software rendering. However, I had no problem with graphics performance: TuxRacer worked perfectly well… :-) The GNOME desktop also worked nicely and stably, including video playback. The only pain point when using GNOME was that screen locking still did not work.

KDE to the rescue

Even if it’s just software rendering, the graphics problem seems to be resolved. However, the screen locking problem still bothered me, as I’m an IT security guy with a healthy dose of paranoia (which means that I lock my screen even when I’m home alone… :-)).

So even if I haven’t tried KDE for the past 5+ years, I gave it a try now. After so many years on XFCE and GNOME, the interface looks a bit weird. However, everything I tried on it seems to work just fine, including screen locking.

KDE on FreeBSD

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

Central log collection - more than just compliance

Posted by Peter Czanik on 2026-03-18 15:10:37 UTC

I often hear, even at security conferences, that “there is no central log collection here” or “we have something due to compliance”. Central logging is more than just compliance. It makes logs easier to use, more available, and more secure, thus making your life easier in operations, security, and development, but also in marketing, sales, and so on.

What are logs and what is central log collection?

Most operating systems and applications keep track of what they are doing: they write log messages. A syslog message might look like this:

Mar 16 13:13:49 cent sshd[543817]: Accepted publickey for toor from 192.168.97.14 port 58246 ssh2: RSA SHA256:GeGHdsl1IZrnTniKUxxxX4NpP8Q

Applications might store their logs separately and have their own log format, like this Apache access log:

192.168.0.164 - - [16/Mar/2026:13:17:01 +0100] "HEAD /other/syslog-ng-insider-2026-03-4110-release-opensearch-elasticsearch/ HTTP/1.1" 200 3764 "-" "SkytabBot/1.0 (URL Resolution)"

Central log collection simply means that log messages are collected at a central location instead of, or in addition to, being saved locally.

In this blog we take a look at what ease of use, availability, and security of central log collection mean for you.

Ease of use

If you have a single computer in your organization, finding a log message about an event on that computer takes some time. Once you have 2 computers, you have to check 2 computers to find that event. It might take twice as much time, but it is still easier than implementing central log collection. Not to mention, which one is the central computer. :-)

Once you have a network of 10 computers, logging in to each of them to find a log message about an event becomes a huge overhead. It is still doable, but even in the short term, implementing central log collection is a lot easier than looking at the logs on the machines where they were created.

On a network of 100 computers, it is practically impossible for security or operations teams to find relevant logs, unless logs are collected centrally.

Availability

Collecting logs centrally means that log messages are available even when the sending machine is down. If you want to know what happened, you do not have to get the machine up and running again, but you can check the logs at the central location. If you see signs of a hardware failure, you can go with a spare part immediately, reducing the time and effort needed to repair the machines.

Security

When a computer is compromised, log messages are often altered or deleted completely. However, this tactic only works with logs stored locally. Collecting logs at a central location allows you to use the unmodified logs and to figure out how the compromise happened.

What is next?

It is time to introduce central logging to your organization if you have not done it yet. Of course I am a bit biased, but syslog-ng is the perfect tool to do so. You can get started by reading / watching the syslog-ng tutorial on https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/.

syslog-ng logo

Originally published at https://www.syslog-ng.com/community/b/blog/posts/central-log-collection—more-than-just-compliance

Fedora Asahi Remix 43 is now available

Posted by Fedora Magazine on 2026-03-18 14:00:00 UTC

We are happy to announce the general availability of Fedora Asahi Remix 43. This release brings Fedora Linux 43 to Apple Silicon Macs.

Fedora Asahi Remix is developed in close collaboration with the Fedora Asahi SIG and the Asahi Linux project. This release incorporates all the exciting improvements brought by Fedora Linux 43. Notably, package management is significantly upgraded with RPM 6.0 and, ahead of Fedora Linux 44, the new DNF5 backend for PackageKit used by Plasma Discover and GNOME Software. It also continues to provide extensive device support. This includes newly added support for the Mac Pro, microphones in M2 Pro/Max MacBooks, and a 120Hz refresh rate for the built-in displays of MacBook Pro 14/16 models.

Fedora Asahi Remix offers KDE Plasma 6.6 as our flagship desktop experience. It contains all of the new and exciting features brought by Fedora KDE Plasma Desktop 43. It also features a custom Calamares-based initial setup wizard. A GNOME variant is also available, featuring GNOME 49, with both desktop variants matching what Fedora Linux offers. Fedora Asahi Remix also provides a Fedora Server variant for server workloads and other types of headless deployments. Finally, we offer a Minimal image for users that wish to build their own experience from the ground up.

You can install Fedora Asahi Remix today by following our installation guide. Existing systems running Fedora Asahi Remix 41 or 42 should be updated following the usual Fedora upgrade process. Upgrades via GNOME’s Software application are unfortunately not supported. Either KDE’s Plasma Discover or DNF’s System Upgrade command must be used.

Please report any Remix-specific issues in our tracker, or reach out in our Discourse forum or our Matrix room for user support.

A trust paradox

Posted by Ben Cotton on 2026-03-18 12:00:00 UTC

Last month, I wrote about how to define, build, and measure trust in your community. Here’s the challenge: you need to extend trust in order for someone to build trust. I touched on this in 2023 after an Ubuntu release included hate speech in translations. It came back to the fore earlier this month after an AI agent attacked a handful of high-profile GitHub repositories.

The agent took advantage of workflows that allowed an attacker to run malicious code via a variety of mechanisms, including the branch name. The attacking agent only needed to open a pull request to cause damage. Normally, tests run by CI infrastructure are a way to evaluate the trustworthiness of a pull request. Most pull requests, of course, are not malicious, but that doesn’t make them trustworthy. A change that fails linting, unit tests, or integration tests may not be worth a maintainer’s time to review.

So if automated CI tests are both a way to measure trust and a vector for attack, what’s the responsible maintainer to do?

The first step is to make sure your CI jobs are securely configured. Tools like zizmor can identify insecure configurations. You may also want to require that a maintainer manually approve workflows before running against pull requests from untrusted sources. This, of course, puts you into a position where you now have to at least give a cursory review to make sure the change is safe enough for your CI workflow. But that’s less work than a detailed review.

With the rise in AI-generated pull requests, this is a problem that will only add more toil for maintainers. Hopefully, platforms will provide tools that reduce the burden.

This post’s featured photo by 愚木混株 Yumu on Unsplash.

The post A trust paradox appeared first on Duck Alignment Academy.

📝 Valkey version 9.1 🎲

Posted by Remi Collet on 2026-03-18 09:29:00 UTC

RPMs of Valkey version 9.1 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

⚠️ Warning: this is a pre-release version not ready for production usage.

1. Installation

Packages are available in the valkey:remi-9.1 module stream.

1.1. Using dnf4 on Enterprise Linux

# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to valkey:remi-9.1/common

1.2. Using dnf5 on Fedora

# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset  valkey
# dnf module enable valkey:remi-9.1
# dnf install valkey

The valkey-compat-redis compatibility package is not available in this stream. If you need the Redis commands, you can install the redis package.

2. Modules

Some optional modules are also available:

These packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).

The modules are loaded automatically after installation and a service (re)start.

3. Future

Valkey also provides a set of modules, which may be submitted to the official Fedora repository.

ℹ️ Notices:

  • Enterprise Linux 9.7 and 10.1, and Fedora 42 have Valkey 8.0 in their repository
  • Fedora 43 has Valkey 8.1
  • Fedora 44 will have Valkey 9.0
  • Fedora 45 will have Valkey 9.1

4. Statistics

valkey

My new toy: AI first steps with the HP Z2 Mini

Posted by Peter Czanik on 2026-03-17 13:40:33 UTC

In the past few weeks, I installed five different operating systems on my latest toy: an AI workstation from HP. I love playing with OSes, but my main goal with the new machine is to learn various aspects of AI. I took my first steps in this adventure on Windows.

Of course, you might ask: why on Windows? Well, it’s easy: because it’s easy… :-) There is nothing to install or configure there, as Windows has multiple built-in apps that support AI and can utilize the NPU (hardware-accelerated AI) support of the AMD Ryzen 395 chip. The Windows Task Manager already has a graph for NPU utilization, and I put it to good use: I kept Task Manager open on the NPU tab while I conducted my experiments, which let me verify that the NPU was indeed in use as I ran a couple of generative AI tasks.

I find it somewhat surprising, but Paint now also comes with integrated AI support. Of course, you must pay for image generation, but generating simple graphics (or “stickers”, as they are called in Paint) is free. If the prompt field is empty, its placeholder text suggests “a cat wearing sunglasses”, so that’s what I generated. It has no practical use for me, but looks fun regardless… :-) You can see an example in the first figure.

Windows Paint with AI

It’s only slightly crazier that the simplest Windows app, Notepad, now also has AI features. I asked Notepad to write me a three-paragraph-long blog about writing a blog with AI in a boring, formal style. It completed the task successfully: the result is pretty boring and full of bullshit. Here it is:

The integration of artificial intelligence into the realm of content creation, particularly in the sphere of blog writing, presents a multitude of advantages that merit serious consideration. Firstly, AI-driven tools offer an unprecedented level of efficiency, allowing content creators to produce substantial volumes of text without the need for exhaustive manual input. This technological boon is particularly beneficial for those with time constraints or heavy workloads, as it liberates them from the mundane task of typing and editing.

Moreover, the precision and consistency of AI in language generation cannot be understated. By leveraging advanced algorithms, AI systems can maintain a uniform tone and style throughout a piece, which is a common challenge for human writers. This consistency ensures that the content is cohesive and that the voice of the piece remains unaltered, thereby enhancing the overall reading experience.

Lastly, the ability of AI to learn from vast datasets and adapt to various writing styles is a significant asset. It empowers writers to explore new topics and styles with confidence, knowing that the AI can provide a solid foundation upon which to build. This adaptability not only streamlines the writing process but also encourages creativity and innovation in content creation.

Windows Notepad with AI

I also wanted to try a controversial Windows feature: Recall. Well, it does not work. When I started it, I got a nice error message stating that it needs Secure Boot. Linux requires it to be turned off, so I cannot test it now. But I must admit that I do not mind that… :-)

Windows Recall needs Secure Boot

If everything goes well, next week I’ll take my first steps toward enabling hardware-accelerated AI under Linux.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me via one of the contacts listed in the upper right corner to discuss this blog. You can read the rest of the blogs under the toy tag.

Fedora 44 on the Raspberry Pi 5

Posted by Peter Robinson on 2026-03-14 19:16:16 UTC

So where have the last six months gone? I was planning on getting images done for Fedora 44 Beta, but I was unwell and busy and ran out of time. So what better time to get them out than Pi Day!

So compared to the last image, what do we have now? Quite a lot more, and I have more in the pipeline which should be in place before the freeze, plus a possible secret 😉. I just wanted to get something out sooner rather than later for people to play with. The things that are working and tested are now:

  • All Raspberry Pi 5B: both revC and revD SoC 1/2/4/8/16GB variants
  • Serial console
  • The micro SD slot – the only supported OS disk ATM
  • HDMI including accelerated graphics
  • Wired ethernet port
  • Wireless network interface
  • USB ports (NOT for OS disks)
  • Desktops including images for KDE and GNOME
  • Some other random bits

Overall the devices are quite usable, but I will be working to improve things even more in the coming days.

The things that don’t work, but which I’m hoping will be working RSN (pre-44), in no particular order:

  • Raspberry Pi 500 series
  • CM5 Series
  • NVME
  • Thermal
  • Audio
  • A bunch of other stuff
  • Automatic CMA additions

One thing you do currently need to do manually once you’ve created an image is to add the following to the kernel command line (use the --args option to arm-image-installer): cma=256M@0M-1024M. Without it, accelerated graphics and some other things just won’t work. Once you’re booted, add it to /etc/kernel/cmdline so new kernels will get it too. I’ll hopefully have that issue fixed shortly; I know the problem, I just still haven’t got the best solution!
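The post-boot step can be sketched like this; the file contents and device path below are stand-ins, and the real file is /etc/kernel/cmdline:

```shell
# Append the cma= argument so newly installed kernels also get it.
# Operating on a scratch copy instead of the real /etc/kernel/cmdline.
cmdline=/tmp/cmdline
echo "root=/dev/mmcblk0p3 rw" > "$cmdline"    # stand-in for existing contents
grep -q 'cma=' "$cmdline" || sed -i 's/$/ cma=256M@0M-1024M/' "$cmdline"
cat "$cmdline"    # root=/dev/mmcblk0p3 rw cma=256M@0M-1024M
```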

You’ll also want to disable auto-suspend on the Desktop images.

So where can I get these images? Right here:
The Fedora 44 Minimal Image
The Fedora 44 KDE Image
The Fedora 44 GNOME Workstation Image

Happy Pi Day everyone!

misc fedora bits second week of march 2026

Posted by Kevin Fenzi on 2026-03-14 16:28:27 UTC
Scrye into the crystal ball

Another saturday, another weekly recap.

Fedora 44 Beta release

Monday and Tuesday were all about the Fedora 44 Beta release. Things went mostly smoothly, aside from the magazine article publishing early, which meant some outlets announced the release before the website was updated and caused a bit of confusion.

Hopefully everyone is trying out 44 Beta and reporting bugs and issues so we can have a good final release.

Infra unfreezing flood gates

We were in infra freeze around the Beta release, so a bunch of pull requests and changes piled up waiting for that to end. With the beta out the door, we unfroze and I spent time this week (along with others) pushing out many of those changes. A short / incomplete list:

and more. We got a lot moved forward, and there were a number of pull requests from new folks or folks who don't normally submit them, and that's been great to see!

kojipkgs outage thursday morning

Thursday morning we had an outage of the kojipkgs servers. It all happened before I was awake, but I think I have a good idea of what happened:

  • Someone/scrapers/whoever requested some urls under our ostree tree via our cloudfront distribution.

  • These were for objects directories (the directories themselves)

  • These directories have around 32k object files in them.

  • So, dutifully, apache generated a pretty index of them for the client.

  • This required each request to stat all 32k files in order to display them in an index.

  • This took... minutes for each request

  • Requests filled up the request queue

  • haproxy then marked the backends as down

  • clients started getting 503's

I have now forbidden directory indexes on these directories, so hopefully that will prevent this from happening again.
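The Apache side of forbidding directory indexes is a one-line option; a sketch of the kind of change involved (the path is illustrative, not the actual kojipkgs configuration):

```apacheconf
# Stop Apache from auto-generating listings for these huge directories;
# requests for a bare directory now get a 403 instead of a 32k-entry index.
<Directory "/mnt/koji/ostree">
    Options -Indexes
</Directory>
```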

Scrapers still around

Lest we forget that they are still around, scrapers made their presence known again toward the end of the week. Two things they were doing:

  • They started hitting our hotspot.txt file over and over. This is a small static file containing just "OK" that is used to detect whether you are behind a captive portal. It's hard to imagine that they extract any value from their scraping when they are this mind-numbingly bad at writing a distributed crawler. I guess they make up for it by just having way more clients than they need, rather than bothering to be efficient at all. This one is particularly annoying because we don't want to put it behind anubis or block it, or that would break its entire function.

  • They started hitting koji's 'search' endpoint with pretty exacting queries. These caused database load to go through the roof and caused the application to stop responding. I disabled search for Friday, and just re-enabled it. I hope they have moved on to /dev/null now.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116228691881195787

HostConfig — Home Manager Module for Fedora Atomic Host Configuration

Posted by Miroslav Vadkerti on 2026-03-14 00:00:00 UTC
How I extracted a reusable Home Manager module that materializes Nix store symlinks as real files — so host-side programs on Fedora Silverblue can actually read them.

Community Update – Week 11

Posted by Fedora Community Blog on 2026-03-13 16:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora project.

Week: 09 – 13 March 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

  • F44 rebuild will be slow until we clear the current queue in the build system (side note: until RISC-V enters primary Koji, it is likely to be out of sync with the primary arches in terms of image delivery timelines)
  • Engaged on ‘fedora-devel’ (and the Discussion forum) about primary vs. alternative arch requirements; downstream needs brought the pending Fedora upstream work for LLVM and Java into the foreground
  • Reviewed the link-time optimization (LTO) situation in Fedora RISC-V: for now we’ll keep it disabled, since the LTO gains are rather small and disabling it gives us much better build times
  • Evaluated migrating documentation from Wiki to Forge, but it remains low priority for now
  • Resolved a “unified kernel” (soon to be “omni kernel”) boot failure on P550
  • Debugged a small regression in ‘arch-test’ reported on the RISC-V Matrix channel.
  • Chipped away at the RISC-V tracker
  • Made progress on draining the queue on the tasks tracker

QE

This team is taking care of quality of Fedora. Maintaining CI, organizing test days
and keeping an eye on overall quality of Fedora releases.

  • Fedora 44 Beta was approved last week and is getting released this week. This involved a lot of release validation tests, blocker bugs management, common issues writeup, and more.
  • Ran Podman test week + I18N week (currently in progress). Article and blog invites were created for both events.
  • Started cooperating with Desktop QA on adding fmf tests for mesa Fedora CI.
  • The Testdays web app now has a unit test suite.
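For context, an fmf test definition of the kind used by tmt-driven Fedora CI looks roughly like this (the file name, test script, and package are hypothetical, not the actual mesa tests):

```yaml
# smoke.fmf — hypothetical metadata for a tmt-run CI test
summary: Basic mesa rendering smoke test
test: ./smoke.sh          # script executed by tmt; exit 0 means pass
duration: 10m
require:
  - mesa-dri-drivers      # package installed before the test runs
```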

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

  • Many new orgs added to Forgejo, and migrations ramping up. Fixing migration issues on the fly
  • Pagure static pages archive almost ready to be published for review
  • Continued work on Private Issues:
    • Implement issue-level access control (DB queries still outstanding)
    • Fix hanging and failing tests (partially caused by a rebased upstream, not yet pushed)
    • Switch from pointers to sql.NullInt64 types for public/private issue ID pairs, for safety and better consistency with existing code
    • RPM: Package 14.0.3

EPEL

This team is working on keeping EPEL running and helping package things.

  • Traveling back from SCaLE (trip report will follow)
  • Maintenance work (business as usual) 
  • WIP: matrix bot to look into forgejo repo

UX

This team is working on improving the user experience, providing artwork, usability,
and general design services to the Fedora project.

If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.

The post Community Update – Week 11 appeared first on Fedora Community Blog.

JSFX on Fedora Linux: an ultra-fast audio prototyping engine

Posted by Fedora Magazine on 2026-03-13 08:00:00 UTC

Introduction

Writing a real-time audio plugin on Linux often conjures up images of a complex environment: C++, toolchains, CMake, CLAP / VST3 / LV2 SDK, ABI…

However, there is a much simpler approach: JSFX.

This article offers a practical introduction to JSFX and YSFX on Fedora Linux: we’ll write some small examples, add a graphical VU meter, and then see how to use it as a CLAP / VST3 plugin in a native Linux workflow.

JSFX (JesuSonic Effects – created by REAPER [7]) allows you to write audio plugins in just a few lines, without compilation, with instant reloading and live editing.

Long associated with REAPER, they are now natively usable on Linux, thanks to YSFX [3], available on Fedora Linux in CLAP and VST3 formats via the Audinux repository ([4], [5]).

This means it’s possible to write a functional audio effect in ten lines, then immediately load it into Carla [8], Ardour [9], or any other compatible host, all within a PipeWire / JACK [11] environment.

A quote from [1] (check the [1] link for images):

In 2004, before we started developing REAPER, we created software designed for creating and modifying FX live, primarily for use with guitar processing.

The plan was that it could run on a minimal Linux distribution on dedicated hardware, for stage use. We built a couple of prototypes.

These hand-built prototypes used mini-ITX mainboards with either Via or Intel P-M CPUs, cheap consumer USB audio devices, and Atmel AVR microcontrollers via RS-232 for the footboard controls.

The cost for the parts used was around $600 each.

In the end, however, we concluded that we preferred to be in the software business, not the hardware business, and our research into adding multi-track capabilities in JSFX led us to develop REAPER. Since then, REAPER has integrated much of JSFX’s functionality, and improved on it.

So, as you can see, this technology is not that new. But the Linux support via YSFX [3] is rather new (Nov 2021, started by Jean-Pierre Cimalando).

A new programming language, but for what? What would one use JSFX for?

This language is dedicated to audio and with it, you can write audio effects like an amplifier, a chorus, a delay, a compressor, or you can write synthesizers.

JSFX is good for rapid prototyping and, once everything is in place, you can then rewrite your project in a more efficient language like C, C++, or Rust.

JSFX for developers

Developing an audio plugin on Linux often involves a substantial technical environment. This complexity can be a hindrance when trying out an idea quickly.

JSFX (JesuSonic Effects) offers a different approach: writing audio effects in just a few lines of interpreted code, without compilation and with instant reloading.

Thanks to YSFX, available on Fedora Linux in CLAP and VST3 formats, these scripts can be used as true plugins within the Linux audio ecosystem.

This article will explore how to write a minimal amplifier in JSFX, add a graphical VU meter, and then load it into Carla as a CLAP / VST3 plugin.

The goal is simple: to demonstrate that it is possible to prototype real-time audio processing on Fedora Linux in just a few minutes.

No compilation environment is required: a text editor is all you need.

YSFX plugin

On Fedora Linux, YSFX comes in 3 flavours:

  • a standalone executable;
  • a VST3 plugin;
  • a CLAP plugin.

YSFX is available in the Audinux [5] repository. So, first, install the Audinux repository:

$ dnf copr enable ycollet/audinux

Then, you can install the version you want:

$ dnf install ysfx
$ dnf install vst3-ysfx
$ dnf install clap-ysfx

Here is a screenshot of YSFX as a VST3 plugin loaded in Carla Rack [8]:

Screenshot of YSFX effect VST3 plugin loaded in Carla-rack

You can:

  • Load a file;
  • Load a recent file;
  • Reload a file modified via the Edit menu;
  • Zoom / Unzoom via the 1.0 button;
  • Load presets;
  • Switch between the Graphics and Sliders view.

Here is a screenshot of the Edit window:

Screenshot of the editor Window opened via the YSFX plugin.

The Variables column displays all the variables defined by the loaded file.

Examples

We will use the JSFX documentation available at [4].

JSFX code is always divided into sections.

  • @init : The code in the @init section gets executed on effect load, on samplerate changes, and on start of playback.
  • @slider : The code in the @slider section gets executed following an @init, or when a parameter (slider) changes.
  • @block : The code in the @block section is executed before processing each sample block. Typically a block is the length defined by the audio hardware, anywhere from 128 to 2048 samples.
  • @sample : The code in the @sample section is executed for every PCM (Pulse Code Modulation) audio sample.
  • @serialize : The code in the @serialize section is executed when the plug-in needs to load or save some extended state.
  • @gfx [width] [height] : The @gfx section gets executed around 30 times a second when the plug-in’s GUI is open.

A simple amplifier

In this example, we will use a slider value to amplify the audio input.

desc:Simple Amplifier
slider1:1<0,4,0.01>Gain

@init
gain = slider1;

@slider
gain = slider1;

@sample
spl0 *= gain;
spl1 *= gain;

slider1, @init, @slider, @sample, spl0, spl1 are JSFX keywords [1].

Description:

  • slider1: creates a user control (from 0 to 4 here);
  • @init: section executed during loading;
  • @slider: section executed when we move the slider;
  • @sample: section executed for each audio sample;
  • spl0 and spl1: left and right channels.
  • In this example, we just multiply the input signal by a gain.

Here is a view of the result:

Screenshot of the simple gain example

An amplifier with a gain in dB

This example will create a slider that will produce a gain in dB.

desc:Simple Amplifier (dB)
slider1:0<-60,24,0.1>Gain (dB)

@init
gain = 10^(slider1/20);

@slider
gain = 10^(slider1/20);

@sample
spl0 *= gain;
spl1 *= gain;

Only the way we compute the gain changes.

Here is a view of the result:

Screenshot of the simple gain in dB example

An amplifier with an anti-clipping protection

This example adds protection against clipping and uses a JSFX function for that.

desc:Simple Amplifier with Soft Clip
slider1:0<-60,24,0.1>Gain (dB)

@init
gain = 10^(slider1/20);

@slider
gain = 10^(slider1/20);
function softclip(x) (
  x / (1 + abs(x));
);

@sample
spl0 = softclip(spl0 * gain);
spl1 = softclip(spl1 * gain);

Here is a view of the result:

Screenshot of the simple gain in dB with a soft clip example

An amplifier with a VU meter

This example is the same as the one above; we just add a VU meter that displays the smoothed RMS level.

desc:Simple Amplifier with VU Meter
slider1:0<-60,24,0.1>Gain (dB)

@init
rms = 0;
coeff = 0.999; // RMS smoothing
gain = 10^(slider1/20);

@slider
gain = 10^(slider1/20);

@sample
// Apply the gain
spl0 *= gain;
spl1 *= gain;
// Compute RMS (mean value of the 2 channels)
mono = 0.5*(spl0 + spl1);
rms = sqrt((coeff * rms * rms) + ((1 - coeff) * mono * mono));

@gfx 300 200 // UI part
gfx_r = 0.1; gfx_g = 0.1; gfx_b = 0.1;
gfx_rect(0, 0, gfx_w, gfx_h);

// Convert to dB
rms_db = 20*log(rms)/log(10);
rms_db < -60 ? rms_db = -60;

// Normalisation for the display
meter = (rms_db + 60) / 60;
meter > 1 ? meter = 1;

// Green color
gfx_r = 0;
gfx_g = 1;
gfx_b = 0;

// Horizontal bar
gfx_rect(10, gfx_h/2 - 10, meter*(gfx_w-20), 20);

// Text
gfx_r = gfx_g = gfx_b = 1;
gfx_x = 10;
gfx_y = gfx_h/2 + 20;
gfx_printf("Level: %.1f dB", rms_db);

The global structure of the code:

  • Apply the gain
  • Compute a smoothed RMS value
  • Convert to dB
  • Display a horizontal bar
  • Display a numerical value

Here is a view of the result:

Screenshot of the simple example with a VU meter

An amplifier using the UI lib from jsfx-ui-lib

In this example, we will use a JSFX UI library to produce a better representation of the amplifier’s elements.

First, clone the https://github.com/geraintluff/jsfx-ui-lib repository and copy the file ui-lib.jsfx-inc into the directory where your JSFX files are saved.

desc:Simple Amplifier with UI Lib VU
import ui-lib.jsfx-inc
slider1:0<-60,24,0.1>Gain (dB)

@init
freemem = ui_setup(0);
rms = 0;
coeff = 0.999;
gfx_rate = 30; // 30 FPS

@slider
gain = 10^(slider1/20);

@sample
spl0 *= gain;
spl1 *= gain;
mono = 0.5*(spl0 + spl1);
rms = sqrt(coeff*rms*rms + (1-coeff)*mono*mono);

// ---- RMS computation ----
level_db = 20*log(rms)/log(10);
level_db < -60 ? level_db = -60;

@gfx 300 200
ui_start("main");

// ---- Gain ----
control_start("main","default");
control_dial(slider1, 0, 1, 0);
cut = (level_db + 100) / 200 * (ui_right() - ui_left()) + ui_left();

// ---- VU ----
ui_split_bottom(50);
ui_color(0, 0, 0);
ui_text("RMS Level: ");
gfx_printf("%d", level_db);
ui_split_bottom(10);
uix_setgfxcolorrgba(0, 255, 0, 1);
gfx_rect(ui_left(), ui_top(), ui_right() - ui_left(), ui_bottom() - ui_top());
uix_setgfxcolorrgba(255, 0, 0, 1);
gfx_rect(ui_left(), ui_top(), cut, ui_bottom() - ui_top());
ui_pop();

The global structure of the example:

  • Import and setup: The UI library is imported and its memory allocated (ui_setup) in @init;
  • UI controls: control_dial creates a themed dial with a label, integrated into the library;
  • Integrated VU meter: a meter bar is drawn with gfx_rect, with the RMS level normalized for display;
  • UI structure: ui_start(“main”) prepares the interface for each frame; ui_split_bottom / ui_pop organize the vertical space.

Here is a view of the result:

Screenshot of the simple example with JSFX graphic elements

A simple synthesizer

Now, let’s produce some sound, using MIDI for that.

The core of this example will be the ADSR envelope generator ([10]).

desc:Simple MIDI Synth (Mono Sine)
// Parameters
slider1:0.01<0.001,2,0.001>Attack (s)
slider2:0.2<0.001,2,0.001>Decay (s)
slider3:0.8<0,1,0.01>Sustain
slider4:0.5<0.001,3,0.001>Release (s)
slider5:0.5<0,1,0.01>Volume

@init
phase = 0;
note_on = 0;
env = 0;
state = 0; // 0=idle,1=attack,2=decay,3=sustain,4=release

@slider
// Compute the increment / decrement for each states
attack_inc = 1/(slider1*srate);
decay_dec = (1-slider3)/(slider2*srate);
release_dec = slider3/(slider4*srate);

@block
while (
  midirecv(offset, msg1, msg23) ? (
    status = msg1 & 240;
    note = msg23 & 127;
    vel = (msg23/256)|0;
    // Note On
    status == 144 && vel > 0 ? (
      freq = 440 * 2^((note-69)/12);
      phase_inc = 2*$pi*freq/srate;
      note_on = 1;
      state = 1;
    );
    // Note Off
    (status == 128) || (status == 144 && vel == 0) ? (
      state = 4;
    );
  );
);

@sample
// ADSR Envelope [10]
state == 1 ? ( // Attack
  env += attack_inc;
  env >= 1 ? (
    env = 1;
    state = 2;
  );
);

state == 2 ? ( // Decay
  env -= decay_dec;
  env <= slider3 ? (
    env = slider3;
    state = 3;
  );
);

state == 3 ? ( // Sustain
  env = slider3;
);

state == 4 ? ( // Release
  env -= release_dec;
  env <= 0 ? (
    env = 0;
    state = 0;
  );
);

// Sine oscillator
sample = sin(phase) * env * slider5;
phase += phase_inc;
phase > 2*$pi ? phase -= 2*$pi;

// Stereo output
spl0 = sample;
spl1 = sample;

Global structure of the example:

  • Receives MIDI via @block;
  • Converts MIDI note to frequency (A440 standard);
  • Generates a sine wave;
  • Applies an ADSR envelope;
  • Outputs in stereo.

Here is a view of the result:

Screenshot of the synthesizer example

Comparison with CLAP / VST3

JSFX + YSFX

Advantages of JSFX:

  • No compilation required;
  • Instant reloading;
  • Fast learning curve;
  • Ideal for DSP prototyping;
  • Portable between systems via YSFX.

Limitations:

  • Less performant than native C++ for heavy processing;
  • Less suitable for “industrial” distribution;
  • Simpler API, therefore less low-level control.

CLAP / VST3 in C/C++

Advantages:

  • Maximum performance;
  • Fine-grained control over the architecture;
  • Deep integration with the Linux audio ecosystem;
  • Standardized distribution.

Limitations:

  • Requires a complete toolchain;
  • ABI management/compilation;
  • Longer development cycle.

Conclusion

A functional audio effect can be written in just a few lines, a simple graphical interface added, and the script then loaded as a CLAP / VST3 plugin on Fedora Linux. This requires no compilation, no complex SDK, no cumbersome toolchain.

JSFX scripts don’t replace native C++ development when it comes to producing optimized, widely distributable plugins. However, they offer an exceptional environment for experimentation, learning signal processing, and rapid prototyping.

Thanks to YSFX, JSFX scripts now integrate seamlessly into the Linux audio ecosystem, alongside Carla, Ardour, and a PipeWire-based audio system.

For developers and curious musicians alike, JSFX provides a simple and immediate entry point into creating real-time audio effects on Fedora Linux.

Available plugins

ysfx-chokehold

A free collection of JS (JesuSonic) plugins for Reaper.

Code available at: https://github.com/chkhld/jsfx

To install this set of YSFX plugins:

$ dnf install ysfx-chokehold

YSFX plugins will be available at /usr/share/ysfx-chokehold.

ysfx-geraintluff

Collection of JSFX effects.

Code available at: https://github.com/geraintluff/jsfx

To install this set of YSFX plugins:

$ dnf install ysfx-geraintluff

YSFX plugins will be available at /usr/share/ysfx-geraintluff.

ysfx-jesusonic

Some JSFX effects from Cockos.

Code available at: https://www.cockos.com/jsfx

To install this set of YSFX plugins:

$ dnf install ysfx-jesusonic

YSFX plugins will be available at /usr/share/ysfx-jesusonic.

ysfx-joepvanlier

A bundle of JSFX and scripts for REAPER.

Code available at: https://github.com/JoepVanlier/JSFX

To install this set of YSFX plugins:

$ dnf install ysfx-joepvanlier

YSFX plugins will be available at /usr/share/ysfx-joepvanlier.

ysfx-lms

LMS Plugin Suite – Open source JSFX audio plugins

Code available at: https://github.com/LMSBAND/LMS

To install this set of YSFX plugins:

$ dnf install ysfx-lms

YSFX plugins will be available at /usr/share/ysfx-lms.

ysfx-reateam

Community-maintained collection of JS effects for REAPER

Code available at: https://github.com/ReaTeam/JSFX

To install this set of YSFX plugins:

$ dnf install ysfx-reateam

YSFX plugins will be available at /usr/share/ysfx-reateam.

ysfx-rejj

Reaper JSFX Plugins.

Code available at: https://github.com/Justin-Johnson/ReJJ

To install this set of YSFX plugins:

$ dnf install ysfx-rejj

YSFX plugins will be available at /usr/share/ysfx-rejj.

ysfx-sonic-anomaly

Sonic Anomaly JSFX scripts for Reaper

Code available at: https://github.com/Sonic-Anomaly/Sonic-Anomaly-JSFX

To install this set of YSFX plugins:

$ dnf install ysfx-sonic-anomaly

YSFX plugins will be available at /usr/share/ysfx-sonic-anomaly.

ysfx-tilr

TiagoLR collection of JSFX effects

Code available at: https://github.com/tiagolr/tilr_jsfx

To install this set of YSFX plugins:

$ dnf install ysfx-tilr

YSFX plugins will be available at /usr/share/ysfx-tilr.

ysfx-tukan-studio

JSFX Plugins for Reaper

Code available at: https://github.com/TukanStudios/TUKAN_STUDIOS_PLUGINS

To install this set of YSFX plugins:

$ dnf install ysfx-tukan-studio

YSFX plugins will be available at /usr/share/ysfx-tukan-studio.

Webography

[1] – https://www.cockos.com/jsfx

[2] – https://github.com/geraintluff/jsfx

[3] – https://github.com/JoepVanlier/ysfx

[4] – https://www.reaper.fm/sdk/js/js.php

[5] – https://audinux.github.io

[6] – https://copr.fedorainfracloud.org/coprs/ycollet/audinux

[7] – https://www.reaper.fm/index.php

[8] – https://github.com/falkTX/Carla

[9] – https://ardour.org

[10] – https://en.wikipedia.org/wiki/Envelope_(music)

[11] – https://jackaudio.org

⚙️ PHP version 8.4.19 and 8.5.4

Posted by Remi Collet on 2026-03-13 05:32:00 UTC

RPMs of PHP version 8.5.4 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.19 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

And soon in the official updates:

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.26 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

 

⚙️ PHP version 8.4.18 and 8.5.3

Posted by Remi Collet on 2026-02-13 05:42:00 UTC

RPMs of PHP version 8.5.3 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.18 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

And soon in the official updates:

⚠️ To be noticed :

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.26 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Software Collections (php83 / php84 / php85)

 

Customize Fedora Linux Workstation with Extension Manager

Posted by Fedora Magazine on 2026-03-13 00:25:18 UTC

What are GNOME Shell extensions?

Imagine that Fedora Workstation is your desk, and GNOME Shell extensions are small accessories you add to make it feel more personal. It’s like placing a pencil case on the right side, a lamp that helps you focus, or a small cabinet to keep your things from getting scattered. It’s the same desk—GNOME stays clean and minimal—but a few additions can make your routine more comfortable.

Extensions work on the GNOME interface: the top panel, the way you open applications, how notifications appear, and small details that usually stay hidden. These simple changes can be enough to make your Fedora Workstation feel different. With just one extension, you can make Fedora feel more “you.”

But like any accessories, choose only what truly helps—don’t install everything. Too many extensions can clutter your desktop or make things feel unstable. The goal isn’t to chase excitement, but to find a few small add-ons that better fit the way you work in Fedora Workstation.

Note: The user will need to enable Flathub/Third Party Repos in order to get Extension Manager.

Why use Extension Manager?

Once you see extensions as small “accessories” for GNOME, a question comes up fast: how do you install them without the hassle? This is where Extension Manager helps.

Instead of opening many browser tabs, you can do everything in one place. You can browse extensions. You can search for what you need. You can also read a short description before installing. As a result, the whole process feels calmer and more familiar.

More importantly, Extension Manager makes it easier to experiment safely. For example, you can try one extension to make the top panel more useful. If it doesn’t feel right, you can simply turn it off. Or you can uninstall it in seconds. That way, you stay in control.

Also, you’re not “modding” your whole system. You’re only adding small features. And if you change your mind, you can always go back to GNOME’s clean default look.

In short, Extension Manager is like a small drawer on your desk. It keeps your extensions in one spot. So they’re easy to find, easy to try, and easy to tidy up again.

Install Extension Manager

Let’s move to the easiest part: installing Extension Manager with just a few clicks. Open the Software app on Fedora Workstation, then search for Extension Manager using the search bar. Select the app and click Install. That’s it.
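
If you prefer the terminal, the same app can be installed as a Flatpak with one command (this assumes the Flathub remote is already enabled, as noted above):

```shell
# Install Extension Manager from Flathub via the command line
flatpak install -y flathub com.mattjakeman.ExtensionManager
```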

Once the installation is complete, open it from the app menu—look for Extension Manager. Now you’re ready to customize. Start slowly: try one extension first, then see if it fits your daily routine.

Find and Install an Extension

After you open Extension Manager, it can feel like opening an “accessories shop” for your Fedora Workstation. There are many options, from small tweaks to extensions that can change how you work.

Start with the search bar. Think about what you most often need in your day-to-day routine. For example, you might want quicker access to apps, tray icons for indicators, or a more informative top panel. When you find an extension that looks interesting, open its page for a moment. Read the short description, look at the screenshots, and then ask yourself whether it will really help your workflow.

If you’re sure, just click Install. In a few seconds, it will be installed, and you’ll notice the change right away. However, if it doesn’t feel right, don’t hesitate to uninstall it. At this stage, you’re simply trying things out—like picking the accessories that best fit your desk.

Enable/disable and adjust settings

After you install a few extensions, you don’t have to stick with all of them. Sometimes an extension is useful, but you don’t need it all the time. That’s the nice thing about Extension Manager: you can enable or disable extensions at any time, without any drama.

Think of it like accessories on your desk. Some days you need a desk lamp to help you focus. On other days, you want your desk to stay clean and simple. Extensions work the same way. You can turn one on when you need it, and turn it off when you’re done.

If an extension has options, you’ll usually see a Settings or Preferences button. From there, you can tweak small details to match your style—icon placement, button behaviour, panel appearance, and more. This is what makes extensions feel personal. You’re not just installing something and forgetting it; you’re shaping it around your workflow.

And if one day your Fedora starts to feel too crowded, don’t panic. Just open the list of installed extensions and disable the ones you don’t need. Take it slow. The best customization isn’t about how many extensions you have, but how well they fit your daily activities.

Keep it safe: a few practical tips

At this point, you might start thinking, “Wow, there are so many things I can change.” And that’s true. However, if you want Fedora Workstation to stay light and comfortable, there are a few simple habits worth keeping in mind.

First, install extensions the same way you choose tools: only when you truly need them. If you stop using an extension after a few days, it’s better to disable it or remove it. A comfortable desktop isn’t the most crowded one—it’s the one with fewer distractions.

Second, try extensions one by one. If you install many at once, it’s hard to tell which one causes a problem. On the other hand, if you take it slowly, you can quickly feel what fits and what doesn’t.

Finally, remember that GNOME keeps evolving. Sometimes after a major update, an extension may not be ready yet. If something feels odd after an update, the safest move is simple: open Extension Manager and disable the extension you suspect. Once things are back to normal, you can wait for an update or choose an alternative.

In the end, Extension Manager isn’t a ticket to customize without limits. It’s more like a clean toolbox. If you use it with care and focus on what you really need, customization can stay enjoyable—without losing the clean, stable feel of Fedora Workstation.

Wrapping up: share your favorite extensions

Now you know how to customize your Fedora Workstation with Extension Manager. You’ve learned how to install the app, try a few extensions, and adjust their settings. And here’s the fun part: everyone ends up with a different mix of extensions, because we all have different needs and work styles.

If you have a favorite extension, share it. Which one do you rely on most, and what do you use it for? Maybe it helps you stay focused during presentations. Or maybe it makes the top panel more informative, brings back tray icons, or simply speeds up your workflow. Tell us why you like it, so others can picture the benefit.

Who knows—your list might inspire someone else. And you might also discover a new extension that fits your daily routine even better.

Friday Links 26-09

Posted by Christof Damian on 2026-03-12 23:00:00 UTC

The interview with Eugen Rochko, the creator of Mastodon, by The Europeans is great, and so are their other topics. Both urbanism videos are a nice watch for today.

Engineering

Internal Platform Technical Product Manager (TPM) patterns: pros and cons - some scenarios, mostly useful for bigger teams.

The syslog-ng Insider 2026-03: 4.11.0 release; OpenSearch; ElasticSearch

Posted by Peter Czanik on 2026-03-11 12:01:47 UTC

The March syslog-ng newsletter is now on-line:

  • Version 4.11.0 of syslog-ng is now available
  • Using OpenSearch data streams in syslog-ng
  • Changes in the syslog-ng Elasticsearch destination

It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2026-03-4-11-0-release-opensearch-elasticsearch

syslog-ng logo

🎲 PHP version 8.4.19RC1 and 8.5.4RC1

Posted by Remi Collet on 2026-02-27 07:12:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.

RPMs of PHP version 8.5.4RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.4.19RC1 are available

  • as base packages in the remi-modular-test for Fedora 42-44 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ PHP version 8.3 is now in security-only mode, so no more RCs will be released.

ℹ️ Installation: follow the wizard instructions.

ℹ️ Announcements:

Parallel installation of version 8.5 as Software Collection:

yum --enablerepo=remi-test install php85

Parallel installation of version 8.4 as Software Collection:

yum --enablerepo=remi-test install php84

Update of system version 8.5:

dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.4:

dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*

ℹ️ Notice:

  • version 8.5.4RC1 is in Fedora rawhide for QA
  • EL-10 packages are built using RHEL-10.1 and EPEL-10.1
  • EL-9 packages are built using RHEL-9.7 and EPEL-9
  • EL-8 packages are built using RHEL-8.10 and EPEL-8
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.9 on x86_64 and aarch64
  • intl extension uses libicu 74.2
  • RC version is usually the same as the final version (no change accepted after RC, exception for security fix).
  • versions 8.4.19 and 8.5.4 are planned for March 12th, in 2 weeks.

Software Collections (php84, php85)

Base packages (php)

How to rebase to Fedora Silverblue 44 Beta

Posted by Fedora Magazine on 2026-03-10 19:29:19 UTC

Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. This article provides the steps to rebase to the newly released Fedora Linux 44 Beta, and how to revert if anything unforeseen happens.

NOTE: Before attempting an upgrade to the Fedora Linux 44 Beta, apply any pending upgrades to your current system.

Updating using the terminal

Because Fedora Linux 44 Beta is not available in GNOME Software, the whole process must be done through a terminal.

First, check if the 44 branch is available, which it should be by now:

$ ostree remote refs fedora

You should see the following line in the output:

fedora:fedora/44/x86_64/silverblue

If you want to pin the current deployment (this deployment will stay as an option in GRUB until you remove it), you can do it by running:

# 0 is entry position in rpm-ostree status
$ sudo ostree admin pin 0

To remove the pinned deployment, use the following command ("2" corresponds to the entry position in the output of rpm-ostree status):

$ sudo ostree admin pin --unpin 2
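
The entry position comes from rpm-ostree status; a minimal sketch of finding the index of the booted deployment (the status text below is an abbreviated, illustrative sample — on a real system, pipe the actual command output instead):

```shell
# Illustrative, abbreviated `rpm-ostree status` output;
# the leading "●" marks the currently booted deployment.
status='● fedora:fedora/43/x86_64/silverblue
  fedora:fedora/42/x86_64/silverblue'
# Line number of the booted entry, converted to the 0-based
# index expected by `ostree admin pin`
line=$(printf '%s\n' "$status" | grep -n '●' | cut -d: -f1)
echo $(( line - 1 ))
```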

Next, rebase your system to the Fedora 44 branch.

$ rpm-ostree rebase fedora:fedora/44/x86_64/silverblue

The final thing to do is restart your computer and boot to Fedora Silverblue 44 Beta.

How to revert

If anything bad happens — for instance, if you can’t boot to Fedora Silverblue 44 Beta at all — it’s easy to go back. Pick the previous entry in the GRUB boot menu (you need to press ESC during boot sequence to see the GRUB menu in newer versions of Fedora Silverblue), and your system will start in its previous state. To make this change permanent, use the following command:

$ rpm-ostree rollback

That’s it. Now you know how to rebase to Fedora Silverblue 44 Beta and fall back. So why not do it today?

Known issues

FAQ

Because similar questions appear in the comments of each blog post about rebasing to a newer version of Silverblue, I will try to answer them in this section.

Question: Can I skip versions during rebase of Fedora Linux? For example from Fedora Silverblue 42 to Fedora Silverblue 44?

Answer: Although it may sometimes be possible to skip versions during a rebase, it is not recommended. You should always update one version at a time (42->43, for example) to avoid unnecessary errors.

Question: I have rpm-fusion layered and I got errors during rebase. How should I do the rebase?

Answer: If you have rpm-fusion layered on your Silverblue installation, you should do the following before rebase:

rpm-ostree update --uninstall rpmfusion-free-release --uninstall rpmfusion-nonfree-release --install rpmfusion-free-release --install rpmfusion-nonfree-release

After doing this you can follow the guide in this article.

Question: Could this guide be used for other ostree editions (Fedora Atomic Desktops) as well like Kinoite, Sericea (Sway Atomic), Onyx (Budgie Atomic),…?

Answer: Yes, you can follow the Updating using the terminal part of this guide for every ostree edition of Fedora. Just use the corresponding branch. For example, for Kinoite use fedora:fedora/44/x86_64/kinoite.

RISC-V is sloooow

Posted by Marcin Juszkiewicz on 2026-03-10 17:53:00 UTC

About 3 months ago I started working with RISC-V port of Fedora Linux. Many things happened during that time.

Triaging

I went through the Fedora RISC-V tracker entries, triaged most of them (at the moment, 17 entries are left in NEW), and tried to handle whatever was possible.

Fedora packaging

My usual way of working involves fetching the sources of a Fedora package (fedpkg clone -a) and then building it (fedpkg mockbuild -r fedora-43-riscv64). After some time, I check whether it built, and if not, I go through the build logs to find out why.

The effect? At the moment, 86 pull requests sent for Fedora packages, from heavy ones like "llvm15" to simple ones like "iyfct" (a simple game). Most of them have been merged, and most of those got built for Fedora 43. We can then build on top of them, as we follow the 'f43-updates' tag on Fedora koji.
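
When a mockbuild fails, the first compiler error in the log is usually the place to start. A minimal sketch of pulling it out (the log directory and its contents are simulated here so the snippet is self-contained; on a real build, point grep at the build.log in the mock results directory):

```shell
# Simulate a failed-build log with illustrative contents
logdir=$(mktemp -d)
printf 'checking headers...\nfoo.c:12:5: error: unknown type name\n' \
  > "$logdir/build.log"
# Show the first error line, as one would on a real build.log
grep -m1 'error:' "$logdir/build.log"
```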

Slowness

Work on packages brings the hard, sometimes controversial, topic: speed. Or rather lack of it.

You see, the RISC-V hardware at the moment is slow. Which results in terrible build times — look at details of the binutils 2.45.1-4.fc43 package I took from koji (Fedora and RISC-V Fedora):

Architecture   Cores   Memory   Build time
aarch64        12      46 GB    36 minutes
i686           8       29 GB    25 minutes
ppc64le        10      37 GB    46 minutes
riscv64        8       16 GB    143 minutes
s390x          3       45 GB    37 minutes
x86_64         8       29 GB    29 minutes
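
From the table, the riscv64 builder is roughly five times slower than x86_64; a quick shell check of the ratio (build times taken from the table above):

```shell
riscv=143   # riscv64 build time in minutes
x86=29      # x86_64 build time in minutes
# Ratio scaled by 10 using integer arithmetic,
# so 49 means roughly 4.9x slower
echo $(( riscv * 10 / x86 ))
```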

That was a StarFive VisionFive 2 board; while it has other strengths (such as upstreamed drivers), it is not the fastest one available. I asked around, and one of the porters did a build on a Milk-V Megrez — it took 58 minutes.

Also worth mentioning: the current build of the RISC-V Fedora port is done with LTO disabled, to cut memory usage and build times.

RISC-V builders have four or eight cores with 8, 16 or 32 GB of RAM (depending on the board). Those cores are usually compared to Arm Cortex-A55 cores, the lowest-end CPU cores in today's Arm chips.

The UltraRISC UR-DP1000 SoC, present on the Milk-V Titan motherboard, should improve the situation a bit (and can have 64 GB of RAM). The same goes for SpacemiT K3-based systems (but only 32 GB of RAM). Both will be an improvement, but not the final solution.

Hardware needs for Fedora inclusion

We need hardware capable of building the binutils package shown above in under one hour, with LTO enabled system-wide and so on, to be on par with the other architectures. This is the speed-related requirement.

There is no point in going for inclusion with slow builders, as this will make package maintainers complain. You see, in Fedora, build results are released into repositories only when all architectures finish, and we had maintainers complaining about the lack of speed of AArch64 builders in the past. Some developers may start excluding the RISC-V architecture from their packages so they do not have to wait.

And any future builders need to be rackable and manageable like any other boring server (put it in a rack, connect the cables, install it, and do not touch it any more), because no one will go into a data centre to manually reboot an SBC-based builder.

Without systems fulfilling both requirements, we cannot even plan for the RISC-V 64-bit architecture to become one of the official, primary architectures in Fedora Linux.

I still use QEMU for local testing

Such long build times keep QEMU useful to me. My AArch64 desktop has 80 cores, so with QEMU userspace riscv64 emulation, I can build the "llvm15" package in about 4 hours. Compare that to 10.5 hours on a Banana Pi BPI-F3 builder (it may be quicker on a P550-based one).

busy btop
btop shows 80 cores being busy

And the LLVM packages make real use of both the available cores and memory. I wonder how fast it would go on the 192/384 cores of an Ampere One-based system.

Still, I use QEMU for local builds/testing only. Fedora, like several other distributions, does native builds only.

Future plans

We plan to start building Fedora Linux 44. If things go well, we will use the same kernel image on all of our builders (the current ones use a mix of kernel versions). LTO will still be disabled.

When it comes to the lack of speed… there are plans to bring in new, faster builders, and probably to assign some heavier packages to them.

Searching Logseq by Concept, Not Keystrokes

Posted by Tim Waugh on 2026-03-10 13:46:21 UTC

Logseq is great for dumping daily notes, but finding them again later can be a pain. If you’re looking for notes on a “connection timeout” but originally wrote “increasing the socket keepalive”, a standard keyword search will give you nothing. You end up having to guess the exact phrasing your past self used.

I wanted a way to search my graph by concept rather than exact text matches. That’s why I put together the Logseq Semantic Search plugin.

The upcoming database version of Logseq actually has semantic search built-in. But since I’m still using the standard Markdown version for my day-to-day workflow, I wanted to get that capability right now.

Indexing the hierarchy

The plugin uses text embeddings to find conceptually similar blocks. But just embedding individual bullet points doesn’t work well for outliners. A block that just says “needs refactoring” is useless on its own.

If you’ve seen my Logsqueak project, you’ll recognise the indexing approach here. Every block is indexed along with its complete structural lineage—the page name, properties, and the full chain of parent blocks above it.

Because it captures this nested context, the search index knows that a vague bullet point nested under billing-service → Database Connection Pool is actually about your Postgres setup. Searching for “optimizing billing db” will pull that specific child block right to the top of the results.
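
The idea can be sketched in a few lines of shell: the text that gets embedded is the block joined with its ancestors (the names and the `>` separator here are illustrative, not the plugin's actual format):

```shell
page='billing-service'
parent='Database Connection Pool'
block='needs refactoring'
# Join page > parent > block into one string to embed
printf '%s > %s > %s\n' "$page" "$parent" "$block"
```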

Running it locally

Since a Logseq graph is essentially a private brain dump, I wanted this to run entirely locally. By default, the plugin connects to Ollama using the lightweight nomic-embed-text model. It’s smart enough to only re-embed blocks that have changed, so it’s relatively fast even without a GPU. (If you prefer, you can also point it at any OpenAI-compatible endpoint in the settings).

I run Fedora Workstation and prefer to keep my host system clean, so I run Ollama via Podman. It’s incredibly straightforward to set up:

# Start the Ollama container, exposing the default port
# and persisting data
podman run -d \
  --name ollama \
  -p 11434:11434 \
  -e "OLLAMA_ORIGINS=*" \
  -v ollama:/root/.ollama \
  docker.io/ollama/ollama

# Pull the lightweight embedding model
podman exec ollama ollama pull nomic-embed-text

Because we mapped port 11434, the Semantic Search plugin can talk to the container seamlessly at http://localhost:11434 right out of the box. No dependency issues, just a private embedding server ready to run in the background.

Usage

You can grab the plugin directly from the Logseq Marketplace. Once it’s installed, hit Alt+K (or click the toolbar icon) to open the search modal. Try typing a natural language query—like “notes about debugging pipeline failures”—and it will surface the relevant blocks even if you didn’t use the word “debugging.”

The source code is up on GitHub if you want to poke around or contribute.

The post Searching Logseq by Concept, Not Keystrokes appeared first on PRINT HEAD.

New toy: Installing FreeBSD on the HP Z2 Mini

Posted by Peter Czanik on 2026-03-10 12:13:13 UTC

Finally, I also installed FreeBSD on my new AI focused mini workstation from HP. I even managed to install GNOME on the machine with minimal effort. However, I also ran into many problems.

So far it’s a mixed experience. Installation went smoothly, FreeBSD 15.0 was up and running in no time. However, FreeBSD is not found by any of the Linux boot managers I use (different flavors of GRUB), and it’s not in the EFI boot menu either. The only way I could boot FreeBSD was bringing up the EFI boot menu, choosing boot from file and loading EFI/freebsd/loader.efi

Once FreeBSD boots, the machine is lightning fast, one of the fastest I have ever used, in a case the size of a Lord of the Rings book. Still, it stays silent while compiling software from FreeBSD ports.

I do not plan to use this box as a FreeBSD desktop, but of course I was curious how much FreeBSD desktop support evolved since I last tried it. I found a nice article on the FreeBSD Foundation website, describing how to install a GUI on FreeBSD using the new desktop-installer tool. It asked tons of questions, did some magic, and after a while I had GNOME up and running.

The good:

  • no manual package installation or configuration editing necessary
  • the exact same GNOME look and feel as on all Linux distributions I tested (except for Ubuntu)
  • sound works, using the built in speaker

The bad:

  • no accelerated graphics at all
  • 3D games start, play music, but no graphics
  • playing YouTube in Firefox works, both graphics and sound, but low quality
  • the screensaver starts automatically, but cannot be unlocked (workaround: disable the screensaver)

The same boring GNOME as everywhere else :-)

I might try to debug some of these issues, but most likely I'll just reinstall FreeBSD and keep using it in text-only mode. As far as I can see, there is no in-hardware AI acceleration available on FreeBSD. However, with 32 CPU cores, a fast SSD, and 128 GB of RAM, this is an ideal box for running complex test environments in FreeBSD jails. I love Bastille and plan to install it once I have cleaned up the machine after the GNOME experiment.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

Announcing Fedora Linux 44 Beta

Posted by Fedora Magazine on 2026-03-10 08:00:00 UTC

On Tuesday, 10 March 2026, it is our pleasure to announce the availability of Fedora Linux 44 Beta! As with every beta release, this is your opportunity to contribute by testing out the upcoming Fedora Linux 44 Beta release. Testing the beta release is a vital way you can contribute to the Fedora Project. Your testing is invaluable feedback that helps us refine what the final F44 experience will be for all users.

We hope you enjoy this latest beta version of Fedora!

How to get the Fedora Linux 44 Beta release

You can download Fedora Linux 44 Beta, or our pre-release edition versions, from any of the following places:

The Fedora CoreOS “next” stream rebases to Fedora beta content on the same day as the beta release. To test Fedora Linux 44-based CoreOS, try the Fedora CoreOS “next” stream today.

You can also update an existing system to the beta using DNF system-upgrade.
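
The system-upgrade flow itself is short; a sketch of the usual steps (these are the standard dnf system-upgrade commands, but check the current Fedora Quick Docs before running, as details can change between releases):

```shell
# Bring the current release fully up to date first
sudo dnf upgrade --refresh
# Download the Fedora Linux 44 Beta packages
sudo dnf system-upgrade download --releasever=44
# Reboot into the offline upgrade
sudo dnf system-upgrade reboot
```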

The Fedora Linux 44 Beta release content may also be available for Fedora Spins and Labs.

Fedora Linux 44 Beta highlights

Like every Beta release, the Fedora Linux 44 Beta release is packed with changes. The following are highlights from the full set of changes for F44. They are ready for you to test drive in the Fedora Linux 44 Beta.

Installer and desktop Improvements

Goodbye Anaconda Created Default Network Profiles: This change impacts how Anaconda populates network device profiles. Only those devices configured during installation (by boot options, kickstart or interactively in UI) become part of the final system install. This behavior change addresses some long standing issues caused by populating network profiles for all network devices. These made it difficult to correctly reconfigure devices post-install.

Unified KDE Out of the Box Experience: This change introduces the post-install Plasma Setup application for all Fedora KDE variants. In the variants making use of this new setup application, the Anaconda configuration will be adjusted to disable redundant configuration stages that duplicate the functionality exposed in the setup application.

KDE Plasma Login Manager: This change introduces the Plasma Login Manager (PLM) as the default login manager for Fedora KDE variants, replacing SDDM.

Reworked Games Lab: This change modernizes the Games Lab deliverable by leveraging the latest technologies. This offers a high quality gaming and game development experience. It includes a change from Xfce to KDE Plasma to take advantage of the latest and greatest Wayland stack for gaming.

Budgie 10.10: Budgie 10.10 is the latest release of Budgie Desktop. Budgie 10.10 migrates from X11 to Wayland. This ensures a viable long-term user experience for Fedora Budgie users and lays groundwork for the next major Budgie release.

LiveCD Improvements

Automatic DTB selection for aarch64 EFI systems: This change intends to make the aarch64 Fedora Live ISO images work out of the box on Windows on ARM (WoA) laptops. This will automatically select the right DTB at boot.

Modernize Live Media: This change modernizes the live media experience by switching to the “new” live environment setup scripts provided by livesys-scripts and leveraging new functionality in dracut to support automatically enabling persistent overlays when flashed to USB sticks.

System Enhancements

GNU Toolchain Update: The updates to the GNU Toolchain ensure Fedora stays current with the latest features, improvements, and bug and security fixes from the upstream gcc, glibc, binutils, and gdb projects. They guarantee a working system compiler, assembler, static and dynamic linker, core language runtimes, and debugger.

Reproducible Package Builds: Over the last few releases, we changed our build infrastructure to make package builds reproducible. This is enough to reach 90%. The remaining issues need to be fixed in individual packages. With this change, all package builds are expected to be reproducible in the F44 final release. Bugs will be filed against packages when an irreproducibility is detected. The goal is to have no fewer than 99% of package builds reproducible.

Packit as a dist-git CI: This change continues down the path of modernizing the Fedora CI experience by moving forward with the final phase of the plan to integrate Packit as the default CI for Fedora dist-git.

Remove Python Mock Usage: python-mock was deprecated with Fedora 34. However, it is still in use in many packages. We plan to go through the remaining usages and clean them up, with the goal of retiring python-mock from Fedora.

Adoption of new R Packaging Guidelines: This change introduces new rpm macros to help standardize and automate common R language packaging tasks resulting in a simplification of the rpm spec files.

Introduction of Nix Developer Tool: This change adds the nix package manager developer tool to Fedora.

Hardlink identical files in packages by default: With this change, all Fedora packages will automatically hardlink identical files under /usr by default as a post-install action. The mechanism introduced in this change is designed specifically to address the race conditions in reproducibility validation found with traditional hardlinking approaches.

Fedora Linux 44 Beta upgrades and removals

Golang 1.26: Fedora users will receive the most current and recent Go release. Being close to upstream allows us to avoid security issues and provide more updated features. Consequently, Fedora will provide a reliable development platform for the Go language and projects written in it.

MariaDB 11.8 as Distribution Default Version: The distribution default for MariaDB packaging will switch to 11.8. Multiple versions of the MariaDB packages will continue to be available. This change only impacts which of the versioned packages presents itself as the unversioned “default”.

IBus 1.5.34: Fedora users will benefit from better support of Wayland and Emoji features.

Django 6.x: Fedora users can make use of the latest Django version; users who rely on Django add-ons that are not yet ready for 6.0 should be able to switch to python3-django5.

TagLib 2: This change puts Fedora on the latest supported version, and it will benefit from improvements in future minor releases with a simple update.

Helm 4: Helm 4 has been released upstream with intentional backwards-incompatible changes relative to Helm 3. To ensure a smooth transition for Fedora, this Change introduces Helm 4 as the default helm package, while providing a parallel-installable helm3 package for users and tooling that still rely on Helm 3.

Ansible 13: Update from Ansible 11 and Ansible Core 2.18 to Ansible 13 and Ansible Core 2.20. This includes major robustness and security fixes to the templating engine which might break existing playbooks that had incorrect behavior. This was silently ignored in previous releases.

TeXLive 2025: With this change, we update to the latest version of TeXLive (2025). We also move to a modularized packaging system, which splits the “texlive” SPEC into a set of collection and scheme packages. This reflects the categorization that TeXLive upstream defines. Each collection package will package the immediate component dependencies as subpackages.

Drop QEMU 32-bit Host Builds: Fedora will stop building QEMU on the i686 architecture. This change brings Fedora in line with the QEMU upstream project decision to deprecate support for 32-bit host builds. Upstream intends to start removing 32-bit host build support code in a future release and will assume 64-bit atomic ops in all builds.

Drop FUSE 2 libraries in Atomic Desktops: Remove FUSE 2 binaries and libraries from all Atomic Desktops

Drop compatibility for pkla polkit rules in Atomic Desktops: Remove support for deprecated pkla polkit rules from all Fedora Atomic Desktops

More information about Fedora Linux 44 Beta

Details and more information on the many great changes landing in Fedora Linux 44 are available on the Change Set page.


Editor’s Notes

  1. Previously, it was noted that Fedora CoreOS “next” stream releases a week after the beta. This was a publishing error. The Fedora CoreOS “next” stream releases on the same day as the beta release. The article was edited to clarify this error.

In the land of XML

Posted by Kushal Das on 2026-03-09 07:22:53 UTC

One of the major things at work is XML, due to all things identity. Yes, XML and SAML are very much alive. SWAMID is the identity federation for research and higher education in Sweden, and eduGAIN is the global identity interfederation connecting 80+ participant federations with over 10k identity and service providers. And these are based on SAML.

In the last few weeks I released two libraries in Rust, along with Python bindings for them using pyo3. uppsala is a zero-dependency XML library, and pyuppsala is its Python binding.

Features of uppsala/pyuppsala

  • XML 1.0 parsing with full well-formedness checking
  • Namespace-aware DOM with tree mutation (create, append, insert, remove, detach)
  • XPath 1.0 evaluation (all axes, functions, predicates)
  • XSD validation (structures + datatypes, 40+ built-in types, facets, complex types)
  • XSD regex pattern matching (Unicode categories, blocks, character class subtraction)
  • Imperative XML builder (XmlWriter) for constructing output without a DOM
  • Serialization with pretty-printing, compact output, and streaming to files
  • Automatic encoding detection for UTF-8 and UTF-16 (LE/BE)

Read the full documentation
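To make the feature list above concrete, here is a small sketch of what namespace-aware parsing, path-based lookup, and tree mutation look like in practice. It uses Python's stdlib xml.etree.ElementTree purely to illustrate the concepts; it is not pyuppsala's own API, which I won't guess at here.

```python
# Illustration of namespace-aware parsing, path lookup, and tree mutation,
# using Python's stdlib ElementTree (NOT pyuppsala's API).
import xml.etree.ElementTree as ET

doc = """<catalog xmlns:bk="http://example.org/book">
  <bk:book id="b1"><bk:title>Rust in Action</bk:title></bk:book>
  <bk:book id="b2"><bk:title>Programming Rust</bk:title></bk:book>
</catalog>"""

root = ET.fromstring(doc)
ns = {"bk": "http://example.org/book"}

# Namespace-qualified lookup of all titles.
titles = [t.text for t in root.findall(".//bk:title", ns)]
assert titles == ["Rust in Action", "Programming Rust"]

# Tree mutation: remove the first book, then check the result.
root.remove(root.find("bk:book", ns))
assert len(root.findall("bk:book", ns)) == 1
```

ElementTree only supports a limited subset of XPath; a library with full XPath 1.0 (all axes, functions, predicates) covers far more than these simple path expressions.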

bergshamra is a pure Rust XML Security library implementing the W3C XML Digital Signature (XML-DSig), XML Encryption (XML-Enc), and XML Canonicalization (C14N) specifications. It is built entirely on the RustCrypto ecosystem, with uppsala for XML parsing, and pybergshamra is its Python binding.

Features of bergshamra/pybergshamra

  • XML Digital Signatures — sign and verify (enveloped, enveloping, detached)
  • XML Encryption — encrypt and decrypt (element, content, key wrapping, key transport, multi-recipient)
  • XML Canonicalization — all 6 W3C C14N variants (inclusive/exclusive, with/without comments, 1.0/1.1) with document-subset filtering via XPath
  • X.509 certificate chain — validation with expiry, trust anchors, CRL revocation, chain building
  • Post-quantum signatures — ML-DSA (FIPS 204) and SLH-DSA (FIPS 205) with context strings
  • EdDSA — Ed25519 signatures (RFC 8032)
  • Key agreement — ECDH-ES (P-256/P-384/P-521), X25519, DH-ES (X9.42 finite-field)
  • Key derivation — ConcatKDF, HKDF (SHA-256/384/512), PBKDF2
  • RSA-OAEP — configurable digest (SHA-1/224/256/384/512), MGF1, and OAEPparams
  • HMAC truncation — HMACOutputLength with CVE-2009-0217 minimum length protection
  • SAML support — SAML v1.1 AssertionID attribute as default ID, cid: URI scheme for WS-Security MIME references
  • CipherReference — resolve encrypted content via URI with XPath and Base64 transforms
  • XPath — XPath, XPath Filter 2.0, XPointer for reference processing
  • XSLT — identity transform and minimal XSLT for document-subset operations
  • OPC Relationship Transform — for Office Open XML signatures (ECMA-376 Part 2)
  • Key formats — PEM, DER, PKCS#8 (plain and encrypted), PKCS#12, X.509 (PEM and DER), xmlsec keys.xml, raw symmetric keys
  • KeyInfo resolution — KeyName, X509Certificate (multi-cert chain with leaf detection), X509IssuerSerial, RSA/EC/DSA KeyValue, DEREncodedKeyValue, RetrievalMethod, EncryptedKey, KeyInfoReference
  • #![forbid(unsafe_code)] across every crate

Supported algorithms

  • Digest: SHA-1, SHA-224/256/384/512, SHA3-224/256/384/512, MD5†, RIPEMD-160†
  • Signature (RSA): RSA PKCS#1 v1.5 (SHA-1/224/256/384/512, MD5†, RIPEMD-160†), RSA-PSS (SHA-1/224/256/384/512, SHA3-224/256/384/512)
  • Signature (EC): ECDSA (P-256/P-384/P-521 × SHA-1/224/256/384/512, SHA3-224/256/384/512, RIPEMD-160†)
  • Signature (other): DSA (SHA-1, SHA-256), Ed25519, HMAC (SHA-1/224/256/384/512, MD5†, RIPEMD-160†)
  • Post-quantum: ML-DSA-44/65/87 (FIPS 204), SLH-DSA SHA2-128f/128s/192f/192s/256f/256s (FIPS 205)
  • Block cipher: AES-128/192/256-CBC, AES-128/192/256-GCM, 3DES-CBC
  • Key wrap: AES-KW-128/192/256 (RFC 3394), 3DES-KW (RFC 3217)
  • Key transport: RSA PKCS#1 v1.5, RSA-OAEP (SHA-1/224/256/384/512 digest, MGF1-SHA-1/224/256/384/512)
  • Key agreement: ECDH-ES (P-256/P-384/P-521), X25519, DH-ES (X9.42)
  • Key derivation: ConcatKDF, HKDF (SHA-256/384/512), PBKDF2
  • C14N: Inclusive 1.0/1.1, Exclusive 1.0, each ± comments
  • Transforms: Enveloped signature, Base64, XPath, XPath Filter 2.0, XSLT (identity), OPC Relationship
  • Key formats: PEM, DER, PKCS#8, PKCS#12, X.509, xmlsec keys.xml, raw HMAC/AES/3DES

† MD5 and RIPEMD-160 are behind the legacy-algorithms feature flag.

xmlsec test suite compatibility

Bergshamra is tested against the full xmlsec interoperability test suite (1157 test steps across DSig and Enc). These are the same tests used by the xmlsec1 C library, covering test vectors from the W3C, Merlin, Aleksey, IAIK, NIST, and Phaos interop suites.

  • Enc: 701 passed, 0 failed, 701 total (100% pass rate)
  • DSig: 447 passed, 9 failed, 456 total (98% pass rate)
  • Total: 1148 passed, 9 failed, 1157 total (99.2% pass rate)

The 9 DSig failures are GOST algorithm tests (GOST R 34.10-2001, GOST R 34.10-2012-256, GOST R 34.10-2012-512) which require special OS cryptographic libraries not available in the RustCrypto ecosystem.

These are the libraries; hopefully you will see the tools and services built on top of them in the coming months.

Silencing Alerts During OpenShift Cluster Upgrades with Prometheus and Alertmanager

Posted by Miroslav Vadkerti on 2026-03-09 00:00:00 UTC
How to detect OpenShift cluster upgrades, extend the detection window to cover the recovery period, and use Alertmanager inhibit rules to suppress noisy alerts during upgrades.
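The suppression approach in the summary above can be sketched with a minimal Alertmanager configuration. This is a hedged illustration, not the post's actual config: the alert name `ClusterUpgrading` and the matched labels are hypothetical, standing in for whatever upgrade-detection alert the setup defines.

```yaml
# Sketch: while an (assumed) upgrade-detection alert named ClusterUpgrading
# is firing, inhibit lower-severity alerts from the same cluster.
inhibit_rules:
  - source_matchers:
      - alertname = "ClusterUpgrading"
    target_matchers:
      - severity =~ "warning|info"
    # Only inhibit alerts that share the same cluster label value
    # as the firing upgrade alert.
    equal:
      - cluster
```

Extending the detection window to cover the post-upgrade recovery period then amounts to keeping the source alert firing for some time after the upgrade itself completes.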

You’d better start believing in supply chains because you’re in one (SCaLE 23x)

Posted by Ben Cotton on 2026-03-08 19:53:01 UTC

At SCaLE 23x, I debuted “You’d better start believing in supply chains because you’re in one.” This is my first talk that started as a silly joke, but it covers a serious and timely topic.

Two-panel image macro using scenes from "The Pirates of the Caribbean." In the top panel, a pirate looks into the camera and says "You'd better start believing in supply chains." In the bottom panel, the lighting is bluer and the pirate's face is decomposed as if he's been dead for a while. He says "you're in one."

Abstract: “I’m not a supplier!” open source maintainers correctly say. When a large company comes in making unfunded demands, it drives volunteer maintainers away. But supply chain attacks are a reality and they don’t just affect megacorps. As an open source maintainer, you have a supply chain, too.

Improving your security improves safety for everyone. But how can volunteer maintainers who aren’t security experts do this work? This talk introduces easy practices and tools to address common software supply chain concerns. Attendees will also learn how to address supply chain and regulatory concerns from their downstreams.

Date: 8 March 2026

Location: Pasadena, CA, US

Slides

Resources

Looking for links to the tools I mentioned in the talk, perhaps because you’re sitting in the audience right now?

The post You’d better start believing in supply chains because you’re in one (SCaLE 23x) appeared first on Duck Alignment Academy.

misc fedora bits first week of march 2026

Posted by Kevin Fenzi on 2026-03-07 17:27:53 UTC
Scrye into the crystal ball

Here we are in the first week of March 2026 already. This was a pretty quiet week for me, partly due to the Fedora 44 Beta freeze and partly I think due to people traveling/being away. In any case it was welcome to me to have a chance to work on some planned work instead of day to day or fighting fires.

GPU machine

This week I finally got our GPU machine all set up, which has been a very long road. Last year we thought it would be very handy to have a machine with desktop GPUs in it that we could use to test / build / explore things that could use those. We didn't want a server with fancy datacenter GPUs; we wanted things that Fedora users might have. This of course is tricky, since it entails putting a desktop-like machine in a datacenter.

After some looking around, we found the Dell Precision 7960 Rack, which is a rackmount machine, but sort of a desktop too.

We got a loaner to test things out with, and finally decided to buy it and use it. There have been so many little delays with this thing (a wrong network card that needed replacing, scheduling time for the people involved to set up the testing, a DRAC license that was wrong and couldn't be installed, and more).

But finally this week it's up. We will see how useful it becomes and what new exciting things it opens up.

Fedora 44 Beta is GO for release next Tuesday

We had our Fedora 44 Beta go/no-go meeting on Thursday, and amazingly we were go for release on Tuesday. The second beta candidate had no accepted blockers. I'm always a bit surprised when things go so smoothly, but I will take it!

Secure boot signing

I also made some more progress on my secure boot signing setup, but then I hit a blocker. I was able to sign grub and the kernel for aarch64, but it doesn't actually boot. (I have my Lenovo Slim 7x and also another aarch64 box that supports secure boot to test with.) Hopefully we can get to the bottom of that soon so we can switch things on. I really hope we can have it running before the Fedora 44 final freeze.

This also has been a long road.

Just missed it

I'm getting a solar system with batteries and home backup installed late this month, and I'm really looking forward to it.

Unfortunately, my electric co-op just informed me that there is going to be a 4-hour power outage on Monday for maintenance work. If it had only been next month, I could have just ignored it. Oh well, one more time for the generator! :)

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/116189196834124254

Ideal Laptop

Posted by Avi Alkalay on 2026-03-07 08:15:44 UTC

Do you remember the most important characteristics you should look for in a good laptop? In the following order:

1⃣ A high-resolution, high-density display: 3K or 4K, far beyond HD or Full HD
2⃣ A battery that lasts all day
3⃣ Fast storage (SSD)
4⃣ Light, thin, and elegant

⛔ It’s not the CPU.
⛔ It’s not AI.
⛔ It’s not having huge storage capacity.
⛔ It’s not a large physical size.
⛔ It’s not having more than 8 GB of memory (memory is not storage).
⛔ It’s not having a stylus, a tablet-convertible mode, or a detachable or articulated keyboard.
⛔ And it’s definitely not having a numeric keypad on the side.

Until last week, the best and most affordable laptop on the market with these characteristics was the $1100 MacBook Air. But now Apple has launched the MacBook Neo, which delivers all these qualities — display, battery, storage, lightness, and elegance for 45% less: $600.

❝A laptop for me is just for browsing the internet, email, editing documents, messaging, watching movies, and relaxing with games like Solitaire or Roblox.❞

Congratulations, you’re like 99.9% of humanity. The MacBook Neo delivers the best value for you.

In the Windows laptop universe, these truly important characteristics (display quality, lightness, etc.) are usually found only in the most expensive product lines. To justify the high price, their marketing shifts the focus to things that are largely irrelevant: unnecessarily powerful CPUs, unnecessarily large storage, unnecessarily large memory, tablet modes, styluses, and so on.

All unnecessary for 99.9% of humanity.

And even in those expensive lines, the battery rarely lasts more than two hours, let alone all day. The reason: inefficient CPU.

Don’t be misled when choosing your next laptop. Pay attention to the characteristics that really matter: a high-resolution display, battery life, fast storage, lightness, and elegance. A general rule is to avoid laptops that use Intel CPUs.

This also applies to the laptops that companies give to their employees.

Also in my Instagram, Facebook and LinkedIn.

Laptop ideal

Posted by Avi Alkalay on 2026-03-07 07:56:23 UTC

Do you remember the most important characteristics you should look for in a good laptop? 1⃣ a high-resolution, high-density display, 3K or 4K, well beyond HD or Full HD; 2⃣ a battery that lasts all day; 3⃣ fast storage (SSD); 4⃣ light, thin, and elegant.

⛔ It’s not the CPU. ⛔ It’s not AI. ⛔ It’s not large storage. ⛔ It’s not a large physical size. ⛔ It’s not having more than 8GB of memory (memory is not storage). ⛔ It’s not having a stylus, converting into a tablet, or a removable or articulated keyboard. ⛔ And it’s definitely not having a numeric keypad on the side.

Until last week, the best and cheapest laptop on the market with these characteristics was the $1100 MacBook Air. But now Apple has launched the MacBook Neo, which delivers all these characteristics of display, battery, storage, lightness, and elegance, 45% cheaper: $600.

❝A laptop for me is just for browsing the internet, email, editing documents, sending messages, watching movies, and relaxing with Solitaire and Roblox❞. Congratulations, you are like 99.9% of humanity. The MacBook Neo delivers the best value for you.

In the Windows laptop universe, these important characteristics (display, lightness, etc.) are found only in the manufacturers' most expensive lines. And to justify the high price, their marketing tries to shift the focus to completely irrelevant things, such as an unnecessarily powerful CPU, unnecessarily large storage, unnecessarily large memory, tablet modes, styluses, and so on. Unnecessary for 99.9% of humanity. And even so, not even in those expensive lines does the battery last more than 2 hours (reason: inefficient CPU), let alone all day.

Don't be fooled when choosing your next laptop. Pay attention to the characteristics that really matter: display, battery, fast storage, lightness, and elegance. The general rule is to avoid laptops with Intel CPUs.

This also applies to the laptops that companies give their employees.

Also in my Instagram, Facebook and LinkedIn.

Final Reminder: Flock 2026 Financial Assistance Applications Close Sunday, March 8th

Posted by Fedora Community Blog on 2026-03-06 10:15:00 UTC

Do you want to join us for our annual contributor conference? We want to see you there! However, we know that traveling to a global event is a big trip. It costs real money. To help out, the Flock Organizing Team offers Flock 2026 financial assistance. We want to make sure money does not stop our active contributors from attending.

This is your final reminder. You must submit your form by Sunday, March 8th, 2026. The organizing team starts looking at the data on Monday morning. Because of this fast timeline, we cannot accept any late forms. Sunday is a hard stop.

What does this funding actually cover? We can help you pay for your travel. This includes your airfare or train tickets. We can also help cover your hotel room at the main event venue. We have a limited budget. Because of this, we cannot fully fund every person who applies. Your peers on the organizing team review all the forms. They look at your community impact to make these tough choices.

Note for Flock 2026 speakers

Are you giving a talk this year? We are excited to hear from you! But please remember one important rule. Being an accepted speaker does not give you guaranteed funding. You still need to ask for help. All speakers must fill out the Flock 2026 financial assistance form if they need travel support.

Apply for Flock 2026 financial assistance

Applying is easy. Just follow these steps:

  1. Open the Google Form application.
  2. Fill in your travel details and your estimated costs.
  3. Explain your recent work and your impact in the Fedora community.
  4. Submit the form before the end of the day on Sunday, March 8th.

We want to bring as many builders and contributors together as possible. Please do not wait until the last minute. If you need support to join us, fill out the application today!

The post Final Reminder: Flock 2026 Financial Assistance Applications Close Sunday, March 8th appeared first on Fedora Community Blog.

Community Update – Week 10 2026

Posted by Fedora Community Blog on 2026-03-06 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora Project.

Week: 02 – 06 March 2026

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • Migrated the remaining pagure.io repositories; one is still left, but it needs private issues implemented first
  • Provide DNS domain name for konflux (ticket)
  • Meeting with the IPA-tuura team to move forward with the Ipsilon replacement project

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

  • Still in Beta Freeze, we have had a couple of release candidate composes for Fedora 44 Beta.
  • GO/NO-GO call for the Beta Release is tentatively scheduled for Thursday, March 5th.

RISC-V

This is the summary of the work done regarding the RISC-V architecture in Fedora.

  • Continue to comb through the list of Fedora packages that need work —  submit changes to Fedora / upstream, review patches from others, and submit builds as needed.
  • Build/test the RISC-V unified kernel.
  • Figured out logistics for 24-hour remote access to SpacemiT ‘K3’ hardware (based on RVA23).
  • Discussed migrating the Fedora RISC-V docs from the wiki to the forge with the RISC-V SIG and the Fedora docs team.
  • Figure out the prerequisites for F44 builds — toolchain dependencies are being sorted out.

AI

This is the summary of the work done regarding AI in Fedora.

QE

This team is taking care of quality of Fedora. Maintaining CI, organizing test days
and keeping an eye on overall quality of Fedora releases.

  • Fedora 44 RCs are now available and under heavy testing. The Go/No-Go is scheduled in a few days. Lots of blocker (and non-blocker) bugs were discovered, discussed, voted on, and resolved.
  • Kernel 6.19 test days are complete and Podman 5.8 test days are under way.
  • A new version of BlockerBugs bot was implemented and deployed to staging, which should allow us to migrate blocker voting repository from Pagure to Forge (ticket).
  • Our cloud automation now reports results to the production wiki for each new compose.
  • The refactoring of Fedora Easy Karma was merged to the main branch. All links were also updated to reflect the recent transition to Forge, and the migration is now done for this repo.
  • Forge QA repos now have a reusable workflow for an AI review.
  • More bugs hitting OpenQA automated testing were resolved; some particular ones are very hard to debug, e.g. an ibus hang or an xdg-desktop-portal crash.

Forgejo

This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.

  • [Forgejo] New Organization and Teams Request: Fedora KDE [Followup]
  • [Forgejo] Participated in the Forge review, planning and retrospective meet with Rodney, Nils, Ryan and David [Board]
  • [Forgejo] New organisation for Fedora Code of Conduct committee + migration [Followup] [Namespace] [Followup]
  • [Forgejo] New repo for Security SIG [Followup] [Namespace] [Followup] [Followup]
  • [Forgejo] New Organization and Teams Request: Fedora Btrfs [Followup] [Namespace]
  • [Docs] Will be migrating Release Notes imminently + branching for F44, contributions welcome once the repo appears on Forgejo
  • Forgejo runners added for orgs mindshare and apps
  • Private Issues
    • Unit tests for code ⇔ DB boundary (Issue & PrivateIssue: done, related types: ongoing)
    • Verify, debug and fix DB migration

UX

This team is working on improving User experience. Providing artwork, user experience,
usability, and general design services to the Fedora project

  • Updated all references to the Fedora Design ticket tracker to point to Forgejo [ticket]; migration of tickets is still in progress, but significant progress has been made.

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update – Week 10 2026 appeared first on Fedora Community Blog.

How a Single Regex Stalled 30 Testing Farm Jobs for Hours

Posted by Miroslav Vadkerti on 2026-03-06 00:00:00 UTC
Tracing stuck Testing Farm jobs from process list through Python stack traces to a catastrophic regex backtracking bug in tmt — caused by a 1M-character line in test output.
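The failure mode in the summary above, catastrophic regex backtracking, can be reproduced with a classic toy pattern. This is an illustration of the general phenomenon, not tmt's actual regex:

```python
# Demonstrates catastrophic backtracking: nested quantifiers like (a+)+
# force the engine to try exponentially many ways to partition the run
# of 'a's before concluding there is no match. A 1M-character line makes
# this fatal; even a couple dozen characters shows the blow-up.
import re
import time

evil = re.compile(r"^(a+)+$")
safe = re.compile(r"^a+$")  # matches the same strings, no nested quantifiers

text = "a" * 22 + "b"  # "almost matches", which is the worst case

t0 = time.perf_counter()
assert evil.match(text) is None  # exponential-time failure
t_evil = time.perf_counter() - t0

t0 = time.perf_counter()
assert safe.match(text) is None  # linear-time failure
t_safe = time.perf_counter() - t0

print(f"evil: {t_evil:.4f}s, safe: {t_safe:.6f}s")
```

Doubling the number of 'a's roughly doubles the exponent, so a line a million characters long never finishes in practice; the fix is to rewrite the pattern without nested quantifiers (or bound what a single line can contain).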

Friday Links 26-08

Posted by Christof Damian on 2026-03-05 23:00:00 UTC

A bit quieter this week. The podcast about the golden ages of computing is interesting, the GitHub issue exploit is also worth a read.

Leadership

The Complicators, The Drama Aggregators, and The Avoiders - I definitely recognise some of these.

Engineering

Socialize the Plan - Does reviewing plans really make sense? They are basically throwaway.

A GitHub Issue Title Compromised 4,000 Developer Machines - oops.

Fedora Project Community Corner @ DevConf.IN 2026 - Day Two

Posted by Akashdeep Dhar on 2026-03-05 18:30:06 UTC
Fedora Project Community Corner @ DevConf.IN 2026 - Day Two

Returning early from the dinner on the previous day was the right call to make, given how packed we were expecting the second day of DevConf.IN 2026 to be. Since this was going to be on 14th February 2026, i.e., a Saturday (and, of course, Valentine's Day), we would most likely have a greater footfall of folks around the conference professing their eternal love for free and open source software that day. I found myself waking up before the scheduled alarm, and as I had some time, I started working on manually awarding the attendees the event badge. For an over-occupied booth like ours, it was especially important since many folks could not get the chance to scan the QR code, and the greater attention was, of course, taken up by the DevConf.IN 2026's visitor choice, Fedora Project Community Trivia.

Learning from the previous day's mistakes, I also decided to give myself some time for breakfast before getting an Uber ride. Somehow, I was able to get one without having to go through more than a couple of cancellations at around 0900am Indian Standard Time. After picking up the swagpacks, Shounak Dey and I set the location directly to the Vyas Building of MIT World Peace University - thus removing the hauling efforts from the day before. I also checked with the likes of Yashwanth Rathakrishnan and Samyak Jain in the meantime, both of whom had their hotel bookings a lot closer to the conference venue. Yashwanth had to skip breakfast since an Uber ride was arranged for him before he could get some food, while Samyak took his time to get ready and head over to the conference venue.

Funnily enough, it was Matthew Miller, this time around, who had arrived at the venue by around 1000am Indian Standard Time and was lounging at the Fedora Project Community Corner. Shounak and I chuckled at the photograph shared by Yashwanth while we were on the way, since our punctuality from the day before might have most likely rubbed off on him. After a smooth ride, we soon arrived at the venue entrance at around 1030am Indian Standard Time. Given that we had the lay of the land and our posters were already placed properly, we had more time to conveniently set up our swagpacks and exhibits at our booth. I checked in with Matthew about how his preparations for his talk were going, while Shounak and Yashwanth started arranging the swagpacks and exhibits on the booth desk.

Samyak decided to add a couple more tricky questions to the question list of the Fedora Project Community Trivia. That was a brilliant idea since it allowed visitors from the previous day to go out of their way to explore more about the Fedora Project and its dealings. As we were quite economical with handing out swagpacks the day before, and none of us was in the mood for taking them back home, we decided to be rather generous that day. I devised a plan for handing out our limited-edition Fedora Project magnet-and-clip combo to the attendees entering Matthew's talk to appreciate their presence there, while Matthew contemplated whether he should extend his talk to include 35 Fedora Linux releases instead of the previously planned 30 of them, since he had to manage the talk duration.

As his talk was scheduled for 0115pm Indian Standard Time, we had more than enough time before then to attend to the visitors. We also decided to temporarily shut down the Fedora Project Community Corner for those 45 minutes to allow a larger audience to attend Matthew's presentation instead. Unfortunately, just like the day before, visitors continued to confuse our booth with the DevConf.IN registration desk. We had a frustrated Samyak craft a handwritten note at our booth to guide folks to the actual DevConf.IN registration desk. His frustrations were well placed, as even though we loved interacting with visitors, we would rather spend time and energy talking about the Fedora Project and Red Hat than being punished for the booth's attendance by becoming glorified compasses.

A quick catch-up with Sudhir Dharanendraiah allowed me to remind him of the community dinner that we were planning to have later that day at Ishaara, Viman Nagar, and also share some Fedora Project stickers with him for them to be shared by the Red Hat India Communities booth personnel. That not only helped out the staff there, but it also allowed the Fedora Project to be cross-promoted across multiple co-located booths. Following this, I headed over next to the DevConf.IN registration desk to avail myself of a couple of DevConf.IN-themed magnets and clips. To return the goodwill, I also shared some Fedora Project magnet-and-clip combos with them to be shared by the visiting attendees, a gesture that they not only appreciated but that also allowed us to further pursue cross-promotion.

At around 1200pm Indian Standard Time, Shounak and Yashwanth took off to explore the booths around DevConf.IN 2026, leaving Samyak and me to tend to the booth visitors. While the footfall did drop by the time they left, it slowly started picking up pace about fifteen minutes later, with questions around .NET compatibility in Fedora Linux and security developments in the Fedora Project. There was another wave of quiz participants who checked in with us about the Fedora Project Community Trivia answer keys and verified where they got things wrong. It was rather refreshing to see that while there were a few visitors who were argumentative about their defeat, a greater number embraced getting to know the things that they did not previously know about the Fedora Project and its dealings.

One of the learnings we took away from organising an interactive event that made the best use of visitors being there in person was to order more exclusive swagpacks in the future. With Samyak dropping another handcrafted note to inform visitors about the temporary shutdown and me wrapping things up at the booth desk, we were pleasantly surprised by Sayak Sarkar, who managed to make it to the venue early that day. Samyak set up a countdown timer on his iPad placed on the booth desk to expire at around 0100pm Indian Standard Time to let folks know the same. After a quick catch-up with both Sayak and Deepesh Nair, we had the returning Shounak and Yashwanth haul the swagpacks and exhibits to the first floor's VY0104 room, where Matthew's talk was scheduled to be organised.

As Matthew was getting prepared for the talk, Shounak made his way to the elevated back of the room with his fancy Canon DSLR camera. Yashwanth and I worked on arranging the swagpacks on one of the front desks so that attendees could help themselves. The room was soon filled to the brim, and the talk went well, albeit with some minor considerations, with Matthew having to speed up the delivery toward the end to accommodate five more releases. Matthew also ensured that there was some time left at the end for questions and feedback; a bunch of them were related to artificial intelligence, his experience, changing times, and newer responsibilities. With the talk getting wrapped up at around 0200pm Indian Standard Time, the Fedora Project Indian Crew made it back to the ground floor.

We had to wash our plans of having lunch at around that time down the drain since the booth activities were planned to wrap up earlier that day. That would have essentially meant that we had to be around to declare the winners of the second day of the Fedora Project Community Trivia at around 0230pm Indian Standard Time. Samyak made himself busy populating the scores on the raffle service while Shounak unsuccessfully tried sharing the progress on the Fedora Badges Revamp Project with Sayak. The poor cellular data reception, along with the wireless downtimes, made it extremely difficult for us to connect to the outside world. Yashwanth and I instead decided to field other conversations around the use of Fedora Linux in the robotics field and other adjacent infrastructure uses.

Matthew joined us soon, asking for feedback, and I shared that it would have been best to cover the earlier releases quickly while spending more time on the recent ones, because it was his personal experience as the Fedora Project Leader that people were there for. Since we coincidentally had him around then, it just made sense for him to felicitate the lucky winners with the exclusive Fedora Project-themed sippers. With Kashyap Ekbote helping us with the pictures after we were through with the awarding ceremony, Rajan Shah appreciated us for putting up a great show of community engagement at the conference. At around 0330pm Indian Standard Time, we started wrapping up our booth operations, with many enthusiastic visitors returning to express just how much they liked us there.

Samyak and I went around reminding Matthew and Sayak about the dinner that had been booked previously that morning, before seeing them off. For the late lunch, though, Shounak and Samyak discovered the La Sicilia Bistro and Patisserie restaurant, which was barely under five kilometres away from the event venue. On our way out, we swiftly got ourselves some photographs at the DevConf.IN 2026 event entrance before we booked an Uber ride to the meal place. While the likes of Sudhir and Kashyap could not join us for lunch, Avadhoot Dhere gracefully joined our little party, and we did end up having quite the delicacies while sharing the best moments of the day. We saw each other off at around 0500pm Indian Standard Time after a rather late lunch and returned home on a quiet Uber ride.

After a brief respite, one hell of a confusing disaster struck when I checked with Sudhir to restate the location of the restaurant. It turned out that Samyak had prebooked the wrong branch of the Ishaara restaurant, the one about thirty kilometres away on the outskirts of the city, i.e., in Wakad, and not the one within approximately ten kilometres from all ends of the city, i.e., in Viman Nagar. I checked in with Samyak, but after not hearing back from him, I decided to create a new booking under my name, despite the restaurant running at near-full capacity on the Valentine's Day evening. Thankfully, I was able to get one, and after hearing from Matthew and Karen Miller that they had gone to the wrong location, I asked them to go on without us, because it would have taken at least a couple of hours just to get there.

Even Samyak had to tap out since he was feeling under the weather after all the work from the day, so I checked in with the likes of Shounak, Yashwanth, and Sudhir to reconfirm the correct location at around 0730pm Indian Standard Time. It was quite the mess-up, but I was not going to let the Fedora Project's community presence at DevConf.IN 2026 end on a sour note. The four of us ordered light bites with hand gestures, as the restaurant employed specially-abled people, and it was an enlightening experience to witness the world from their perspective. After some interesting conversations and catch-ups unpacking the event, Sudhir and Yashwanth left first, before Shounak and I found ourselves an Uber ride, finally breathing a sigh of relief after having organised all this over the last couple of long days.

Counting Synology Photos uploads with synofoto-media-count

Posted by Brian (bex) Exelbierd on 2026-03-04 21:10:00 UTC

I’m currently testing Synology Photos, including the iPhone uploader. I wanted to know how far the upload had actually gotten.

The problem is that none of the obvious UIs answer that.

  • The Synology Photos web UI doesn’t show a total count.
  • The phone UI shows my whole camera roll (uploaded or not), and also doesn’t give a useful count.

So I wrote a small tool: synofoto-media-count.

The mismatch

If you’re backing up an iPhone library, you can end up with three numbers that don’t agree:

  • The number of photos on your phone
  • The number of files on disk
  • The number of “things” Synology Photos has indexed (which the UI doesn’t show)

That last one is the number I cared about. I’m fine with the file system being messy - I just want to know whether the app has ingested what I think it has.

Why counting files doesn’t answer this

The file system is easy to count, but it’s not what I’m trying to measure. With Live Photos, the file count is expected to be “weird” because a single photo experience can be multiple files.

What I actually want is a number that matches the app’s idea of “items,” because that’s what I’m mentally comparing to the photo count on my phone. That’s the gap this script closes.

What the script does

The repo contains a read-only bash script (count-media.sh) that runs SELECT queries against Synology Photos’ PostgreSQL database (by default, the synofoto database). It has options for multiple users and folders, JSON output for automation, and an optional publish-to-ha.py helper that publishes counts into Home Assistant via MQTT auto-discovery. It collapses Live Photo pairs into a single “item” so the results are closer to what you see on the phone.
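The Live Photo collapsing is the conceptually interesting part. The real script does this with SQL against Synology Photos' metadata; as a rough illustration only, here is the same grouping idea sketched in Python, using a hypothetical filename-stem heuristic (a photo and a video sharing a stem are treated as one Live Photo item) rather than the database's own grouping:

```python
from collections import defaultdict
from pathlib import PurePosixPath

PHOTO_EXT = {".heic", ".jpg", ".jpeg", ".png"}
VIDEO_EXT = {".mov", ".mp4"}

def count_items(paths):
    """Collapse Live Photo pairs into single items.

    Illustrative heuristic only: files sharing a stem
    (e.g. IMG_0042.HEIC + IMG_0042.MOV) count as one item.
    """
    groups = defaultdict(set)
    for p in map(PurePosixPath, paths):
        groups[p.with_suffix("").as_posix()].add(p.suffix.lower())

    counts = {"photos": 0, "live": 0, "videos": 0, "other": 0}
    for exts in groups.values():
        has_photo = bool(exts & PHOTO_EXT)
        has_video = bool(exts & VIDEO_EXT)
        if has_photo and has_video:
            counts["live"] += 1      # collapsed pair -> one item
        elif has_photo:
            counts["photos"] += 1
        elif has_video:
            counts["videos"] += 1
        else:
            counts["other"] += 1
    return counts
```

The point is that four files on disk can legitimately correspond to only three "items" in the app, which is why a plain file count misleads.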

Requirements

To run it, you need:

  • Synology DSM 7.x with Synology Photos installed
  • SSH access to the NAS
  • sudo privileges (or direct postgres user access)
  • Python 3 if you want to use publish-to-ha.py

Safety

This script is read-only. It runs SELECT queries only and never modifies your data.

Usage (quick)

In the common case, you copy the script to your NAS, make it executable, and run it with sudo. It will try to do something sensible for iPhone uploads (like auto-selecting /MobileBackup if it exists) and will scope to the current user by default.

If the defaults don’t match your setup, there are flags for selecting a folder interactively, scoping to a different user, and emitting --json output for automation.
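If you are scripting around the tool, the `--json` flag is the natural hook. The post does not document the JSON schema, so the field names below are hypothetical; check the real output of `count-media.sh --json` on your NAS before relying on them:

```python
import json
import subprocess

def fetch_counts():
    """Run the counter with --json and parse its output.

    Assumes the script is in the current directory on the NAS and
    that sudo is available non-interactively for this session.
    """
    out = subprocess.run(
        ["sudo", "./count-media.sh", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def total_items(counts):
    # Sum whichever integer per-type fields are present,
    # ignoring string metadata such as a user name.
    return sum(v for v in counts.values() if isinstance(v, int))
```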

Home Assistant integration

If you want the counts to show up somewhere other than your terminal, publish-to-ha.py can publish per-user counts into Home Assistant via MQTT auto-discovery. The result is a handful of sensors per user (non-live photos, Live Photos, videos, other, and a total) that you can graph or use in automations.
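For a sense of what that publishing step involves: Home Assistant's MQTT auto-discovery works by sending a retained JSON config to a topic under `homeassistant/sensor/.../config`, after which the state topic carries the actual value. The sketch below builds such message pairs without connecting to a broker; the sensor names and topic layout are illustrative, not taken from publish-to-ha.py:

```python
import json

def discovery_messages(user, counts):
    """Build (topic, payload) pairs for HA MQTT auto-discovery.

    One config message plus one state message per count type.
    Illustrative sketch; publish-to-ha.py may use different names.
    """
    msgs = []
    for kind, value in counts.items():
        object_id = f"synofoto_{user}_{kind}"
        config = {
            "name": f"Synology Photos {user} {kind}",
            "state_topic": f"synofoto/{user}/{kind}",
            "unique_id": object_id,
            "state_class": "measurement",
        }
        msgs.append((f"homeassistant/sensor/{object_id}/config",
                     json.dumps(config)))
        msgs.append((f"synofoto/{user}/{kind}", str(value)))
    return msgs
```

Feeding these pairs to any MQTT client (publishing the config messages retained) is enough for the sensors to appear in Home Assistant.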

Notes

  • Counts include nested subfolders by default. If you want a single folder only, there’s an “exact folder” option.
  • --verbose shows additional technical detail (raw unit counts, type breakdowns).
  • --inspect helps when something looks weird - like “incomplete Live Photo groups” where one half is missing.
  • For iPhone MobileBackup libraries, the defaults for photo/video types should work, but they’re overridable if your installation differs.

It prints a breakdown like:

  • non-live photos
  • Live Photos (collapsed groups, plus their underlying files)
  • standalone videos
  • “other” items
  • incomplete Live Photo groups (one half is missing)

That breakdown is enough to sanity-check whether uploads are incomplete or likely complete. It provides a validation point to go with the separate question of "why isn't this thing uploading now?"

Now it’s time to set my phone to sleep focus and leave the uploader running overnight … for a long time.

New toy: Installing Fedora Linux on the HP Z2 Mini

Posted by Peter Czanik on 2026-03-04 11:56:13 UTC

The data sheet of my new AI-focused mini workstation from HP does not mention Fedora, but I could install it just fine. I expected this though, because when I asked around about Linux support and hardware AI acceleration for AMD Ryzen 39X chips, all responses came from Fedora users… :-)

Installing Fedora on the HP Z2 Mini was a smooth experience, even though I hadn't used the graphical installer for ages: I installed the Fedora server variant during Covid, and I've been upgrading it ever since. Still, using the graphical installer was easy, so Fedora was up and running in no time.

Rebooting Fedora is not always fun, though. This box has two SSDs in it. In most cases, booting is OK, but sometimes the numbering of SSDs seems to be reversed. When this happens, booting gets stuck in an infinite loop, but a simple reboot solves the problem.

I guess I’m getting older, but I appreciate that GNOME looks exactly the same as on any other Linux distro, except Ubuntu. Everything in GNOME works from muscle memory, just as in most applications. Of course, under the hood, Linux distros are different: they have different package managers, repositories, backgrounds and application defaults. However, for a simple user, there is no need to learn the desktop from scratch, just because their friend installed another Linux distro for them…

Also, while I’m not a gamer, when I saw during installation that Steam was available, I gave it a try as well. It worked flawlessly. I do not follow the current Windows situation, but when I installed Need for Speed a few years ago, I had to go through many steps and install the game twice due to a failed attempt to make it work. Today, installing and starting NFS was a simple next-next-finish experience, so I could start the latest reincarnation of my favorite childhood game without any problems.

Steam on Fedora :-)

Need for Speed

It took a few steps to configure accelerated AI on the machine: I installed a few extra packages from Copr, and after I worked around a couple of minor problems, the tools detected the NPU:

root@fedora:~# /usr/xrt/bin/xrt-smi examine
System Configuration
  OS Name              : Linux
  Release              : 6.18.13-200.fc43.x86_64
  Machine              : x86_64
  CPU Cores            : 32
  Memory               : 96311 MB
  Distribution         : Fedora Linux 43 (Workstation Edition)
  GLIBC                : 2.42
  Model                : HP Z2 Mini G1a Workstation Desktop PC
  BIOS Vendor          : HP
  BIOS Version         : X53 Ver. 01.05.02

XRT
  Version              : 2.19.0
  Branch               :
  Hash                 :
  Hash Date            : 2025-04-25 00:00:00
  virtio-pci           : unknown, unknown
  amdxdna              : unknown, unknown
  NPU Firmware Version : 1.0.0.166

Device(s) Present
|BDF             |Name          |
|----------------|--------------|
|[0000:c6:00.1]  |RyzenAI-npu5  |

More in-depth AI testing will follow later, once I have also installed FreeBSD on the box.

This blog is part of a longer series about my adventures with my new HP Z2 Mini and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

Logsqueak: Rescuing Insights from the Logseq Journal

Posted by Tim Waugh on 2026-03-04 11:30:43 UTC

I’ve been using Logseq for a year now and it’s become the backbone of my workflow. I have pages dedicated to specific topics, concepts, projects, meetings… all sorts of things.

During my day, when I want to note something down or write something out to think about it, the daily Logseq journal is the obvious place for it to go. It has been an invaluable habit to build. But there’s a catch: the journal can easily become a black hole. It ends up as a chaotic mix of meeting notes, fleeting thoughts, random ideas, task lists and the occasional moment of genuine insight.

Most of the time, I try to link journal items to the relevant pages. Sometimes I remember to update those pages in light of new information. But other times I forget, and those insights get buried in the timeline, only resurfacing if I explicitly search for them.

All of those things belong in the journal, but some of them also belong in permanent pages. I wanted a way to filter the signal from the noise and capture things that I can integrate into my pages, in a way that makes them traceable back to the journal, without leaving the keyboard.

Enter Logsqueak: a proof-of-concept experiment to see if a local AI model can act as an automated gardener for a Personal Knowledge Management (PKM) system.

How Logsqueak Works

It’s a Python-based terminal UI built with Textual, using RAG (Retrieval-Augmented Generation) via Ollama. Because PKM data is highly personal, my aim was to build a tool that can run entirely on a local GPU, meaning your private journal entries never have to leave your machine. (Though you can certainly connect it to much larger cloud models if you prefer.)
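The retrieval half of RAG is conceptually simple: embed the extracted insight and every candidate page, then rank pages by vector similarity. As a minimal sketch (the embedding vectors here are supplied by the caller; in a real setup they would come from a local embedding model served by Ollama):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_pages(insight_vec, page_vecs, k=3):
    """Return the names of the k pages most similar to the insight.

    page_vecs maps page name -> embedding vector. Sketch only;
    Logsqueak's actual ranking may differ.
    """
    ranked = sorted(page_vecs.items(),
                    key=lambda kv: cosine(insight_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

The generation half then only has to choose an insertion point within the few pages this step surfaces, which is what keeps the approach tractable on a local model.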

The workflow is broken down into 3 phases:

1: Extraction (Signal vs. Noise)

In this phase, Logsqueak reads your Logseq journal and helps identify which items are ephemeral daily noise (e.g., “Morning standup at 9am”) and which are actual knowledge or insight worth keeping.

2: Refinement (Making it Evergreen)

Temporal context is stripped away, and additional context from parent bullet points is added in.

  • Original Journal Entry:
    • Working on the new analytics dashboard
      • Finally figured out why the main chart was double-fetching data on load. The useEffect hook was missing the empty dependency array.
  • Logsqueak Refinement:
    • To prevent double-fetching data on load in the analytics dashboard, ensure the useEffect hook for the main chart includes an empty dependency array.

3: Integration (Filing it Away)

In this final phase, the most semantically relevant pages in your Logseq graph are tracked down, and the best insertion point is identified. Logsqueak will suggest exactly which page and heading the new insight belongs under.

Logseq is built around powerful block properties, so crucially this is where the traceability happens. When an insight is integrated, Logsqueak adds an extracted-to:: property to the original journal block, linking it directly to the new block. The new block on the target page gets an id:: property linking back. This means you can always jump from your polished knowledge base straight back to the original journal entry to see the full context of what you were doing that day.
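As a rough illustration of what that linking looks like in Logseq's Markdown (the UUID and exact property layout here are made up for the example, not taken from Logsqueak's output), the original journal block might end up as:

```markdown
- Finally figured out why the main chart was double-fetching data on load.
  extracted-to:: ((665f1a2b-1111-2222-3333-444455556666))
```

while the new block on the target page carries the matching identifier:

```markdown
- To prevent double-fetching data on load, ensure the useEffect hook for the
  main chart includes an empty dependency array.
  id:: 665f1a2b-1111-2222-3333-444455556666
```

Since `((uuid))` is Logseq's standard block-reference syntax, the journal entry renders as a clickable link to the polished block, and the `id::` property makes the target addressable from anywhere in the graph.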

All writes are performed using a custom engine specifically built for Logseq’s Markdown format, ensuring your notes stay safe. Because this is a proof-of-concept, all writes are guarded by explicit user approval—Logsqueak won’t change your files without you saying “yes.”

Try it out!

Logsqueak requires Python 3.11+ and an AI assistant. You can use Ollama to run everything locally.

If you’re on Fedora, getting the prerequisites running is incredibly straightforward. Since Fedora Workstation ships with recent Python versions out of the box, you’re already halfway there. You just need to grab Ollama to run the models locally, set up a virtual environment, and you’re good to go:

# Assuming you've installed Ollama
git clone https://github.com/twaugh/logsqueak.git
cd logsqueak
./setup-dev.sh
source venv/bin/activate
logsqueak init

Taming the Knowledge Graph

This tool can help you turn a pile of daily logs into a structured, searchable knowledge base. Although it can’t yet create new pages from scratch or be given custom instructions about how best to integrate things into the graph, it’s already useful enough for me to use in my daily routine.

It’s very much a proof-of-concept though, and I’d love to get some feedback from other developers and knowledge management enthusiasts. You can check out the code on GitHub.

Building Logsqueak made me realise just how much time I spend thinking about note-taking friction. While Logsqueak handles my fast, keyboard-driven daily logging, I actually do a lot of my deep thinking away from the screen on a Ratta Supernote e-ink tablet.

I recently found myself trying to solve a similar “black hole” problem over there. The result is Slipstream: a Zettelkasten framework to let you build infinitely nested idea networks by hand.

If you happen to be an e-ink user who prefers a stylus to a keyboard when you need to disconnect and focus, you might find it an interesting contrast. As a bonus, because Slipstream has a structured convention, exporting those handwritten notes to plain text makes them perfectly readable for the exact kind of LLM processing Logsqueak relies on. It’s analogue thinking, ready for the AI age.

The post Logsqueak: Rescuing Insights from the Logseq Journal appeared first on PRINT HEAD.