Fedora People

Episode 417 – Linux Kernel security with Greg K-H

Posted by Josh Bressers on February 26, 2024 12:10 AM

Josh and Kurt talk to GregKH about Linux Kernel security. We mostly focus on vulnerabilities in the Linux Kernel, and what becoming a CNA will mean for the future of Linux Kernel security vulnerabilities. The future of Linux Kernel security vulnerabilities is going to be very interesting.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3325-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_417_Linux_Kernel_security_with_Greg_K-H.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_417_Linux_Kernel_security_with_Greg_K-H.mp3</audio>

Show Notes

django-ca, HSM and PoC

Posted by Kushal Das on February 25, 2024 09:25 AM

django-ca is a feature-rich certificate authority written in Python, using the Django framework. The project has existed for a long time and has great documentation and code comments throughout. As I was looking around for possible CAs which could be used in multiple projects at work, django-ca seemed like a good base fit, though it is still missing a few parts which are important for us, for example HSM support and Certificate Management over CMS.

I started looking into the codebase of django-ca more, and meanwhile also started cleaning up (along with Magnus Svensson) another library written at work for HSM support. I also started having conversations with Mathias (who is the author of django-ca) about this feature.

Thanks to the amazing design of the Python Cryptography team, I could just add several private key implementations in our library, each of which can be used as a normal private key.

I worked on a proof of concept branch (PoC), while getting a lot of tests also working.

===== 107 failed, 1654 passed, 32 skipped, 274 errors in 286.03s (0:04:46) =====

Meanwhile Mathias also started writing a separate feature branch where he is moving the key operations into backends, so that different backends can be implemented to deal with HSMs or normal file-based storage. He then chatted with me on Signal for over 2 hours explaining the code and design of the branch he is working on. He also taught me many other Django/typing things in the same call which I never knew before. His backend-based approach makes my original intention of adding HSM support very easy. But it also means he first has to modify the codebase (and the thousands of test cases).

I am also writing this blog post to remind folks that not every piece of code needs to go to production (or even be merged). I worked on a PoC that validated the idea, and then we ended up with a better and completely different design. It is perfectly okay to work hard on a PoC and later use a different approach.

As some friends asked on Mastodon, I will do a separate post about the cleanup of the other library.

Data Recovery with Open-Source Tools (part 1)

Posted by Steven Pritchard on February 24, 2024 06:25 PM

This is material from a class I taught a long time ago.  Some of it may still be useful.  🙂

The original copyright notice:

Copyright © 2009-2010 Steven Pritchard / K&S Pritchard Enterprises, Inc.

This work is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/us/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

This is part 1 of a multi-part series.

Identifying drives

An easy way to get a list of drives attached to a system is to run fdisk -l.  The output will look something like this:

# fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes

255 heads, 63 sectors/track, 9729 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk identifier: 0xcab10bee

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1        8673    69665841    7  HPFS/NTFS

/dev/sda2            8675        9729     8474287+   c  W95 FAT32 (LBA)

In many cases, you'll see a lot of (generally) uninteresting devices that are named /dev/dm-n.  These are devices created by device mapper for everything from software RAID to LVM logical volumes.  If you are primarily interested in the physical drives attached to a system, you can suppress the extra output of fdisk -l with a little bit of sed.  Try the following:

fdisk -l 2>&1 | sed '/\/dev\/dm-/,/^$/d' | uniq

Whole devices generally show up as /dev/sdx (/dev/sda, /dev/sdb, etc.) or /dev/hdx (/dev/hda, /dev/hdb, etc.).  Partitions on the individual devices show up as /dev/sdxn (/dev/sda1, /dev/sda2, etc.), or, in the case of longer device names, the name of the device with pn appended (an example might be /dev/mapper/loop0p1).



The vast majority of hard drives currently in use connect to a computer using either an IDE (or Parallel ATA) interface or a SATA (Serial ATA) interface.  For the most part, SATA is just IDE with a different connector, but when SATA came out, the old Linux IDE driver had accumulated enough cruft that a new SATA driver (libata) was developed to support SATA controller chipsets.  Later, the libata driver had support for most IDE controllers added, obsoleting the old IDE driver.

There are some differences in the two drivers, and often those differences directly impact data recovery.  One difference is device naming.  The old IDE driver named devices /dev/hdx, where x is determined by the position of the drive.

/dev/hda    Master device, primary controller

/dev/hdb    Slave device, primary controller

/dev/hdc    Master device, secondary controller

/dev/hdd    Slave device, secondary controller

And so on.

Unlike the IDE driver, the libata driver uses what was historically SCSI device naming, /dev/sdx, where x starts at "a" and increments upwards as devices are detected, which means that device names are more-or-less random, and won't be consistent across reboots.

The other major difference between the old IDE driver and the libata driver that affects data recovery is how the drivers handle DMA (direct memory access).  The ATA specification allows for various PIO (Programmed I/O) and DMA modes.  Both the old IDE driver and the libata driver will determine the best mode, in most cases choosing a DMA mode initially, and falling back to a PIO mode in error conditions.  The old IDE driver would also let you manually toggle DMA off and on for any device using the command hdparm.

hdparm -d /dev/hdx    Query DMA on/off state for /dev/hdx

hdparm -d0 /dev/hdx    Disable DMA on /dev/hdx

hdparm -d1 /dev/hdx    Enable DMA on /dev/hdx

The libata driver currently lacks the ability to toggle DMA on a running system, but it can be turned off for all hard drives with the kernel command line option libata.dma=6, or for all devices (including optical drives) with libata.dma=0.  On a running system, the value of libata.dma can be found in /sys/module/libata/parameters/dma.  (The full list of numeric values for this option can be found in http://www.kernel.org/doc/Documentation/kernel-parameters.txt.)  There does not appear to be a way to toggle DMA per device with the libata driver.
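Per the kernel-parameters.txt document linked above, libata.dma is a bitmask of device classes allowed to use DMA, which is where the two values mentioned come from. A quick sanity check of the arithmetic:

```shell
# libata.dma is a bitmask of device classes allowed to use DMA:
#   1 = hard disks, 2 = ATAPI devices (optical drives), 4 = CF cards
DISK=1; ATAPI=2; CF=4
echo $((ATAPI | CF))          # 6: DMA off for hard disks only
echo $((DISK | ATAPI | CF))   # 7: DMA allowed for every device class
```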

There are several reasons why you might want to toggle DMA on or off for a drive.  In some cases, failing drives simply won't work unless DMA is disabled, or even in some rare cases might not work unless DMA is enabled. In some cases the computer might have issues when reading from a failing drive with DMA enabled.  (The libata driver usually handles these situations fairly well.  The old IDE driver only began to handle these situations well in recent years.)

In addition to those reasons, PIO mode forces a drive to a maximum speed of 25MB/s (PIO Mode 6, others are even slower), while DMA modes can go up to 133MB/s.  Some drives appear to work better at these lower speeds.


While SCSI drives and controllers are less common than they once were, all current hard drive controller interfaces now use the kernel SCSI device layers for device management and such.  For example, all devices that use the SCSI layer will show up in /proc/scsi/scsi.

# cat /proc/scsi/scsi

Attached devices:

Host: scsi0 Channel: 00 Id: 00 Lun: 00

  Vendor: TSSTcorp Model: CD/DVDW TS-L632D Rev: AS05

  Type:   CD-ROM                           ANSI  SCSI revision: 05

Host: scsi1 Channel: 00 Id: 00 Lun: 00

  Vendor: ATA      Model: ST9160821A       Rev: 3.AL

  Type:   Direct-Access                    ANSI  SCSI revision: 05

Host: scsi3 Channel: 00 Id: 00 Lun: 00

  Vendor: ATA      Model: WDC WD10EACS-00Z Rev: 01.0

  Type:   Direct-Access                    ANSI  SCSI revision: 05

In most cases, it is safe to remove a device that isn't currently mounted, but to be absolutely sure it is safe, you can also explicitly tell the kernel to disable a device by writing to /proc/scsi/scsi.  For example, to remove the third device (the Western Digital drive in this example), you could do the following:

echo scsi remove-single-device 3 0 0 0 > /proc/scsi/scsi

Note that the four numbers correspond to the controller, channel, ID, and LUN in the example.

In cases where hot-added devices don't automatically show up, there is also a corresponding add-single-device command.
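The remove and add commands share the same four-number format, so they can be wrapped in a small helper. This is just a sketch: the function only echoes the command strings, since actually writing them to /proc/scsi/scsi requires root.

```shell
# Build the exact strings written to /proc/scsi/scsi; the real write
# would be (as root): echo scsi remove-single-device 3 0 0 0 > /proc/scsi/scsi
scsi_ctl() {
    action=$1; host=$2; channel=$3; id=$4; lun=$5
    echo "scsi ${action}-single-device $host $channel $id $lun"
}
scsi_ctl remove 3 0 0 0   # controller 3, channel 0, ID 0, LUN 0
scsi_ctl add 3 0 0 0      # re-attach the same device
```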

When recovering data from SCSI (and SCSI-like drives such as SAS), there are no special tricks like DMA.

USB, etc.

The Linux USB drivers are rather resilient in the face of errors, so no special consideration needs to be given when recovering data from thumb drives and other flash memory (except that these devices tend to work or not, and, of course, dead shorts across USB ports are a Bad Thing).  USB-to-ATA bridge devices are a different matter entirely though.  They tend to lock up hard or otherwise behave badly when they hit errors on a failing drive.  Generally speaking, they should be avoided for failing drives, but drives that are OK other than a trashed filesystem or partition table should be completely fine on a USB-to-ATA bridge device.

To be continued in part 2.

Drive Failures - Data Recovery with Open-Source Tools (part 2)

Posted by Steven Pritchard on February 24, 2024 06:21 PM

This is part 2 of a multi-part series.  See part 1 for the beginning of the series.

Note that this is material from 2010 and earlier that pre-dates the common availability of solid state drives.

Detecting failures

Mechanical failures

Mechanical drive failure is nearly always accompanied by some sort of audible noise.  One common sound heard from failing hard drives is the so-called "Click of Death", a sound similar to a watch ticking (but much louder).  This can have various causes, but it is commonly caused by the read/write head inside a drive being stuck or possibly trying to repeatedly read a failing block.

Another common noise is a very high-pitched whine.  This is caused by bearings in a drive failing (most likely rubbing metal-on-metal), usually as a result of old age.  Anything that moves inside a computer (fans, for example) can make a noise like this, so always check a suspect drive away from other sources of noise to verify that the sound is indeed coming from the drive.

Drive motors failing and head crashes can cause other distinctive noises.  As a rule, any noise coming from a hard drive that does not seem normal is probably an indicator of imminent failure.

Electronic failures

Failing electronics can cause a drive to act flaky, fail to be detected, or occasionally even catch fire.

Hard drives have electronics on the inside of the drive which are inaccessible without destroying the drive (unless you happen to have a clean room).  Unfortunately, if those fail, there isn't much you can do.

The external electronics on a hard drive are usually a small circuit board that contains the interface connector and is held onto the drive with a few screws.  In many cases, multiple versions of a drive (IDE, SATA, SCSI, SAS, etc.) exist with different controller interface boards.  Generally speaking, it is possible to transplant the external electronics from a good drive onto a drive with failing electronics in order to get data off the failing drive.  Usually the controller board will need to come from an identical drive with similar manufacturing dates.

Dealing with physical failures

In addition to drive electronics transplanting, just about any trick you've heard of (freezing, spinning, smacking, etc.) has probably worked for someone, sometime.  Whether any of these tricks work for you is a matter of trial and error.  Just be careful.

Freezing drives seems to be especially effective.  Unfortunately, as soon as a drive is operating, it will tend to heat up quickly, so some care needs to be taken to keep drives cool without letting them get wet from condensation.

Swapping electronics often works when faced with electronic failure, but only when the donor drive exactly matches the failed drive.

Freezing drives often helps in cases of crashed heads and electronic problems. Sometimes they will need help to stay cold (ice packs, freeze spray, etc.), but often once they start spinning, they'll stay spinning. Turning a drive on its side sometimes helps with physical problems as well.

Unfortunately, we do have to get a drive to spin for any software data recovery techniques to work.

To be continued in part 3.

Back on the market

Posted by Ben Cotton on February 23, 2024 09:48 PM

Nearly 10 months to the day since the last time this happened, I was informed yesterday that my position has been eliminated. As before, it’s unrelated to my performance, but the end result is the same: I am looking for a new job.

So what am I looking for? The ideal role would involve leadership in open source strategy or other high-level work. I’m excited by the opportunities to connect open source development to business goals in a way that makes it a mutually-beneficial relationship between company and community. In addition to my open source work, I have experience in program management, marketing, HPC, systems administration, and meteorology. You know, in case you’re thinking about clouds. My full resume (also in PDF) is available on my website.

If you have something that you think might be a good mutual fit, let me know. In the meantime, you can buy Program Management for Open Source Projects for all of your friends and enemies. I’m also available to give talks to communities (for free) and companies (ask about my reasonable prices!) on any subject where I have expertise. My website has a list of talks I’ve given, with video when available.

The post Back on the market appeared first on Blog Fiasco.

Untitled Post

Posted by Zach Oglesby on February 23, 2024 02:14 PM

I track the books I read on my blog and have loose goals of how many books I want to read a year, but I have been rereading the Stormlight Archive books this year in preparation for the release of book 5. Questioning if I should count them or not.

How to install and use the TFSwitch tool

Posted by Fedora fans on February 23, 2024 01:57 PM


If you use Terraform and have various projects, each of which uses a different version of Terraform, installing and removing different versions of Terraform on your system to work with those projects may not be simple or sensible. The solution is to use tfswitch. The tfswitch command line tool lets you switch between different versions of terraform. If you do not have a particular version of terraform installed, tfswitch lets you download the version you want. Installation is quick and easy. Once installed, simply run the tfswitch command, pick the version you need from the dropdown menu, and start using terraform.

Alternatively, just change into your Terraform project directory and run the tfswitch command. The tfswitch tool reads provider.tf, version.tf, or whatever other file you have specified the project's Terraform version in, and automatically downloads and installs that same version of Terraform.


Installing TFSwitch on Linux


To install tfswitch on Linux, just run the following command as the root user:

# curl -L https://raw.githubusercontent.com/warrensbox/terraform-switcher/release/install.sh | bash



The post How to install and use the TFSwitch tool first appeared on Fedora fans.

Anonymous analytics via Plausible.io

Posted by Kiwi TCMS on February 23, 2024 10:15 AM

Since the very beginning when we launched Kiwi TCMS our team has been struggling to understand how many people use it, how active these users are, which pages & functionality they spend the most time with, how many installations of Kiwi TCMS are out there in the wild, and exactly which versions are the most used ones!

We reached over 2 million downloads without any analytics inside the application because we do not want to intrude on our users' privacy, and this has not been easy! Inspired by a recent presentation we learned about Plausible Analytics - a GDPR, CCPA and cookie law compliant, open source site analytics tool - and decided to use it! You can check out how it works here.

What is changing

Starting with Kiwi TCMS v13.1 anonymous analytics will be enabled for statistical purposes. Our goal is to track overall usage patterns inside the Kiwi TCMS application(s), not to track individual visitors. All the data is in aggregate only. No personal data is sent to Plausible.

Anonymous analytics are enabled on this website, inside our official container images and for all tenants provisioned under https://*.tenant.kiwitcms.org. Running containers will report back to Plausible Analytics every 5 days to send the version number of Kiwi TCMS, nothing else! Here's a preview of what it looks like:

"preview of versions report"

You can examine our source code here and here.

Staying true to our open source nature we've made the kiwitcms.org stats dashboard publicly available immediately! In several months we are going to carefully examine the stats collected by the kiwitcms-container dashboard and consider making them publicly available as well! Most likely we will!

Who uses Plausible

A number of [open source] organizations have publicly endorsed the use of Plausible Analytics.

You can also inspect this huge list of Websites using Plausible Analytics compiled by a 3rd party vendor!

How can I opt-out

  • Leave everything as-is and help us better understand usage of Kiwi TCMS
  • Update the setting PLAUSIBLE_DOMAIN and collect your own stats with Plausible
  • Update the setting ANONYMOUS_ANALYTICS to False and disable all stats

IMPORTANT: Private Tenant customers and demo instance users cannot opt-out! Given that they are consuming digital resources hosted by our own team they have already shared more information with us than what gets sent to Plausible! Note that we do not track individual users, analyze or sell your information to 3rd parties even across our own digital properties!

Happy Testing!

If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Infra & RelEng Update – Week 8 2024

Posted by Fedora Community Blog on February 23, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for CPE (Community Platform Engineering) Team as the CPE initiatives are mostly tied to I&R work.

We provide you with both an infographic and a text version of the weekly report. If you want to quickly look at what we did, just look at the infographic. If you are interested in more in-depth details, look at the text version.

Week: 19 February – 23 February 2024

Read more: Infra & RelEng Update – Week 8 2024

I&R infographic

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives


Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Linux (OL).


Matrix Native Zodbot

With ongoing stability issues with the Matrix <-> IRC bridge and many contributors switching over to Matrix, zodbot has become increasingly unreliable. The bridge is currently shut off completely. This initiative will provide a future proof solution and allow us to conduct meetings without wasting time troubleshooting the bridge and zodbot.


  • This initiative is now finished, as zodbot has been running in Matrix for a few months and most of the initial issues have been resolved

If you have any questions or feedback, please respond to this report or contact us on -cpe channel on Matrix.

The post Infra & RelEng Update – Week 8 2024 appeared first on Fedora Community Blog.

Fedora and older XFS filesystem format V4

Posted by Justin M. Forbes on February 22, 2024 01:40 PM

Upstream deprecated the V4 format for XFS with commit b96cb835.  Next year it will default to unsupported (though it can still be enabled with a kernel config option for a while).  As such, Fedora 40 will be the last release that supports these older XFS filesystems.  Once Fedora 40 is EOL around June of 2025, the default Fedora kernel will no longer be able to mount them.

Hello Nushell!

Posted by Michel Lind on February 21, 2024 12:00 AM
After about two years of on-and-off work, I’m happy to report that Nushell has finally landed in Fedora and EPEL 9, and can be installed simply using sudo dnf --refresh install nu on Fedora and your favorite Enterprise Linux distribution (I’m partial to CentOS Stream myself, but also RHEL, AlmaLinux, etc.). For those not familiar with Nushell yet, think of it as a cross-platform PowerShell written in Rust: it lets you manipulate pipelines of structured data, the way you might be using jc and jq with a regular shell.

Anatomy of a Jam Performance

Posted by Adam Young on February 20, 2024 10:53 PM

My Band, The Standard Deviants, had a rehearsal this weekend. As usual, I tried to record some of it. As so often happens, our best performance was our warm up tune. This time, we performed a tune called “The SMF Blues” by my good friend Paul Campagna.

<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" frameborder="0" height="329" src="https://www.youtube.com/embed/ZWs9G1dTfJA?feature=oembed" title="SMF Blues" width="584"></iframe>

What is going on here? Quite a bit. People talk about Jazz as improvisation, but that does not mean that “Everything” is made up on the spot. It takes a lot of preparation in order for it to be spontaneous. Here are some of the things that make it possible to “just jam.”

Blues Funk

While no one in the band besides me knew the tune before this session, the format of the tune is a 12 bar blues. This is probably the most common format for a jam tune. A lot has been written on the format elsewhere, so I won’t go into detail here. All of the members of this group know a 12 bar blues, and can jump into one after hearing the main melody once or twice.

The beat is a rock/funk beat set by Dave, our Drummer. This is a beat that we all know. This is also a beat that Dave has probably put 1000 hours into playing and mastering in different scenarios. Steve, on Bass, has played a beat like this his whole career. We all know the feel and do not have to think about it.

This song has a really strong turn around on the last two bars of the form. It is high pitched, repeated, and lets everyone know where we are anchored in the song. It also tells people when a verse is over, and we reset.


The saxophone plays a lead role in this music, and I give directions to the members of the band. This is not to “boss” people around, but rather to reduce the number of options at any given point so that we as a unit know what to do. Since we can’t really talk in this scenario, the directions have to be simple. There are three main ways I give signals to the band.

The simplest is to step up and play. As a lead instrument, the saxophone communicates via sound to the rest of the band one of two things: either we are playing the head again, or I am taking a solo. The only person that really has to majorly change their behavior is Adam 12 on Trombone. Either he plays the melody with me or he moves into a supporting role. The rest of the band just adjusts their energy accordingly. We play louder for a repetition of the head, and they back off a bit for a solo.

The second way I give signals to the band is by direct eye contact. All I have to do is look back at Dave in the middle of a verse to let him know that the next verse is his to solo. We have played together long enough that he knows what that means. I reinforce the message by stepping to the side and letting the focus shift to him.

The third way is to use hand gestures. As a sax player, I have to make these brief, as I need two hands to play many of the notes. However, there are alternative fingerings, so for short periods, I can play with just my left hand, and use my right to signal. The most obvious signal here is the 4 finger hand gesture I gave to Adam 12 that we are going to trade 4s. This means that each of us play for four measures and then switch. If I gave this signal to all of the band, it would mean that we would be trading 4s with the drums, which we do on longer form songs. Another variation of this is 2s, which is just a short spot for each person.

One of the wonderful things about playing with this band is how even “mistakes” can work out, such as when I tried to end the tune and no one caught the signal…and it became a bass solo that ended up perfectly placed. Steve realized we had all quieted down, which is an indication for the bass to step up, and he did.

Practice Practice Practice

Everyone here knows their instrument. We have all been playing for many decades, and know what to do. But the horn is a harsh mistress, and she demands attention. As someone once said:

Skip a day and you know. Skip two days and your friends know. Skip three days and everyone knows.

Musical Wisdom from the ages

Adam 12 is the newest member of the band. It had been several years since he played regularly before he joined us. His first jam sessions were fairly short. We have since heard the results of the hard work that he has put in.

I try to play every day. It competes with many other responsibilities and activities in modern life. But my day seems less complete if I do not at least blow a few notes through the horn.


Music is timing. Our ears are super sensitive to changes in timing, whether at the micro level, translating to differences in pitch, or the macro level, with changes in tempo…which is just another word for time. Dave is the master of listening. He catches on to a pattern one of us is playing and works it into his drumming constantly. Steve is the backbone of the band. Listening to the bass line tells us what we need to know about speed and location in the song. The more we play together, the more we pick up on each other’s cues through playing. The end effect is that we are jointly contributing to an event, an experience, a performance.

Debugging an odd inability to stream video

Posted by Matthew Garrett on February 19, 2024 10:30 PM
We have a cabin out in the forest, and when I say "out in the forest" I mean "in a national forest subject to regulation by the US Forest Service" which means there's an extremely thick book describing the things we're allowed to do and (somewhat longer) not allowed to do. It's also down in the bottom of a valley surrounded by tall trees (the whole "forest" bit). There used to be AT&T copper but all that infrastructure burned down in a big fire back in 2021 and AT&T no longer supply new copper links, and Starlink isn't viable because of the whole "bottom of a valley surrounded by tall trees" thing along with regulations that prohibit us from putting up a big pole with a dish on top. Thankfully there's LTE towers nearby, so I'm simply using cellular data. Unfortunately my provider rate limits connections to video streaming services in order to push them down to roughly SD resolution. The easy workaround is just to VPN back to somewhere else, which in my case is just a Wireguard link back to San Francisco.

This worked perfectly for most things, but some streaming services simply wouldn't work at all. Attempting to load the video would just spin forever. Running tcpdump at the local end of the VPN endpoint showed a connection being established, some packets being exchanged, and then… nothing. The remote service appeared to just stop sending packets. Tcpdumping the remote end of the VPN showed the same thing. It wasn't until I looked at the traffic on the VPN endpoint's external interface that things began to become clear.

This probably needs some background. Most network infrastructure has a maximum allowable packet size, which is referred to as the Maximum Transmission Unit or MTU. For ethernet this defaults to 1500 bytes, and these days most links are able to handle packets of at least this size, so it's pretty typical to just assume that you'll be able to send a 1500 byte packet. But what's important to remember is that that doesn't mean you have 1500 bytes of packet payload - that 1500 bytes includes whatever protocol level headers are on there. For TCP/IP you're typically looking at spending around 40 bytes on the headers, leaving somewhere around 1460 bytes of usable payload. And if you're using a VPN, things get annoying. In this case the original packet becomes the payload of a new packet, which means it needs another set of TCP (or UDP) and IP headers, and probably also some VPN header. This still all needs to fit inside the MTU of the link the VPN packet is being sent over, so if the MTU of that is 1500, the effective MTU of the VPN interface has to be lower. For Wireguard, this works out to an effective MTU of 1420 bytes. That means simply sending a 1500 byte packet over a Wireguard (or any other VPN) link won't work - adding the additional headers gives you a total packet size of over 1500 bytes, and that won't fit into the underlying link's MTU of 1500.
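The header arithmetic works out like this (a sketch; the breakdown assumes WireGuard's default allowance of a 40-byte worst-case outer IPv6 header, which is why the figure is 1420 rather than the 1440 an IPv4-only path would permit):

```shell
# Worst-case per-packet overhead when tunnelling over WireGuard:
OUTER_IP=40     # outer IPv6 header (20 for IPv4, but the default plans for 40)
UDP=8           # UDP header
WIREGUARD=32    # WireGuard framing fields plus the 16-byte auth tag
LINK_MTU=1500
echo $((LINK_MTU - OUTER_IP - UDP - WIREGUARD))   # 1420
```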

And yet, things work. But how? Faced with a packet that's too big to fit into a link, there are two choices - break the packet up into multiple smaller packets ("fragmentation") or tell whoever's sending the packet to send smaller packets. Fragmentation seems like the obvious answer, so I'd encourage you to read Valerie Aurora's article on how fragmentation is more complicated than you think. tl;dr - if you can avoid fragmentation then you're going to have a better life. You can explicitly indicate that you don't want your packets to be fragmented by setting the Don't Fragment bit in your IP header, and then when your packet hits a link where your packet exceeds the link MTU it'll send back a packet telling the remote that it's too big, what the actual MTU is, and the remote will resend a smaller packet. This avoids all the hassle of handling fragments in exchange for the cost of a retransmit the first time the MTU is exceeded. It also typically works these days, which wasn't always the case - people had a nasty habit of dropping the ICMP packets telling the remote that the packet was too big, which broke everything.
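This mechanism can be probed by hand with ping's Don't Fragment option (a sketch; example.com is a placeholder host, and -M do is the Linux iputils flag that forbids fragmentation):

```shell
# An ICMP echo packet is payload + 20-byte IPv4 header + 8-byte ICMP header.
echo $((1392 + 20 + 8))   # 1420: the largest ping that fits the tunnel
# So, with Don't Fragment set:
#   ping -M do -s 1392 example.com   # fits a 1420-byte path MTU
#   ping -M do -s 1393 example.com   # fails, reporting the path MTU instead
```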

What I saw when I tcpdumped on the remote VPN endpoint's external interface was that the connection was getting established, and then a 1500 byte packet would arrive (this is kind of the behaviour you'd expect for video - the connection handshaking involves a bunch of relatively small packets, and then once you start sending the video stream itself you start sending packets that are as large as possible in order to minimise overhead). This 1500 byte packet wouldn't fit down the Wireguard link, so the endpoint sent back an ICMP packet to the remote telling it to send smaller packets. The remote should then have sent a new, smaller packet - instead, about a second after sending the first 1500 byte packet, it sent that same 1500 byte packet. This is consistent with it ignoring the ICMP notification and just behaving as if the packet had been dropped.

All the services that were failing were failing in identical ways, and all were using Fastly as their CDN. I complained about this on social media and then somehow ended up in contact with the engineering team responsible for this sort of thing - I sent them a packet dump of the failure, they were able to reproduce it, and it got fixed. Hurray!

(Between me identifying the problem and it getting fixed I was able to work around it. The TCP header includes a Maximum Segment Size (MSS) field, which indicates the maximum size of the payload for this connection. iptables allows you to rewrite this, so on the VPN endpoint I simply rewrote the MSS to be small enough that the packets would fit inside the Wireguard MTU. This isn't a complete fix since it's done at the TCP level rather than the IP level - so any large UDP packets would still end up breaking)
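The MSS arithmetic is simple; a sketch assuming the minimal 20-byte IP and 20-byte TCP headers (connections using TCP options need a little more headroom):

```python
# MSS is the TCP payload size: the path MTU minus the IP and TCP headers.
# Assumes minimal 20-byte headers with no options.

IP_HDR = 20
TCP_HDR = 20

def mss_for(mtu: int) -> int:
    """Largest TCP segment payload that fits in a packet of size `mtu`."""
    return mtu - IP_HDR - TCP_HDR

# For a 1420-byte Wireguard link, clamping the MSS to 1380 keeps every
# full-sized TCP packet within the tunnel's MTU.
print(mss_for(1420))  # -> 1380
```

On Linux this rewriting is done with the iptables TCPMSS target on SYN packets in the mangle table, either with an explicit `--set-mss` value or with `--clamp-mss-to-pmtu` to derive it from the route.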

I've no idea what the underlying issue was, and at the client end the failure was entirely opaque: the remote simply stopped sending me packets. The only reason I was able to debug this at all was because I controlled the other end of the VPN as well, and even then I wouldn't have been able to do anything about it other than being in the fortuitous situation of someone able to do something about it seeing my post. How many people go through their lives dealing with things just being broken and having no idea why, and how do we fix that?

(Edit: thanks to this comment, it sounds like the underlying issue was a kernel bug that Fastly developed a fix for - under certain configurations, the kernel fails to associate the MTU update with the egress interface and so it continues sending overly large packets)


Week 7 in Packit

Posted by Weekly status of Packit Team on February 19, 2024 12:00 AM

Week 7 (February 13th – February 19th)

  • Packit now supports the special value ignore for trigger in the jobs configuration, which indicates that the job should not be executed at all. This can be useful for templates or temporarily disabled jobs. (packit#2234)
  • We have fixed the caching of data for the usage API endpoint. (packit-service#2350)
  • We have fixed an issue that caused the same data to be loaded multiple times in the dashboard's project views. (packit-service#2349)
  • We have also fixed the dashboard's Usage page crashing on unsuccessful queries. (dashboard#378)
  • We have fixed the parsing of resolved Bugzillas in comments with multiple arguments specified, e.g. /packit pull-from-upstream --with-pr-config --resolved-bugs rhbz#123. (packit-service#2346)

Episode 416 – Thomas Depierre on open source in Europe

Posted by Josh Bressers on February 19, 2024 12:00 AM

Josh and Kurt talk to Thomas Depierre about some of the European efforts to secure software. We touch on the CRA, MDA, FOSDEM, and more. As expected Thomas drops a huge amount of knowledge on what’s happening in open source. We close the show with a lot of ideas around how to move the needle for open source. It’s not easy, but it is possible.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3321-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_416_Thomas_Depierre_on_open_source_in_Europe.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_416_Thomas_Depierre_on_open_source_in_Europe.mp3</audio>

Show Notes

New Samsung S90C OLED – Green Screen Of Death

Posted by Jon Chiappetta on February 18, 2024 11:56 PM

So last year, for my birthday, I purchased a new Sony PS5 to update my gaming console after so many years, and it immediately failed on me by always shutting down during game play. This year, for my birthday, I decided to update my 8-year-old TV with a new Samsung OLED for the first time, and as my luck would have it, I was presented with the “Green Screen Of Death”. The TV only arrived a few days ago and is now dead, so I have to go through the process of trying to contact a certified Samsung repair person to see if it can even be fixed. I can’t tell if it’s just my bad luck lately or if quality control at these companies has gone downhill, but it’s starting to get harder and harder to find good-quality alternatives! 😦

<figure class="wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" frameborder="0" height="567" src="https://www.youtube.com/embed/vjmcboqltgA?feature=oembed" title="New Samsung S90C OLED - Green Screen Of Death" width="1008"></iframe>
</figure> <figure class="wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<iframe allowfullscreen="true" class="youtube-player" height="567" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox" src="https://www.youtube.com/embed/cyWlACuhqNg?version=3&amp;rel=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;fs=1&amp;hl=en&amp;autohide=2&amp;wmode=transparent" style="border:0;" width="1008"></iframe>
</figure>

The High Cost of Flying

Posted by Arnulfo Reyes on February 18, 2024 05:53 PM

The incident involving Alaska Airlines flight 1282, in which an emergency exit door detached from a Boeing 737-MAX9 in mid-flight, has reignited the discussion about Boeing's decline as an aircraft manufacturer. This airplane belongs to the same 737 MAX family that suffered two fatal crashes in 2018 and 2019.


Most sources point to the same triggering event: the 1997 merger with McDonnell-Douglas, which transformed Boeing from an engineering-driven company focused on building the best possible airplanes into one overwhelmingly focused on finances and its stock price.

This description seems accurate, but it omits a large part of the picture: the brutal structure of the commercial aircraft industry, which pushes companies like Boeing to create things like the 737 MAX (an update of a 50-year-old airplane) instead of designing a new model from scratch. By breaking down how the commercial aircraft industry works, we can better understand Boeing's behavior.

The words of George Ball, managing director of Lehman Brothers in 1982, and Jean Pierson, former CEO of Airbus, resonate strongly in this context: “There is no historical precedent or current parallel for the magnitude of financial exposure risk assumed by an American airframe company” and “You can't win, you can't break even, and you can't quit.” These quotes underscore the complexity and the challenges inherent in the commercial aviation industry.

The risk of developing a new airplane

Commercial aircraft manufacturing resembles any other manufacturing industry in some respects. A company develops a product and tries to sell it for a price high enough to cover its development and production costs. If it succeeds and turns a profit, it keeps developing new products; if not, it shuts down. What sets the aviation industry apart is the scale at which these things happen.

Developing a commercial airplane is incredibly expensive. Budgets for new aircraft development programs run into the billions of dollars, and the inevitable overruns can push development costs to $20–30 billion or more.

This is not simply a case of bloated modern companies that have forgotten how to do things efficiently (although there is some of that): developing a jet airliner has always been expensive. Boeing spent between $1.2 and $2 billion to develop the 747 in the late 1960s (~$10–20 billion in 2023 dollars), and other manufacturers of the era, such as Lockheed and McDonnell-Douglas, noted that their own development costs for new aircraft were similar.

The cost of developing a new commercial airplane can be a significant fraction, if not more, of the company's total value. For example, Boeing spent $186 million developing its first jet airliner, the 707, in 1952, which was $36 million more than the company was worth.

When Boeing began development of the 747 in 1965, the company was valued at $375 million, less than a third of what it spent developing the 747. Most other programs are not as lopsided, but they still represent enormous risk. The Boeing 777 is estimated to have cost between $12 and $14 billion to develop, at a time when Boeing was worth around $30 billion. And when Airbus launched the A380 program, it budgeted $10.7 billion, half the company's value (and far less than it ultimately spent). Aircraft manufacturers often bet the company when they decide to develop a new jet model.

Spending billions of dollars on new product development is not unique to the aviation industry: a new car model costs billions of dollars to develop, as does a new drug. But both cars and drugs can spread their development costs across millions of product sales. The commercial aircraft market, on the other hand, is much smaller; only around 1,000 large jets are sold each year. Aircraft manufacturers need to be able to recoup the billions spent on product development, factories, and tooling with only a few hundred sales.

This creates some difficulties for aircraft manufacturers. For one, it makes learning curves very important. Learning curves are the phenomenon whereby production costs (or some related measure, such as labor hours) tend to fall by a constant percentage for each cumulative doubling of production volume: going from 10 to 20 units produced yields the same percentage cost decrease as going from 10,000 to 20,000.
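The learning-curve relationship described above is a simple power law. A sketch with an illustrative 80% learning rate (unit cost falls to 80% of its previous level with each cumulative doubling; the numbers are mine, not the article's):

```python
import math

# Learning-curve model: each cumulative doubling of output multiplies unit
# cost by a constant "learning rate". Figures are illustrative only.

def unit_cost(first_unit_cost: float, units_built: int,
              learning_rate: float = 0.8) -> float:
    """Cost of the Nth unit under a constant-learning-rate curve."""
    b = math.log2(learning_rate)  # negative exponent of the power law
    return first_unit_cost * units_built ** b

# Doubling from 10 to 20 units cuts unit cost by the same fraction
# as doubling from 10,000 to 20,000:
r1 = unit_cost(100, 20) / unit_cost(100, 10)
r2 = unit_cost(100, 20_000) / unit_cost(100, 10_000)
print(round(r1, 3), round(r2, 3))  # -> 0.8 0.8
```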

High-volume products spend most of their time on a relatively “flat” portion of the learning curve, where doublings are increasingly far apart. If you have produced 1,000,000 of something, making another 500 or 5,000 makes almost no difference in learning-curve terms. But if you have only made 50 of something, making another 500 makes a huge difference in the level of cost reduction that can be achieved. So if you only plan to sell a few hundred of something, a relatively small number of sales will have a large impact on how efficiently you produce and how profitable you are.

Commercial aircraft manufacturers depend on getting enough orders to push them far enough down the learning curve that they earn enough money per airplane to recoup the cost of developing it. The first airplanes may be produced so inefficiently that they sell for less than they cost to make. The break-even point is typically in the neighborhood of 500 airplanes (and can be far more if a program runs badly over budget); sell fewer, and the program will lose money.
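A toy break-even model under the same kind of learning-curve assumption: development cost is sunk up front, each airplane's production cost falls along an 80% curve, and the sale price is fixed. Every dollar figure here is invented for illustration, not taken from the article:

```python
import math

# Toy break-even model. Dollar figures are in millions and are invented
# purely for illustration.

def units_to_break_even(dev_cost: float, first_unit_cost: float,
                        price: float, learning_rate: float = 0.8):
    """Number of sales needed for cumulative cash flow to turn positive."""
    b = math.log2(learning_rate)  # negative exponent of the power law
    cash = -dev_cost              # development cost is sunk up front
    for n in range(1, 100_000):
        cash += price - first_unit_cost * n ** b  # margin on the nth unit
        if cash >= 0:
            return n
    return None  # never breaks even at this price

# $10B development, $150M first unit, $100M sale price: early units sell
# at a loss, and the program needs a couple of hundred sales to climb out
# of the hole.
print(units_to_break_even(10_000, 150, 100))
```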

The relatively small number of aircraft sales also creates intense pressure to accurately predict trends in air travel. Manufacturers need to skate to where the puck will be: to develop the kind of airplane airlines will want for many years into the future.

Guessing wrong can be disastrous. Airbus lost enormous amounts of money when it misread the market for its huge A380 (it sold only 251 airplanes, far below what was needed to break even, and the last A380 rolled off the line in 2021). Airbus projected that continued growth in air travel would create demand for an even larger airplane that could move passengers economically between large hub airports. But international travel in fact fragmented, and airlines increasingly fly direct routes between destinations using smaller, easier-to-fill airplanes like the Boeing 787. In the late 1960s, Lockheed and McDonnell-Douglas ruined each other by each developing a new airplane (the L-1011 and the DC-10, respectively) for what turned out to be a very small market: fewer than 700 airplanes were sold between them. Lockheed ended up abandoning the commercial aircraft market after losing $2.5 billion on the program (~$9 billion in 2023 dollars), and McDonnell-Douglas never recovered, finally selling itself to Boeing in 1997 as its market share dwindled and it could not fund the development of new airplanes.

But guessing right comes with its own problems. If a program falls behind schedule, that can mean lost orders, a loss of airline confidence, and ultimately customers driven into a competitor's arms. Boeing originally planned to introduce its 787 in 2008, but it slipped to 2011, leading customers to buy the Airbus A330 instead (Airbus boasted that it sold more A330s after Boeing launched the 787). Had Boeing delivered on time, its advantage over Airbus would have been enormous.

And having too many orders for a new airplane can be almost as bad as having too few. In the late 1960s, Douglas drowned in unexpectedly high demand for its DC-9: it could not meet its delivery deadlines, was forced to pay compensation to the affected airlines, and was nearly driven into bankruptcy by cash-flow problems, resulting in a merger with McDonnell Aircraft. Boeing had a similar struggle trying to rapidly ramp up production of its revised 737 (called the 737 Next Generation, or NG) in the mid-1990s: it was eventually forced to temporarily halt production because of the chaos, resulting in late deliveries (and the associated penalties) and a net loss for 1997, the company's first since 1959. Boeing is estimated to have lost a billion dollars on the first 400 737NGs, even though they were derivatives of an airplane Boeing had been building since the 1960s.

Global events can rapidly shift trends in air travel, completely changing the kinds of airplanes airlines want to buy. Airbus, for example, had little initial success selling its first model, the A300. In 1978, four years after its introduction, it had sold only 38 of them. But rising fuel prices driven by the second oil crisis, together with growing airline competition, created demand for a fuel-efficient, twin-engine wide-body airliner, and only the A300 fit the requirements. By 1979, sales had risen to more than 300. Similarly, airline deregulation in the United States forced airlines to compete much harder on price and to focus on things like fuel efficiency and operating cost. This changed the calculus of which airplane types they were interested in buying, and it increased demand for airplanes like Boeing's 737.

Often, success comes down to luck as much as anything else.

Airbus got lucky when rising oil prices meant its A300 was suddenly in high demand with no competition. Boeing got lucky with its 737 in the 1960s, which entered service more than two years after the similar DC-9 and succeeded partly because of Douglas's production delays. And Boeing got lucky again with the 747, which, like the A380, was an enormous airplane few airlines actually needed. It succeeded only because Juan Trippe, the founder of Pan Am, bought them on a whim (Trippe liked having the newest airplanes and “saw no need for market analysis”). Other airlines followed suit, unwilling to concede to Pan Am the marketing benefit of having the biggest airplanes (though the 747 became increasingly useful as international travel grew).

Aircraft manufacturers face the thankless task of trying to navigate this landscape while risking billions of dollars. But of course, simply not developing new products is not an option either: as in any other industry, competitors try to secure their own advantage by pitching their products to a limited number of customers. Aircraft technology is constantly changing, and airlines (in brutal competition themselves) want all the latest technologies: better aerodynamics, lighter materials, bigger and more efficient engines, and so on, which will lower their operating costs and improve their passengers' experience. Losing even one order can be a major setback for an aircraft manufacturer, both because of the small number of customers overall and because sales tend to have momentum: a sale to an airline likely means more future sales to that airline (since there will be fleet-commonality efficiencies in things like shared maintenance and training), or sales to partner airlines, or sales to competitors who want to bet on the winning horse. Airlines are keenly aware that a few lost sales can put an aircraft manufacturer on dangerous ground, which makes it risky for an airline to hitch itself to a loser.

If an aircraft manufacturer does successfully navigate this landscape, the reward is a paltry profit: between 1970 and 2010, Boeing, the most successful commercial aircraft builder, averaged a bit over 5% annual profit. It's no surprise that fierce, costly competition and miserable margins have gradually driven competitors out of the space, leaving only Boeing and Airbus (and, if you're feeling generous, Bombardier and Embraer). Companies like de Havilland, Dassault, Lockheed, Douglas, Convair, and Glenn Martin have all been driven out or forced to merge. Surveying the history of jet aircraft manufacturing in 1982, John Newhouse estimated that of the 22 commercial jet airliners developed up to that point, only two, the Boeing 707 and the Boeing 727, were believed to have made money (though he noted the 747 might eventually join that list).

The upshot of these difficulties is that aircraft manufacturers think very carefully before developing a new airplane: the risks are large, the rewards small and uncertain. It is often a much safer bet to simply develop a modification of an existing model, keeping the same basic fuselage and adding more efficient engines or a tweaked wing shape, or stretching it to add more passenger capacity. Revising an existing model can cost only 10–20% of designing a new airplane from scratch, and it can deliver nearly as much benefit. A typical new airplane might be 20–30% more fuel-efficient than existing designs, but Boeing managed a 15–16% improvement with the revised 737 MAX. And updating an existing model is also cheaper for the airlines: they don't have to retrain their pilots to fly the new airplane.

To see what this kind of calculus looks like in practice, let's take a look at the history of the Boeing 737, which has been revised and updated repeatedly since its first flight in 1967.

Evolution of the Boeing 737

Boeing first developed the 737 in the mid-1960s as a short-range, small-capacity airliner to fill out its product line and keep Douglas from taking the entire low end of the market with its DC-9. It was not particularly successful initially, nor was it expected to be: Douglas had a two-year head start with the DC-9, and Boeing's earlier 727 was already serving much of that market, albeit with 3 engines instead of the 737's more efficient 2. In fact, the program came close to being canceled shortly after launch because of low initial sales. To minimize development cost and time, the 737 was designed to share as many parts as possible with the earlier 707 and 727.

The 737's initial performance was lower than expected, so Boeing developed an “advanced” version with improved aerodynamics in 1970. But even with these improvements, Douglas's struggles to build the DC-9, and an order for a military version of the 737 (the T-43A trainer), sales were still slow. The airplane was being built at a rate of just two per month, and into 1973 the program was on the verge of cancellation (during this period, Boeing nearly went bankrupt due to cost overruns on the 747 program, and had to lay off 75 percent of its workforce). The 737 was spared only because it was finally selling for more than its production costs, though it was not expected to recoup its development costs.

But sales began to pick up in 1973, and production had reached five airplanes per month by 1974. By 1978, against all odds, it had become the world's best-selling airliner, a title it held from 1980 to 1985. Airline deregulation in the US had caused a shift in airline strategy: instead of connecting two cities directly with low-volume flights, airlines began connecting through hub airports, using smaller, cheaper-to-operate airplanes. The 737 fit the bill perfectly.

But Boeing's competitors were not standing still. Douglas launched an updated version of its DC-9, the Super 80, with an improved version of its Pratt and Whitney engine that made it quieter and more fuel-efficient than the 737. To counter the threat, and to deal with increasingly strict noise regulations, Boeing responded with the “new generation” Boeing 737–300, which began development in 1981. This version of the 737 added passenger capacity, improved the aerodynamics, and carried a new, more efficient high-bypass turbofan from CFMI (a joint venture between GE and the French company SNECMA).

Fitting such a large engine under the 737's wing was a challenge. The 737 had originally been designed with low ground clearance to accommodate “second-tier” airports with limited stair systems. Extending the landing gear to raise the airplane would have required relocating the wheel wells, which would have changed the airplane's structure enough to essentially make it a new airplane. Instead, the engine was squeezed into the available space, giving it a characteristic ellipsoid shape. This high-bypass engine gave the 737–300 an 18% improvement in fuel efficiency over previous-generation airplanes, and an 11% improvement over McDonnell-Douglas's Super 80, while still keeping it as similar as possible to the earlier 737.

As the 737–300 took shape, a new competitor was emerging. Following the success of its A300, Airbus began development of the smaller A320, a direct competitor to the 737, in 1984. The A320 incorporated many advanced technologies, such as fly-by-wire, which replaced the heavy mechanical or hydraulic linkages between the airplane's controls and its control surfaces with lighter electronic connections. By 1987, the A320 had already racked up 400 orders, including a large one from Northwest Airlines, a long-time Boeing customer. It was clearly going to be a fierce competitor.

<figure><figcaption>Concept art for the 7J7</figcaption></figure>

Some argue that Boeing could have (and should have) killed the A320 immediately by announcing a new clean-sheet airplane. At the time, Boeing was working on a 737-sized airplane called the 7J7, which used an advanced “unducted fan” (UDF) engine from GE. In theory, the 7J7 would have been 60% more fuel-efficient than existing airplanes, while also incorporating technologies like fly-by-wire. But the UDF engine had unresolved technical problems, such as high noise generation, and Boeing was worried about how long it would take to bring the 7J7 to market. Instead, Boeing developed another stretched version of its 737 (the 737–400), canceled the 7J7 project, and began developing an airplane to fill the gap between its 767 and 747: the 777.

But as the A320 continued to eat into the market and more long-time Boeing customers defected (like United in 1992), it was clear that a 737 replacement was required. Many once again favored developing an entirely new airplane (which Airbus believes would have been catastrophic for the A320), but Boeing was wary of new airplanes after the 777. Although that program ran on schedule and delivered an exceptional airplane, costs ballooned, up to $14 billion by some estimates ($28 billion in 2023 dollars) against a projected budget of $5 billion. Instead, Boeing launched the 737 “Next Generation” (NG), another update of the 737 airframe. The 737NG featured, among other things, a new wing design, a more efficient engine that cut fuel costs by 9% and maintenance costs by 15%, and “winglets” to improve aerodynamics. The 737NG also reduced the part count by 33% compared with previous versions, while still retaining enough similarity to require minimal pilot retraining and remain within the FAA's “derivative” rules.

First delivered in December 1997, the 737NG became immensely popular, with the 737–800 version selling more than 5000 airplanes over the following 20 years (though, as we've noted, ramping up production came with immense difficulties). This was still not enough to bury the A320, which also continued to sell well. Some at Airbus believe an entirely new airplane would have been catastrophic for the A320 in the late 1990s.

By the early 2000s, the 737 and the A320 had become the most important products in Boeing's and Airbus's lineups, together accounting for 70% of the commercial aircraft market. Once again Boeing began considering a 737 replacement and started a project, Yellowstone, to explore clean-sheet replacements for the 737 and other Boeing airplanes. But the findings were not particularly encouraging: without an advanced new engine (which wouldn't be ready until 2013 or 2014), fuel-efficiency improvements would be at most 4%. And the technologies it would borrow from the in-development 787, such as advanced composites, would be hard to scale to the high production volumes required for a 737 replacement.

Boeing had once again grown wary of new airplanes because of its experience with the 787, which had gone massively over budget and behind schedule. The new, finance-focused Boeing had been reluctant enough to approve the 787's development, and was now even more so.

But by 2010, with new engines like the Pratt and Whitney GTF and the CFM LEAP on the horizon, Boeing was leaning strongly toward a clean-sheet 737 replacement.

Boeing's hand ended up being forced by Airbus.

In 2011, Airbus began work on a re-engined A320 with significantly improved performance, called the A320neo (for “new engine option”), and used it to partially lure away a major Boeing customer, American Airlines (which split a large order between Boeing and Airbus). Airbus believed Boeing would feel compelled to respond with a re-engine of its own rather than lose more customers while it developed a clean-sheet replacement. Customers, for their part, had lost confidence that Boeing could deliver a new airplane on time after the 787 debacle, and also preferred that Boeing launch a re-engine with a better chance of arriving on schedule. A re-engine would deliver nearly all the benefits of an entirely new airplane (~15–16% fuel savings versus 20% for a typical clean-sheet design), would cost perhaps 10–20% as much to develop, and would avoid the cost of airlines having to retrain pilots, as well as things like figuring out how to produce composite parts in high volumes.

The rest, of course, is history.

Instead of a new airplane, Boeing developed another revision of the 737, the 737 MAX. Fitting even larger engines on the airplane while keeping it similar enough to fall under the FAA's derivative rules required moving them well forward and tilting them slightly upward, which slightly changed the airplane's handling characteristics. To keep its behavior similar to earlier 737s, Boeing created software, MCAS, to try to emulate the behavior of the earlier airplanes. The MCAS software, and its interactions with various sensors, ultimately caused two fatal crashes of 737 MAX flights.

Conclusion: I sometimes think about how the boundary of technological possibility is defined not only by what the laws of the universe permit, but by the limits of the economy and the organizations operating within it. If products are sufficiently complex, and demand for them comes in quantities so small that there is only a limited business case, we won't get them, even if they are physically possible to build.

Nuclear submarines seem to be close to this boundary: enormously complex weapons that only a handful of organizations on the planet are capable of building. Jet airliners seem to be heading rapidly toward this outer boundary, if they aren't there already.

The cost and level of technology required, together with the tremendous risk of developing them and the small number of sales over which the costs can be recouped, have already reduced the number of suppliers to essentially two (though perhaps China's COMAC will eventually add a third player), and there is no evidence that it is getting any easier.

podman-compose and systemd

Posted by Jens Kuehnel on February 17, 2024 09:45 PM

I’m using podman, and especially podman-compose, more and more. podman-compose is not part of RHEL, but it is available in EPEL and in Fedora. Of course I run it as a non-root user. It really works great, but creating systemd unit files for podman-compose is ugly. I had it running for about a year, but I wanted to look for something better. This blog post covers Fedora (tested with 39), RHEL8 and RHEL9. All of them have some minor problems, but sometimes different ones.

I had wanted to try Quadlet on podman for over a year. This week I had a closer look and found that it is more complicated than I thought. I really like the simple one-file solution of a compose file. I found podlet to migrate compose files to quadlet. (Use the podlet musl build if you have problems with the glibc of the GNU version.)

But in the end I really want to continue using the compose files that are provided by most of the tools that I use, and I had only very small problems with podman-compose, all of them easily fixable. So I decided to use podman-compose systemd. There is not a lot of documentation, but I really liked it.

I ran into quite a lot of problems, but I will show you here how to fix them and how my setup works now.

Setup as root

First things first. I always run it as non-root, of course. If you do too, please do a “restorecon -R” on the home directory of the user that runs the containers. audit2allow and the logs will not show the problem (you have to disable noaudit to see it, I fear), but it will interfere with the startup of your containers.

You want to make sure the container user can run services even when not logged in, so you have to enable lingering with:

loginctl enable-linger USERNAME.

I enabled cgroup v2 on my RHEL8 box, and there is a bug that you have to work around. The problem can be fixed in different ways, but I chose to change the file /etc/containers/containers.conf. Of course this is not needed on RHEL9 or Fedora.
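The post doesn't spell out which setting was changed; a common fix for rootless podman on RHEL8 with cgroup v2 (my assumption, not confirmed by the author) is switching the OCI runtime to crun, since the runc shipped with RHEL8 predates cgroup v2 support:

```toml
# /etc/containers/containers.conf -- assumed change, not quoted from the post
[engine]
# runc on RHEL8 lacks cgroup v2 support; crun handles it
runtime = "crun"
```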


To use podman-compose systemd you need a systemd template, and I chose to set it up in /etc because I have multiple users running multiple applications. (If the filename in the output of systemctl status on Fedora/RHEL9 looks strange to you: there is a link /etc/xdg/systemd/user -> ../../systemd/user.)

You can run the command podman-compose systemd -a create-unit as root, or you can run it as a normal user and paste the output into /etc/systemd/user/podman-compose@.service. But on all platforms (podman-compose runs as version 1.0.6 on all of them) the template has an error that prevents the containers from starting successfully: you have to add “--in-pod pod_%i” to the up command (thanks to RalfSchwiete). I also added the ExecStopPost line. Here is my complete template:

# /etc/systemd/user/podman-compose@.service
[Unit]
Description=%i rootless pod (podman-compose)

[Service]
ExecStartPre=-/usr/bin/podman-compose --in-pod pod_%i up --no-start
ExecStartPre=/usr/bin/podman pod start pod_%i
ExecStart=/usr/bin/podman-compose wait
ExecStop=/usr/bin/podman pod stop pod_%i
ExecStopPost=/usr/bin/podman pod rm pod_%i

[Install]
WantedBy=default.target


As User

With the preparation as root finished, the setup as non-root is quite simple. Almost 😉 . First stop the containers with “podman-compose down”. Then go to the directory with the podman-compose.yml (or docker-compose.yml if you use the old name) file and run “podman-compose systemd”. Be careful, as this command starts the containers again. I always stop the containers again with “podman-compose down” and start them up again with “systemctl --user enable --now ‘podman-compose@COMPOSE’”. Otherwise you are not sure whether the systemctl command is working.

But not on Fedora and RHEL9: there I always got the error message “Failed to connect to bus: No medium found“. The solution was not to use “su - USERNAME” but instead:

machinectl shell USERNAME@ 

With su the DBUS_SESSION_BUS_ADDRESS is missing on Fedora and RHEL9. This is a known issue, but Red Hat states that “Using su or su - is not a currently supported mechanism of rootless podman.” I’m not sure whether machinectl is supported or not, but I can tell you it works. If you have never heard of machinectl before, or didn’t know that machinectl has a shell option, you are not alone. 🙂 The official way is to ssh into the machine as USERNAME. (I like my way better. :-))
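For completeness, a workaround I have seen elsewhere (an assumption on my part, not something the post recommends; machinectl remains the cleaner route) is to point the missing variable at the user's session bus socket by hand after su:

```shell
# Hypothetical su workaround: set the variables a login session would provide.
# Requires lingering, so that /run/user/<uid> and its bus socket exist.
export XDG_RUNTIME_DIR="/run/user/$(id -u)"
export DBUS_SESSION_BUS_ADDRESS="unix:path=${XDG_RUNTIME_DIR}/bus"
echo "$DBUS_SESSION_BUS_ADDRESS"
```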

Running, but where are the logs?

If everything works, you get output like this at the end of the podman-compose systemd command:

you can use systemd commands like enable, start, stop, status, cat
all without sudo like this:

            systemctl --user enable --now 'podman-compose@COMPOSE'
            systemctl --user status 'podman-compose@COMPOSE'
            journalctl --user -xeu 'podman-compose@COMPOSE'

and for that to work outside a session
you might need to run the following command once

            sudo loginctl enable-linger 'USERNAME'

you can use podman commands like:

            podman pod ps
            podman pod stats 'pod_COMPOSE'
            podman pod logs --tail=10 -f 'pod_COMPOSE'

systemctl --user status ‘podman-compose@COMPOSE’ worked fine on RHEL8 and shows the output of the command. But on Fedora and RHEL9 it did not show anything. On all versions, the command journalctl --user -xeu ‘podman-compose@COMPOSE’ never shows any output.

To fix this, your non-root user has to become a member of the systemd-journald group. But even then you have to use the right command, on all platforms: not the one from the output above, but this instead:

journalctl -xe --user-unit 'podman-compose@COMPOSE'

As you can see, podman-compose is quite nice, but there are a lot of stumbling blocks. Once you know how to avoid them, it works quite well.

Some musings on matrix

Posted by Kevin Fenzi on February 17, 2024 06:21 PM

The Fedora project has moved pretty heavily toward Matrix in the last while, and I thought I would share some thoughts on it: good, bad, technical and social.

The technical:

  • I wish I had known more about how rooms work. Here’s my current understanding:
    • When a room is created, it gets an ‘internal roomid’. This is a ! (annoyingly for the shell) followed by a bunch of letters, a ‘:’, and the homeserver of the user who created it. It will keep this roomid forever, even if it no longer has anything at all to do with the homeserver of the user who created it. It’s also just an identifier: it could say !asdjahsdakjshdasd:example.org, but the room isn’t ‘on’ example.org and could have 0 example.org users in it.
    • Rooms also have 0 or more local addresses. If a room has 0, people can still join by the roomid (if the room isn’t invite only), but that’s pretty unfriendly. Local addresses look like #alias:homeserver. Users can only add local addresses for their own homeserver: if you were using a matrix.org account, you could only add addresses ending in :matrix.org to a room, not addresses on any other server. Local addresses can be used by people on your same homeserver to find rooms.
    • Rooms also have 0 or more published addresses. If 1 or more are set, one of them is ‘main published address’. These can only be set by room admins and optionally published in the admins homeservers directory. published addresses can only be chosen from the list of existing local addresses. ie, you have to add a local address, then you can make it a published address, a main published address and if it’s in your homeserver directory or not. If you do publish this address to your directory, it allows users to search your homeserver and find the room.
    • Rooms have names. Names can be set by admins/moderators and are the ‘human friendly’ name of the room. They can be changed and nothing about the roomid or addresses changes at all. Likewise topic, etc.
    • Rooms are federated to all the homeservers that have users in the room. That means if there are only people from one homeserver in the room, it’s actually not federated/synced anywhere but that homeserver. If someone joins from another server, that server gets the federated data and starts syncing. This can result in a weird case: someone makes a room, publishes its address to the homeserver directory, other people join, and then the room creator (and all others from that homeserver) leave… the room is no longer actually synced on the server its address is published on (resulting in not being able to join it easily by address).
    • Rooms work on events published to them. If you create a room, then change the name, the ‘name changed’ event is in that room’s timeline. If you look at the events before that one, you can see the state at that time with the old name, etc.
    • Rooms have ‘versions’. Basically what version of the matrix spec the room knows about. In order to move to a newer version you have to create a new room.
    • Rooms can be a ‘space’. This is an organizational tool to show a tree of rooms. We have fedora.im users join the fedoraproject.org ‘space’ when they first log in. This allows you to see the collection of rooms and join some default ones. They really are just rooms though, with a slightly different config. Joining a space room joins you to the space.
  • The admin api is really handy, along with synadm ( https://github.com/JOJ0/synadm ). You can gather all kinds of interesting info, make changes, etc.
  • When you ‘tombstone’ a room (that is, you put an event there that says ‘hey, this room is no longer used, go to this new room’), everyone doesn’t magically go to the new room. They have to click on the thing, and in some clients they just stay in the old room too. And if it happened a long while back and people have left a bunch, depending on your client you may not even see the ‘go to new room’ button. ;( For this reason, I’ve taken to renaming rooms that are old to make that more apparent.
  • There’s a bit of confusion about how fedoraproject has setup their servers, but it all hopefully makes sense: We have 2 managed servers (from EMS). One of them is the ‘fedora.im’ homeserver and one is the ‘fedoraproject.org’ homeserver. All users get accounts on the fedora.im homeserver. This allows them to use matrix and make rooms and do all the things that they might need to do. Having fedoraproject.org (with only a small number of admin users) allows us to control that homeserver. We can use it to make rooms ‘official’ (or at least more so) and published in the fedoraproject.org space. Since you have to be logged in from a specific homeserver before you can add local addresses in it, this nicely restricts ‘official’ rooms/addresses. It also means those rooms will be federated/synced between at least fedoraproject.org and fedora.im (but also it means we need to make sure to have at least one fedoraproject.org user in those rooms for that to happen).
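To make the identifier shapes described above concrete, here is a tiny sketch (plain Python, no Matrix library; the example values are made up) that distinguishes an internal roomid from an address:

```python
# Matrix identifiers share a "sigil + localpart + ':' + server" shape.
# '!' marks an opaque internal roomid (the server part is only historical);
# '#' marks a human-friendly address (alias) on a homeserver.

def classify(identifier: str) -> str:
    if len(identifier) < 2:
        return "invalid"
    localpart, sep, server = identifier[1:].partition(":")
    if not (sep and localpart and server):
        return "invalid"
    if identifier[0] == "!":
        # Note: the room is not necessarily "on" this server anymore.
        return f"internal roomid (created via {server})"
    if identifier[0] == "#":
        return f"address on homeserver {server}"
    return "invalid"

print(classify("!asdjahsdakjshdasd:example.org"))  # internal roomid (created via example.org)
print(classify("#fedora:fedoraproject.org"))       # address on homeserver fedoraproject.org
print(classify("fedora"))                          # invalid
```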

The good:

  • When I used to reboot my main server (which runs my IRC bouncer), I would just lose any messages that happened while the machine was down. With matrix, my server just pulls those from federated servers. No loss!
  • In general things work fine, people are able to communicate, meetings work fine with the new meeting bot, etc. I do think the lower barrier to entry (not having to run a bouncer, etc.) has helped bring in some new folks who were not around on IRC. Of course there are some folks still just on IRC.
  • Being able to edit messages is kind of nice, but can be confusing. Most clients assume, when you press up arrow, that you want to edit your last line instead of repeating it. This is not great for bots, or if you wanted to actually say the same thing again with slightly different stuff added. I did find out that nheko lets you do control-p/control-n to get next/previous lines to resend (and up arrow does edit).

The bad:

  • Moderation tools are… poor. You kind of have to depend on sharing lists of spamming users to try and help others block them, but there’s no real flood control or the like. I’m hoping tools will mature here, but it’s really not great.
  • Clients are still under a lot of development. Many support only a subset of things available. Many seem to be falling into the ‘hey this is like a group text with 3 of your buddies’, which may be true sometimes, but the vast majority of my use is talking to communities where there can be 5, 10, more people talking. Little message bubbles don’t really cut it here, I need a lot of context I can see when answering people. I’m hopeful this will improve over time.
  • I get that everything is a room, but it’s a bit weird for direct messages. Someone sends you a message: it makes a room, they join it, then it invites you. But if you aren’t around and the person decides they don’t care anymore and leaves the room, you can’t join; you have to just reject the invite and never know what they were trying to send you.
  • Threading is a nice idea, but it doesn’t seem well implemented on the client side. In Element you have to click on a thread and it’s easy to miss. In Nheko, you click on a thread thingie, but then when you are ‘in’ a thread you only see that, not activity in the main room, which is sometimes confusing.
  • Notifications are a bit annoying. They are actually set on the server end and shared between clients. Sometimes this is not at all what I (or others) want. ie, I get 10 notifications on my phone, I read them and see there are some things I need to do when I get back to my computer. So, I get back later and… how can I find those things again? I have to remember them or hunt around; all the notifications are gone. I really really would love a ‘bookmark this event’ feature so I could go back later, go through those, and answer/address them. Apparently the beeper client has something like this.

Anyhow, that’s probably too much for now. See you all on matrix…

Swimming positions improvements

Posted by Adam Young on February 16, 2024 06:58 PM

I have been getting in the pool for the past couple of months, with a goal of completing a triathlon this summer. Today I got some pointers on things I need to improve in my freestyle technique.


My kick needs some serious work. I am pulling almost exclusively with my arms. As Jimmy (the lifeguard and a long time swim coach) said, “You’re killing yourself.”

He had me do a drill with just a kickboard. “If your thighs are not hurting you are doing it wrong.” I focused on barely breaking the surface with my heels, pointing my toes, and keeping my legs relatively straight…only a slight bend.

Next he had me do a lap with small flippers. “You shouldn’t feel like you are fighting them.” They force you to point your toes. It felt almost too easy. We ran out of time for me to try integrating it into a regular stroke.

For weight lifting, he recommended squats.

For a sprint he recommended 3 kicks per stroke (6 per pair). For longer courses, two kicks per stroke. I think I will shoot for 2/stroke, as I am going for a 1/2 mile total.


Improving my kick will improve my whole body position, including my breathing. It seems I am pulling my head too far out of the water, mainly because my legs are dropping. Although the opposite is true, too: pulling my head too far out of the water is causing my legs to drop. The two should be fixed together.

Arm Entry

One other swimmer at the pool that I asked for advice told me to “lead with my elbows” and then to think about entering the water “like a knife through butter”. Jimmy added that I should be reaching “long and lean.” Like a fast sailboat.

After that, the stroke should go out, and then finish in an S.

I think I need to glide more during the initial entry of the arm into the water.

Jimmy recommended a drill, either using a kickboard or a ring, and holding that out in front, and passing it from hand to hand.

Head position

I should be looking down and to the front, while the top of my head breaks the surface.

Contribute at the Fedora Linux Test Week for GNOME 46

Posted by Fedora Magazine on February 16, 2024 05:15 PM

The Desktop/Workstation team is working on final integration for GNOME 46. This version was just recently released and will arrive soon in Fedora Linux. As a result, the Fedora Desktop and QA teams are organizing a test week from Monday, February 19, 2024 to Monday, February 26, 2024. The wiki page in this article contains links to the test images you’ll need to participate. Please continue reading for details.

GNOME 46 has landed and will be part of the change for Fedora Linux 40. Since GNOME is the default desktop environment for Fedora Workstation, and thus for many Fedora users, this interface and environment merits a lot of testing. The Workstation Working Group and Fedora Quality team have decided to split the test week into two parts:

Monday 19 February through Thursday 22 February, we will be testing GNOME Desktop and Core Apps. You can find the test day page here.

Thursday 22 February through Monday 26 February, the focus will be testing GNOME Apps in general. These will be shipped by default. The test day page is here.

How does a test week work?

A test week is an event where anyone can help ensure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files.
  • Read and follow directions step by step.

Happy testing, and we hope to see you on one of the test days.

Bulgaria’s three open source events in 2024

Posted by Bogomil Shopov - Bogo on February 16, 2024 01:54 PM

Attending FOSS (free and open source software) events is one of the most magical things you can do. If you have never been to one, here are four reasons to participate in an event organized mainly by its community.

  • It’s pretty different from a “commercial” event: the amount of marketing BS and product pitches is minimal; sometimes there are none at all. You get the maximum ratio of practical talks to time.
  • It focuses on freedom: The whole concept of the FOSS is about freedom. You would be able to feel the spirit of that in every action. I can’t explain it with concrete examples, but imagine the feeling of finally doing something you like, such as taking a day off work and going hiking.
  • You learn a lot: The free/libre and open-source software ecosystem changes constantly and fast. The community launches new protocols, technologies, RFCs, and projects daily and modifies them every millisecond. Good luck keeping up with that!
  • You can be part of it: There are many ways to volunteer and help at an event like that. Speaking from personal experience volunteering at FOSDEM: it’s fantastic!

FOSS events in Bulgaria in 2024


Bulgaria is one of the most beautiful countries in Europe, so if you want to combine your appetite for sightseeing, culture, good food, and your curiosity about Free and open-source software, you are in a good place.

Let me help you with three events this year focused entirely on the FOSS movement.

OpenFest 2024

2-3 November
Sofia, Bulgaria

OpenFest is an annual event for open culture, free and open software, and knowledge sharing. The first OpenFest happened in 2003 in Sofia, and since then, it has been held every year – sometimes in several Bulgarian cities simultaneously.

This is the biggest event in the country. Sometimes, it attracts more than 4000 humans. It has several simultaneous tracks covering technical, community, and open art topics. It’s also famous for its practical workshops.

How.Camp 2024

27 July
Gabrovo, Bulgaria

How.Camp Gabrovo is the first edition of a small open and collaborative event in the small mountain town of Gabrovo, famous for its inhabitants’ unique sense of humor. The team behind it has over 20 years of experience organizing technical and FOSS events, so you are in good hands.

The sessions will cover topics like Linux on phones, open-source careers, Thunderbird and how to contribute to its community, and many others still in the works. So keep an eye on it and make sure you get one of the 79 spots left!

TuxCon 2024

11-12 May
Plovdiv, Bulgaria

TuxCon is an annual free and open-source software and hardware event. The entrance is free for all visitors. TuxCon is a community event organized by volunteers. They also have a workshop session, but mainly focused on open hardware. Please keep an eye out for more on their website.

Be free!

The image is by Peter Hadzhiev under CC BY-SA 3.0

Infra & RelEng Update – Week 7 2024

Posted by Fedora Community Blog on February 16, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for the CPE (Community Platform Engineering) Team, as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 12 February – 16 February 2024

<figure class="wp-block-image size-full">I&R infographic</figure>

Infrastructure & Release Engineering

The purpose of this team is to take care of the day-to-day business of the CentOS and Fedora infrastructure and of Fedora release engineering work.
It is responsible for services running in the Fedora and CentOS infrastructure and for preparing things for each new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives


Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).


If you have any questions or feedback, please respond to this report or contact us on -cpe channel on matrix.

The post Infra & RelEng Update – Week 7 2024 appeared first on Fedora Community Blog.

PHP version 8.2.16 and 8.3.3

Posted by Remi Collet on February 16, 2024 05:58 AM

RPMs of PHP version 8.3.3 are available in the remi-modular repository for Fedora ≥ 37 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php83 repository for EL 7.

RPMs of PHP version 8.2.16 are available in the remi-modular repository for Fedora ≥ 37 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php82 repository for EL 7.

emblem-notice-24.png The Fedora 39, 40, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

emblem-notice-24.pngThere is no security fix this month, so no update for version 8.1.27.

emblem-important-2-24.pngPHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

emblem-notice-24.pngInstallation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

or, the old EL-7 way:

yum-config-manager --enable remi-php83
yum update php\*

Parallel installation of version 8.3 as Software Collection

yum install php83

Replacement of default PHP by version 8.2 installation (simplest):

dnf module switch-to php:remi-8.2/common

or, the old EL-7 way:

yum-config-manager --enable remi-php82
yum update php\*

Parallel installation of version 8.2 as Software Collection

yum install php82

And soon in the official updates:

emblem-important-2-24.pngTo be noticed :

  • EL-9 RPMs are built using RHEL-9.3
  • EL-8 RPMs are built using RHEL-8.9
  • EL-7 RPMs are built using RHEL-7.9
  • intl extension now uses libicu73 (version 73.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.9, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 21.12 on x86_64, 19.19 on aarch64
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page


Base packages (php)

Software Collections (php81 / php82 / php83)

Update Alma/RHEL with leapp and LUKS

Posted by Jens Kuehnel on February 15, 2024 10:11 PM

I migrated most of my CentOS 7 and 8 boxes to version 8 of RHEL or AlmaLinux. (The exception is a box that now runs Arch Linux, on an old Atom CPU which does not support RHEL8.)

But the update from RHEL/Alma 8 to 9 has one big problem: leapp does not support installations with LUKS disks (via the actor inhibitwhenluks), and most of my machines have encryption enabled.

I wanted to test what the problem is. So I installed a VM with AlmaLinux 8 and LUKS, deleted the actor inhibitwhenluks, and ran leapp. I expected an unbootable system or some strange problem, but nothing happened. It simply worked.

The default VM setting on Fedora is still BIOS, so maybe the problem only exists with UEFI. So I deleted the VM and reinstalled it with UEFI and virtio-scsi as the disk controller (don’t forget to add a SCSI controller of type virtio-scsi). I installed again with root on LUKS and ran leapp. Again, no problem.

So, from my short test with two virtual machines, it worked better than the warning suggested. I can only explain it as an abundance of caution on the Red Hat side, plus a simple copy of the files on the Alma side. That would also explain the error that prohibits the upgrade on the Alma side: leapp complains that the kernel is not from Red Hat, and the upgrade does not finish.

This is what I did on RHEL and Alma:

  • delete the actor that prohibits LUKS:
    rm -rf /usr/share/leapp-repository/repositories/system_upgrade/common/actors/inhibitwhenluks

For Alma you need to allow non RedHat Kernel to be installed:

  • remove the abort in /usr/share/leapp-repository/repositories/system_upgrade/common/actors/kernel/checkinstalledkernels/libraries/checkinstalledkernels.py:
    sed -i -e 's:raise StopActorExecutionError:#raise StopActorExecutionError:g' /usr/share/leapp-repository/repositories/system_upgrade/common/actors/kernel/checkinstalledkernels/libraries/checkinstalledkernels.py

That fixed this problem, at least for me. Of course the other problems reported by “leapp preupgrade” have to be handled. I only tested it with a fresh install; I will update my home server in the next couple of days/weeks/months.

P.S.: Don’t forget to clean up old GPG keys that use SHA1, to fix the problem “Hash algorithm SHA1 not available.”. I delete all gpg-pubkey packages, because yum will reinstall the needed keys anyway.

The syslog-ng Insider 2024-02: OpenObserve; configuration check; build services;

Posted by Peter Czanik on February 15, 2024 12:04 PM

The February syslog-ng newsletter is now on-line:

  • Version 4.5.0 of syslog-ng is now available with OpenObserve JSON API support
  • Syslog-ng PE can now send logs to Google BigQuery
  • Syslog-ng can now do a full configuration check
  • How build services make life easier for upstream developers

It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2024-02-openobserve-configuration-check-build-services


syslog-ng logo


Bisecting Regressions in Fedora Silverblue

Posted by Jiri Eischmann on February 15, 2024 10:00 AM

I know that some blog posts on this topic have already been published, but nevertheless I have decided to share my real-life experience of bisecting regressions in Fedora Silverblue.

My work laptop, Dell XPS 13 Plus, has an Intel IPU6 webcam. It still lacks upstream drivers, so I have to use RPMFusion packages containing Intel’s software stack to ensure the webcam’s functionality.

However, on Monday, I discovered that the webcam was not working. Just last week, it was functioning, and I had no idea what could have broken it. I don’t pay much attention to updates; they’re staged automatically and applied with the next boot. Fortunately, I use Fedora Silverblue, where problems can be identified using the bisection method.

Silverblue utilizes OSTree, essentially described as Git for the operating system. Each Fedora version is a branch, and individual snapshots are commits. You update the system by moving to newer commits and upgrade by switching to the branch with the new version. Silverblue creates daily snapshots, and since OSTree allows you to revert to any commit in the repository, you can roll back the system to any previous date.

I decided to revert to previous states and determine when the regression occurred. I downloaded commit metadata for the last 7 days:

sudo ostree pull --commit-metadata-only --depth 7 fedora fedora/39/x86_64/silverblue

Then, I listed the commits to get their labels and hashes:

sudo ostree log fedora:fedora/39/x86_64/silverblue

Subsequently, I returned the system to the beginning of the previous week:

rpm-ostree deploy 39.20240205.0

After rebooting, I found that the webcam was working, indicating that something had broken it last week. I decided to halve the interval and return to Wednesday:

rpm-ostree deploy 39.20240207.0

In the Wednesday snapshot, the webcam was no longer functioning. Now, I only needed to determine whether it was broken by Tuesday’s or Wednesday’s update. I deployed the Tuesday snapshot:

rpm-ostree deploy 39.20240206.0

I found out that the webcam was still working in this snapshot. It was broken by the Wednesday’s update, so I needed to identify which packages had changed. I used the hashes from one of the outputs above:

rpm-ostree db diff ec2ea04c87913e2a69e71afbbf091ca774bd085530bd4103296e4621a98fc835 fc6cf46319451122df856b59cab82ea4650e9d32ea4bd2fc5d1028107c7ab912

ostree diff commit from: ec2ea04c87913e2a69e71afbbf091ca774bd085530bd4103296e4621a98fc835
ostree diff commit to: fc6cf46319451122df856b59cab82ea4650e9d32ea4bd2fc5d1028107c7ab912
kernel 6.6.14-200.fc39 -> 6.7.3-200.fc39
kernel-core 6.6.14-200.fc39 -> 6.7.3-200.fc39
kernel-modules 6.6.14-200.fc39 -> 6.7.3-200.fc39
kernel-modules-core 6.6.14-200.fc39 -> 6.7.3-200.fc39
kernel-modules-extra 6.6.14-200.fc39 -> 6.7.3-200.fc39
llvm-libs 17.0.6-2.fc39 -> 17.0.6-3.fc39
mesa-dri-drivers 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-filesystem 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-libEGL 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-libGL 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-libgbm 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-libglapi 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-libxatracker 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-va-drivers 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-vulkan-drivers 23.3.4-1.fc39 -> 23.3.5-1.fc39
qadwaitadecorations-qt5 0.1.3-5.fc39 -> 0.1.4-1.fc39
uresourced 0.5.3-5.fc39 -> 0.5.4-1.fc39

The kernel upgrade from 6.6 to 6.7 was the logical suspect. I informed a colleague, who maintains the RPMFusion packages with IPU6 webcam support, that the 6.7 kernel broke the webcam support for me. He asked for the model and informed me that it has an Intel VSC chip, which got a newly added driver in the 6.7 kernel. However, that driver conflicts with Intel’s software stack in the RPMFusion packages, so he asked the Fedora kernel maintainer to temporarily disable the Intel VSC driver. This change will come with the next kernel update.

Until the update arrives, I decided to stay on the last functional snapshot from the previous Tuesday. To achieve this, I pinned it in OSTree. Because the system was currently running on that snapshot, all I had to do was:

sudo ostree admin pin 0

And that’s it. Just a small real-life example of how to identify when and what caused a regression in Silverblue and how to easily revert to a version before the regression and stay on it until a fix arrives.
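The halving done by hand above is just a binary search over snapshots. As a sketch (pure Python; the snapshot labels mirror the post, and the `is_working` oracle is a made-up stand-in for the deploy-reboot-test cycle):

```python
# Binary search for the first broken snapshot, assuming the oldest snapshot
# works, the newest is broken, and the breakage is monotone (once broken,
# every later snapshot is broken too).

def first_broken(snapshots, is_working):
    """Return the first snapshot for which is_working() is False."""
    lo, hi = 0, len(snapshots) - 1  # invariant: snapshots[lo] works, snapshots[hi] is broken
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_working(snapshots[mid]):
            lo = mid
        else:
            hi = mid
    return snapshots[hi]

snapshots = ["39.20240205.0", "39.20240206.0", "39.20240207.0", "39.20240208.0"]
# Pretend the regression landed in the 2024-02-07 snapshot, as in the post.
broken_since = {"39.20240207.0", "39.20240208.0"}
print(first_broken(snapshots, lambda s: s not in broken_since))  # → 39.20240207.0
```

With real snapshots, the oracle step is the expensive part (a deploy plus a reboot), which is exactly why halving the interval beats checking day by day.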

Introduction of deny rules for SELinux policy

Posted by Lukas Vrabec on February 14, 2024 04:04 PM

After a few years, I’m glad I can share some SELinux updates. 🙂

For quite some time, I’ve encountered inquiries regarding the potential inclusion of a feature in the SELinux userspace that would facilitate the removal of SELinux rules from the system. Such functionality would indeed be beneficial, considering that SELinux policies utilized in operating systems like Fedora, CentOS, and RHEL are intricate, allowing numerous common operations. However, for specific use cases, this might not align with your preferences. Instead, you might aim to tighten the policy further.

So, what are the options? Let’s discuss several of them:

  1. Implementing a completely new SELinux policy for your OS. This approach is extremely time-consuming and complex, and it’s unlikely you’ll have the resources available to pursue it.
  2. Replacing the affected SELinux policy module with your modifications. This solution also demands proficiency in SELinux policy writing, and it could prove challenging to remove just one rule within the policy module for replacement.
  3. Creating a new SELinux policy module with deny-type rules. With the new functionality introduced in the SELinux userspace since version 3.6, you can craft a custom module with rules incorporating the “deny” keyword to selectively drop specific rules. Let’s give it a try!

Let’s concentrate on the following use-case: You’re employing the confined user “user_u” as a tester on a Linux system. This confined user is allowed to utilize generic system tools, but you specifically want to prevent SSH connections to other systems. How can this be achieved using the new feature?

“tester” user is confined by SELinux as “user_u”:

[root@fedorabox ~]# semanage login -l

Login Name           SELinux User         MLS/MCS Range        Service

__default__          unconfined_u         s0-s0:c0.c1023       *
root                 unconfined_u         s0-s0:c0.c1023       *
tester               user_u               s0                   *

As you can see without any SELinux modification you’re able to use ssh as user “tester”:

[tester@fedorabox ~]$ ssh tester@localhost
[tester@fedorabox ~]$

Let’s deny it!

[root@fedorabox ~]# cat << EOF > denyrules.cil
(deny user_t ssh_exec_t (file (execute open getattr read)))
EOF
[root@fedorabox ~]# semodule -i denyrules.cil
[root@fedorabox ~]# getenforce
Enforcing

We’ve removed the allow rule for user_t to execute any binary file labeled ssh_exec_t, thus preventing user_t from executing /usr/bin/ssh.

[tester@fedorabox ~]$ id -Z
user_u:user_r:user_t:s0
[tester@fedorabox ~]$ ssh localhost
-bash: /usr/bin/ssh: Permission denied

This is just a simple example of how to restrict certain operations for a confined SELinux user, which are allowed in the default distribution policy, to tailor it to your specific needs. In general, I believe the “deny rules” feature will enable system administrators to customize the SELinux policy more precisely for their requirements without the need to write custom SELinux policy modules from scratch.

Cockpit 311

Posted by Cockpit Project on February 14, 2024 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from cockpit-machines 307:

Machines: Replace SPICE for multiple machines

“Replace SPICE devices” was introduced in Cockpit 310 to adjust VMs using SPICE to run on a host without SPICE capability. When opened from the context menu of a VM on the overview list, the dialog now offers to convert multiple VMs in a single step.

screenshot of the "Replace SPICE devices" dialog with options to include multiple machines

Machines: Add storage pool support for pre-formatted block devices

Virtual machine storage pools now support pre-formatted block devices. This is a more robust alternative to “physical disk devices”, as it avoids the guest OS seeing the device on its own, which may lead to unintentional reformatting of a raw disk device.

Virtual machine creation modal dialog with "pre-formatted block device" selected

Thanks to Haruka Ohata for this contribution!

Try it out

cockpit-machines 307 is available now:

CentOS Connect 2024 report

Posted by Alexander Bokovoy on February 13, 2024 07:25 AM

February 1st-4th I participated in two events in Brussels: CentOS Connect and FOSDEM. FOSDEM is getting close to its quarter-century anniversary next year. With 67 mini-conferences and another 30 events around it, it is considered one of the largest conferences in Europe on any topic. This report is about CentOS Connect.

CentOS Connect

CentOS Connect was a two-day event preceding FOSDEM. Organized by the CentOS project, it brought together contributors from multiple projects around CentOS Stream upstreams (such as Fedora Project) and downstreams (from Red Hat Enterprise Linux, AlmaLinux, Rocky Linux, and others), long-time users and community members. A lot of talks were given on both days and several special interest groups (SIGs) ran their workshops on the first day.

CentOS Integration SIG

I attended a workshop organized by the CentOS Integration SIG. The focus of this group is to define and run integration tests around the delivery of CentOS Stream to the public. While CentOS Stream composes are built using the messages on how the next RHEL minor version is composed, after being thoroughly tested by Red Hat’s quality engineering group, there is value in other community members testing CentOS Stream composes independently, as their use cases might differ. Right now the set of tests is defined as the t_functional suite maintained by the CentOS Core SIG, but this suite is not enough for complex scenarios.

The Integration SIG is looking at improving the situation. Two projects were present and interested in this work: RDO (RPM-based distribution of OpenStack) and RHEL Identity Management. Both teams have to deal with coordinated package deliveries across multiple components and both need to get multi-host deployments tested. RDO can be treated as a product built on top of CentOS Stream where its own RDO deliveries are installed on top of CentOS Stream images. RHEL Identity Management (IdM), on the other hand, is a part of RHEL. It is famously known as a ‘canary in the RHEL mine’ (thanks to Stephen Gallagher’s being poetic a decade ago): if anything is broken in RHEL, chances are it will be visible in RHEL IdM testing. With RHEL IdM integration scope expanding to provide passwordless desktop login support, this becomes even more important and also requires multi-host testing.

The CentOS Integration SIG’s proposal to address these requirements is interesting. Since the CentOS Stream compose event is independent of the tests, Aleksandra Fedorova and Adam Samalik propose treating the compose event like a normal development workflow: producing a CentOS Stream compose would generate a merge request, with the metadata associated with it, against a particular git repository. Various bots and individuals could then react to this merge request. Since the CentOS Stream compose is public, it is already accessible to third-party developers to run their tests and report back the results. This way, promotion of the compose to the CentOS Stream mirrors can be gated by all parties, which will help keep already published composes in a non-conflicting state.

Since most of the projects involved already have their own continuous integration systems, adding another trigger for those would be trivial. For RHEL IdM and its upstream components (FreeIPA, SSSD, 389-ds, Dogtag PKI, etc.) it would also be possible finally to react to changes to CentOS Stream well ahead of time too. In the past this was complicated by a relative turbulence in the CentOS Stream composes early in RHEL next minor release development time when everybody updates their code and stabilization needs a lot of coordination. I am looking forward to Aleksandra’s proposal to land in the Integration SIG’s materials soon.

Update: Aleksandra has published her proposal at the Fedora Discussion board

CentOS Stream and EPEL

Two talks, by Troy Dawson and Carl George, described the current state of the EPEL repository and its future interactions with CentOS Stream 10. Troy’s talk was fun, as usual: a lot of statistics obtained from the DNF repository trackers show that old habits are hard to get rid of, and moving forward is a struggle for many users despite the security and supportability challenges of older software. Both CentOS 7 and CentOS 8 Stream reach end of life in 2024, which adds plenty of pressure of its own.

There are interesting statistics on EPEL’s evolution. In August 2023, at the Flock to Fedora conference, Troy and Carl pointed out that 727 packagers maintained 7298 packages in EPEL 7, 489 packagers handled 4968 packages in EPEL 8, and 396 packagers were handling 5985 packages in EPEL 9. Half a year later we have 7874 packages in EPEL 7, 5108 packages in EPEL 8, and 6868 packages in EPEL 9. The pace seems to be picking up for EPEL 9 as the end-of-life dates for the older versions approach. Since soon only EPEL 9 and EPEL 10 will be actively built, more packagers will probably be active in them as well.

EPEL 10 is coming soon. It will bring a slight difference in package suffixes and an overall reduction of complexity for packagers. It is great to see how closely EPEL work tracks ELN activities in Fedora. One thing worth noting is that every Fedora package maintainer is also a potential EPEL maintainer, because EPEL reuses the Fedora infrastructure for package maintenance. Even if someone is not maintaining EPEL branches of their Fedora packages (I am not doing that myself – my packages are mostly in RHEL), it allows easy jump-in and collaboration. After all, if packages aren’t in RHEL but are in Fedora, their EPEL presence is just one git branch (and one Fedora infrastructure ticket) away.

20 years of CentOS project

The first day of CentOS Connect ended with a party to celebrate 20 years of the CentOS Project. The first release (“CentOS version 2”) went out on May 14th, 2004, but since CentOS Connect is the closest big event organized by the project this year, getting a lot of contributors together to celebrate the anniversary seemed appropriate. A huge cake was presented, so big that it couldn’t be finished during the party. It was delicious (a lot of Belgian chocolate!), and the next day’s coffee breaks let me enjoy it some more.

FreeIPA and CentOS project infrastructure

My own talk was originally planned to gather feedback from all the projects that build on top of CentOS, as they use very similar infrastructure. The CentOS Project infrastructure is shared with the Fedora Project and is built around FreeIPA as a backend and Noggin as a user management frontend. I asked in advance for feedback from the Fedora, CentOS, AlmaLinux, and Rocky Linux infrastructure teams and didn’t get much, which prompted my own investigation. That is not an easy job, since most organizations aren’t really interested in telling the world the details of their core infrastructure. My hope was to see real-world usage and maybe take some lessons from it back to the development teams.

While working on my talk, we also experienced an outage in Fedora infrastructure related to the upgrade of the FreeIPA setup used there. My team has been helping Fedora infrastructure administrators so I finally got the feedback I was looking for. That led to several fixes upstream and they have recently been backported to RHEL and CentOS Stream as well. However, the overall lack of feedback is concerning – or at least I thought so.

During CentOS Connect I had the opportunity to discuss this with the sysadmins of both the AlmaLinux (Jonathan) and Rocky Linux (Luis) projects. “It just works” is a common response I get. Well, that’s nice to hear, but what was more exciting is that these projects went a bit further than we did in Fedora and CentOS Stream with their environments. Luis has published the whole Rocky Linux infrastructure code responsible for the FreeIPA deployment. It is based heavily on ansible-freeipa and reuses the same components we developed for the Fedora infrastructure. Rocky also runs FreeIPA tests in an OpenQA instance, similarly to Fedora, and hopefully Luis will contribute more tests to cover passwordless login, already available in CentOS Stream. That would be a very welcome contribution in light of the Integration SIG activities, helping us to test more complex scenarios community-wide. The AlmaLinux infrastructure also uses ansible-freeipa, but the code is not publicly available, for whatever reason. Jonathan promised to rectify this.

Hallway tracks

I had a number of discussions with people from all kinds of CentOS communities. FreeIPA sits in a unique position, providing a secure infrastructure to run POSIX workloads while linking it with modern requirements for web authentication. Our effort to improve passwordless login in Linux environments, reusing Kerberos to propagate authorization state across multiple machines, helps to solve these ‘between the worlds’ problems. Many academic institutions have noticed this already, and many of my hallway conversations related to it.

Next Open NeuroFedora meeting: 12 February 1300 UTC

Posted by The NeuroFedora Blog on February 12, 2024 09:42 AM
Photo by William White on Unsplash

Photo by William White on Unsplash.

Please join us at the next regular Open NeuroFedora team meeting on Monday 12 February at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance). Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 2024-02-12'
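If you prefer a script, the same conversion can be done with Python’s standard-library zoneinfo module. A small sketch (the Brussels zone here is just an example; substitute any IANA zone name):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The meeting time: 12 February 2024, 13:00 UTC
meeting = datetime(2024, 2, 12, 13, 0, tzinfo=timezone.utc)

# Convert to a local timezone of your choice (IANA name)
local = meeting.astimezone(ZoneInfo("Europe/Brussels"))
print(local.strftime("%Y-%m-%d %H:%M %Z"))  # → 2024-02-12 14:00 CET
```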

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

JSON and JSONB support in SQLite

Posted by Fedora Magazine on February 12, 2024 08:00 AM

This article provides insight into SQLite’s support for JSON (JavaScript Object Notation) and its latest addition, JSONB. It explains how SQLite facilitates handling JSON and JSONB data, and what the differences between JSON and JSONB are. Additionally, the article provides practical “hello world” examples.

What is JSON

JSON is a format for structuring data in a way that is easily readable by both computers and humans. JSON can encode four primitive types of data: strings, numbers, booleans, and null. It can also include non-primitive types, namely objects and arrays.

In addition there is the JSON5 format – offering extended syntax to cover more use cases for JSON, like using single quotes, no quotes, or trailing commas.

JSON support in SQLite

At first, SQLite provided functions for JSON handling as an opt-in extension, controlled by the “-DSQLITE_ENABLE_JSON1” compile-time option. Since SQLite version 3.38.0, JSON1 support is built in and can be omitted at compile time by adding the “-DSQLITE_OMIT_JSON” option. SQLite has supported the JSON5 standard since version 3.42.0. While SQLite can process JSON5 structures, its functions consistently return JSON in its JSON1 form.

SQLite, unlike other database engines, is flexibly typed, which is important for handling JSON. When creating a table, you can omit specifying the data type of each column entirely. SQLite operates with so-called “storage classes”, where input is always classified into one of five categories: NULL, INTEGER, REAL, TEXT, or BLOB. These classes are internally divided into more granular types, aligning more naturally with the C language, but this granularity remains hidden from the user’s view. JSON naturally falls under the TEXT class. The following examples outline how you can store a JSON structure and query it in various ways within an SQLite database:
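This flexible typing is easy to observe from Python’s built-in sqlite3 module: a column declared without any type happily accepts a JSON string, and typeof() reports the TEXT storage class. A small sketch (the table and column names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# No data type given for the column -- SQLite classifies each value on insert
con.execute("CREATE TABLE demo(payload)")
con.execute("INSERT INTO demo VALUES (?)", ('{"country": "Luxembourg"}',))

# typeof() reveals the storage class the value ended up in
storage_class, = con.execute("SELECT typeof(payload) FROM demo").fetchone()
print(storage_class)  # → text
```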

Practical examples of JSON handling in SQLite

This section will use the following JSON for demonstrative purposes:

$ JSON1='{
  "country": "Luxembourg",
  "capital": "Luxembourg City",
  "languages": [ "French", "German", "Luxembourgish" ]
}'
$ JSON2='{
  "country": "Netherlands",
  "capital": "Amsterdam",
  "languages": [ "Dutch" ]
}'

Parsing JSON before it enters a database

JSON can be parsed programmatically or by existing CLI tools, such as sqlite-utils, before being inserted into the database. In this process, the tool parses the JSON, organizing it into different columns within a table. The external tool processes the JSON to SQL commands directly inserting JSON fields into the table:

$ echo $JSON1 | sqlite-utils insert states.db states -
$ echo $JSON2 | sqlite-utils insert states.db states -
sqlite> .open states.db
sqlite> SELECT * FROM states;
Luxembourg|Luxembourg City|["French", "German", "Luxembourgish"]
Netherlands|Amsterdam|["Dutch"]
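This kind of pre-parsing can be sketched in a few lines of Python with the standard json and sqlite3 modules. This is a simplified illustration of the approach, not sqlite-utils itself; the in-memory database and column names are assumptions for the example:

```python
import json
import sqlite3

JSON1 = '{"country": "Luxembourg", "capital": "Luxembourg City", "languages": ["French", "German", "Luxembourgish"]}'

record = json.loads(JSON1)  # parse before the data enters the database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE states(country TEXT, capital TEXT, languages TEXT)")
# Lists have no native SQLite type, so re-serialize them as a JSON string
con.execute(
    "INSERT INTO states VALUES (?, ?, ?)",
    (record["country"], record["capital"], json.dumps(record["languages"])),
)
print(con.execute("SELECT * FROM states").fetchone())
# → ('Luxembourg', 'Luxembourg City', '["French", "German", "Luxembourgish"]')
```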

Storing whole JSON into one column

Users can store JSON as it is, or use the json() function, which converts the JSON into its minified form before storing it. The json() function also throws an error if the JSON is invalid. However, it is advisable to use the json_valid() or json_error_position() function for an explicit validity test beforehand. The following example shows both cases:

sqlite> CREATE TABLE states(data TEXT);
sqlite> SELECT json_valid('{"country":"Luxembourg","capital":"Luxembourg City","languages":["French","German","Luxembourgish"]}');
1
sqlite> INSERT INTO states VALUES ('{"country":"Luxembourg","capital":"Luxembourg City","languages":["French","German","Luxembourgish"]}');
sqlite> INSERT INTO states VALUES (json('{"country":"Netherlands","capital":"Amsterdam","languages":["Dutch"]}'));
sqlite> SELECT * FROM states;
{"country":"Luxembourg","capital":"Luxembourg City","languages":["French","German","Luxembourgish"]}
{"country":"Netherlands","capital":"Amsterdam","languages":["Dutch"]}

Query JSON field as standard text

JSON stored in the TEXT data class can be selected without any added overhead, like any other TEXT field. It is up to the application using this approach to correctly parse the data out of the obtained JSON structure.

sqlite> SELECT data FROM states;
{"country":"Luxembourg","capital":"Luxembourg City","languages":["French","German","Luxembourgish"]}
{"country":"Netherlands","capital":"Amsterdam","languages":["Dutch"]}

Using JSON built-in functions

Built-in JSON functionality provides several handy functions for querying specific elements of the stored JSON. json_extract() returns one or more values from the provided JSON. The “->” and “->>” operators are similar, but there are semantic differences between json_extract(), “->”, and “->>”. The json_extract() function returns a JSON structure if the queried data is a JSON object or array, and the SQL representation otherwise. The “->” operator always returns the JSON representation, while “->>” always returns the SQL representation of the queried structure.

Using the JSON functions to handle JSON data leaves the parsing to the database, so the user application no longer needs to expend any effort on it. Note, however, that SQLite must parse the JSON stored as a string each time an element is accessed.

Querying specific column and row can look like this:

sqlite> SELECT data->>'country' FROM states WHERE data->>'capital'=='Amsterdam';
Netherlands
sqlite> SELECT data->'country' FROM states WHERE data->>'capital'=='Amsterdam';
"Netherlands"
sqlite> SELECT json_extract(data,'$.country') FROM states WHERE json_extract(data, '$.capital')=='Amsterdam';
Netherlands
sqlite> SELECT json_extract(data, '$.languages') FROM states;
["French","German","Luxembourgish"]
["Dutch"]
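The JSON-versus-SQL representation difference is easy to check from Python’s sqlite3 module as well. A sketch, assuming an SQLite build with the JSON functions enabled (the default since 3.38, and commonly compiled in before that):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE states(data TEXT)")
con.execute(
    "INSERT INTO states VALUES (?)",
    ('{"country":"Netherlands","capital":"Amsterdam","languages":["Dutch"]}',),
)

# A scalar comes back in its SQL representation...
country, = con.execute("SELECT json_extract(data, '$.country') FROM states").fetchone()
print(country)  # → Netherlands

# ...while an object or array comes back as minified JSON text
langs, = con.execute("SELECT json_extract(data, '$.languages') FROM states").fetchone()
print(langs)  # → ["Dutch"]
```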

Other JSON functions in SQLite

Help with creating JSON structures is available via functions like json_array() and json_object(). Functions for adjusting an existing JSON structure are json_insert(), json_replace(), and json_set(). A helpful function for debugging is json_type(), which returns the type of a JSON element. The SQLite documentation provides more detailed information about these and many more JSON functions.
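A quick sketch of a few of these helpers, again through Python’s sqlite3 module (assuming a build with the JSON functions enabled; the values are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# json_object() and json_array() build JSON from SQL values
built, = con.execute(
    "SELECT json_object('country', 'Netherlands', 'languages', json_array('Dutch'))"
).fetchone()
print(built)  # → {"country":"Netherlands","languages":["Dutch"]}

# json_set() adds or replaces an element; new keys are appended at the end
updated, = con.execute("SELECT json_set(?, '$.capital', 'Amsterdam')", (built,)).fetchone()
print(updated)  # → {"country":"Netherlands","languages":["Dutch"],"capital":"Amsterdam"}

# json_type() reports the type of an element
kind, = con.execute("SELECT json_type(?, '$.languages')", (built,)).fetchone()
print(kind)  # → array
```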


JSONB support in SQLite

A big new feature introduced in the SQLite 3.45.0 release is SQLite JSONB. The aim of this feature is to speed up JSON manipulation, since storing JSON as a BLOB saves the time normally spent parsing standard JSON stored as a string. A JSONB object consists of a header and a body. The header of each element stores its properties, such as size and type. Knowing the size of a JSON element speeds up its parsing by eliminating the need to search for the next delimiter. SQLite offers various functions for JSONB handling. Many standard JSON functions have a JSONB equivalent, like the jsonb() function, which returns a JSONB object, or jsonb_extract(), which extracts values from a JSONB blob. Many standard JSON functions can also take a JSONB blob as a parameter.

Practical examples of JSONB handling in SQLite

Inserting a JSONB blob

When storing JSON as a blob, the jsonb() function can be used. This function takes valid JSON as a parameter and returns JSONB binary data. It can also take valid JSONB as a parameter, in which case it simply returns it unchanged.

sqlite> CREATE TABLE states(data BLOB);
sqlite> INSERT INTO states VALUES(jsonb('{"country":"Luxembourg","capital":"Luxembourg City","languages":["French","German","Luxembourgish"]}'));
sqlite> INSERT INTO states VALUES(jsonb('{"country":"Netherlands","capital":"Amsterdam","languages":["Dutch"]}'<bdo dir="ltr" lang="en">));</bdo>

Retrieving JSONB blob

If we choose the standard way for retrieving JSONB, its binary representation is returned.

sqlite> SELECT data FROM states;
�wcountryLuxembourgwcapital�Luxembourg Citylanguages�gFrenchgGerman�Luxembourgish

It can be more useful to retrieve the text form of stored JSONB. For this the json() and json_extract() functions can be used.

sqlite> SELECT json(data) FROM states;
{"country":"Luxembourg","capital":"Luxembourg City","languages":["French","German","Luxembourgish"]}
{"country":"Netherlands","capital":"Amsterdam","languages":["Dutch"]}
sqlite> SELECT json_extract(data, '$.languages') FROM states;
["French","German","Luxembourgish"]
["Dutch"]

On the other hand, jsonb_extract() returns JSONB binary data if the queried element is a JSON object or array, and plain values otherwise.

sqlite> SELECT jsonb_extract(data, '$.languages') FROM states;
sqlite> SELECT jsonb_extract(data, '$.capital') FROM states;
Luxembourg City

In conclusion

This article presented the handling of JSON in the SQLite database, demonstrating various approaches for storing and retrieving JSON data. Additionally, it briefly outlined SQLite’s new feature, SQLite JSONB, with illustrative examples.

Week 6 in Packit

Posted by Weekly status of Packit Team on February 12, 2024 12:00 AM

Week 6 (February 6th – February 12th)

  • Packit now searches for the Bugzilla bug about a new release created by Upstream Release Monitoring and references it each time it syncs the release downstream. (packit#2229)
  • We have introduced a new CLI command, packit dist-git init, that initializes the Packit configuration for release automation in a dist-git repository. (packit#2225)
  • We have introduced new configuration options require.label.present and require.label.absent. By configuring these you can specify labels that need to be present or absent on a pull request for Packit to react to it. (packit-service#2333)
  • The interface for labels was unified, and the labels property of PullRequest and Issue now returns a list of PRLabel and IssueLabel objects respectively. (ogr#839)

Episode 415 – Reducing attack surface for less security

Posted by Josh Bressers on February 12, 2024 12:00 AM

Josh and Kurt talk about a blog post explaining how to create a very, very small container image. Generally in the world of security, less is more, but it’s possible to remove too much. A lot of today’s security tooling relies on certain things existing in a container image; if we remove them, we could actually end up with worse security than if we had left them in. It’s a weird topic, but probably a pretty important one.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3316-3" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_415_Reducing_attack_surface_for_less_security.mp3?_=3" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_415_Reducing_attack_surface_for_less_security.mp3</audio>

Show Notes

coverity 2022.6.0 and LibreOffice

Posted by Caolán McNamara on February 11, 2024 06:02 PM

After a long slog since November, when the previous version of Coverity was EOLed and we had to start using 2022.6.0 with its new suggestions for std::move etc., LibreOffice is now finally back to a zero-warnings Coverity state.

New and old apps on Flathub

Posted by Bastien Nocera on February 09, 2024 02:44 PM

3D Printing Slicers

 I recently replaced my Flashforge Adventurer 3 printer that I had been using for a few years as my first printer with a BambuLab X1 Carbon, wanting a printer that was not a “project” so I could focus on modelling and printing. It's an investment, but my partner convinced me that I was using the printer often enough to warrant it, and told me to look out for Black Friday sales, which I did.

The hardware-specific slicer, Bambu Studio, was available for Linux, but only as an AppImage, with many people reporting crashes on startup, non-working video live view, and other problems that the hardware maker tried to work around by shipping separate AppImage variants for Ubuntu and Fedora.

After close to 150 patches to the upstream software (which, in hindsight, I could probably have avoided by compiling the C++ code with LLVM), I managed to “flatpak” the application and make it available on Flathub. It has reached 3k installs in about a month, which is quite a bit for a niche piece of software.

Note that if you click the “Donate” button on the Flathub page, it will take you to a page where you can <s>feed my transformed fossil fuel addiction</s> buy filament for repairs and printing perfectly fitting everyday items, rather than bulk importing them from the other side of the planet.


Preparing a Game Gear consoliser shell

I will continue to maintain the FlashPrint slicer for FlashForge printers, installed by nearly 15k users, although I have now enabled automated updates and will no longer be updating the release notes, which required manual intervention.

FlashForge have unfortunately never answered my queries about making this distribution of their software official (and fixing the crash when using a VPN...).


As I was updating the Rhythmbox Flatpak on Flathub, I realised that it just reached 250k installs, which puts the number of installations of those 3D printing slicers above into perspective.


The updated screenshot used on Flathub

Congratulations, and many thanks, to all the developers that keep on contributing to this very mature project, especially Jonathan Matthew who's been maintaining the app since 2008.

Introducing Fedora Atomic Desktops

Posted by Fedora Magazine on February 09, 2024 12:00 PM

We are happy to announce the creation of a new family of Fedora Linux spins: Fedora Atomic Desktops! As Silverblue has grown in popularity, we’ve seen more of our mainline Fedora Linux spins make the jump to offer a version that implements rpm-ostree. It’s reached the point where it can be hard to talk about all of them at the same time. Therefore we’ve introduced a new brand that will serve to simplify how we discuss rpm-ostree and how we name future atomic spins.

Some may note that this is more of a reintroduction. Project Atomic was started 10 years ago with the development of Atomic Host. As the team stated back then, “the Atomic Host comprises a set of packages from an operating system (…) , pulled together with rpm-ostree to create a filesystem tree that can be deployed, and updated, as an atomic unit.” In 2018 we saw the start of Fedora Atomic Workstation, a desktop client implementation using GNOME, which became Silverblue a year later.

2021 saw the introduction of Kinoite in Fedora 35. Things seemed quiet for a while until last year, when we saw the release of two more rpm-ostree spins – Sericea in Fedora 38 and Onyx in Fedora 39.

Why a New Brand?

That leads to the first reason for needing to adjust our branding: more spins may come. We have four traditional Fedora Linux spins that do not yet have atomic variants. Some of these desktop environments are being experimented with, like Vauxite (Xfce) from within the Universal Blue custom images project. There are other desktop environments, like Pantheon or the upcoming COSMIC, that we would love to welcome to the community if contributors would like to make that happen. As this group of spins grows, we need to organize them under one umbrella.

Secondly, not having a unified way to refer to our atomic spins makes it harder to talk about them. Have you ever tripped over yourself trying to mention all four atomic spins, or reached for a shorthand (i.e. “Silverblue and friends”)? It’s a byproduct of how unwieldy it is to have one spin, Silverblue, represent three others while also meaning something specific: an rpm-ostree implementation of Fedora Linux Workstation. There is also confusion about which aspects of these spins are shared. For example, some folks may be looking for documentation on Kinoite, not realizing that an article about Silverblue also applies to their problem. Using so many keywords when you’re looking for information on the one aspect they all share is inefficient.

Thirdly, the new brand is also a more accurate way of talking about how rpm-ostree works. Fedora Atomic spins are not actually immutable: there are ways to get around the read-only aspects of the implementation, even though it is much harder. The nature of the OS, where updates are only applied when they build successfully and you can roll back or rebase between core host systems, is better described by atomicity than immutability. Atomic is also how many of the contributors who work on rpm-ostree prefer to talk about it! Rebranding provides an opportunity to change the language surrounding this technology.

The Good Part

Fedora Atomic Desktops is made up of four atomic spins:

Silverblue and Kinoite retained their names because of brand recognition and being around for much longer. There are many articles and videos made with the Silverblue or Kinoite brands, and we don’t want to waste those resources by making them harder to find with a rebrand. Sericea and Onyx are much newer and both SIGs wanted to switch to the new naming convention.

Going forward, new atomic spins will use the ‘Fedora (DE name) Atomic’ format to keep things simple and clear. No more questions about which name refers to what desktop environment. No more mispronunciations. Much more clarity on why these Fedora spins are different from the regular spins.

The new umbrella brand also gives us a new name to put alongside their rpm-ostree cousins! Fedora Atomic Desktops live alongside Fedora CoreOS and Fedora IoT as they all use rpm-ostree in serving different needs.

The Fedora Atomic Desktops SIG, and many in the community, are very excited about this change. We hope it makes talking and learning about this kind of operating system easier.

Click here to learn more about Fedora Atomic. Let’s start using #FedoraAtomic to streamline our conversations on social media too!

Infra & RelEng Update – Week 6 2024

Posted by Fedora Community Blog on February 09, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for the CPE (Community Platform Engineering) Team, as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want a quick overview of what we did, look at the infographic. If you are interested in more in-depth details, look below it.

Week: 05 February – 09 February 2024

I&R infographic

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding the CentOS and Fedora infrastructure and Fedora release engineering work.
It is responsible for services running in the Fedora and CentOS infrastructure and for preparing things for new Fedora releases (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering


Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Linux (OL).

  • Presented “EPEL 10 Overview” at CentOS Connect
  • Staffed Fedora and CentOS booths at FOSDEM with multiple EPEL discussions

If you have any questions or feedback, please respond to this report or contact us in the -cpe channel on Matrix.

The post Infra & RelEng Update – Week 6 2024 appeared first on Fedora Community Blog.

NixOS Grafana Tailscale Auth

Posted by Zach Oglesby on February 08, 2024 11:11 PM

It took me a while to figure out how to configure an auth proxy for Grafana in a NixOS config; it turns out it was really easy. I was doing everything right except wrapping auth.proxy in quotes.

services.grafana = {
  settings = {
    "auth.proxy" = {
      enabled = true;
      header_name = "Tailscale-User-Login";
      header_property = "username";
      auto_sign_up = true;
      sync_ttl = 60;
      whitelist = "";
      headers = "Email:Tailscale-User-Login Name:Tailscale-User-Name";
      enable_login_token = true;
    };
  };
};

Fixing FreeIPA ACME

Posted by Tomasz Torcz on February 08, 2024 08:17 PM

I've managed to fix one of the recent problems with my FreeIPA. To be more specific: after a few months of breakage, the ACME issuer is working for me again.

The solution was dead simple once I found the relevant line in the logs:

2024-02-07 21:59:24 [ajp-nio-0:0:0:0:0:0:0:1-8009-exec-3]
SEVERE: Servlet.service() for servlet [ACME] in context with path [/acme] threw exception
org.jboss.resteasy.spi.UnhandledException: com.netscape.certsrv.base.UnauthorizedException:
Authorization failed: Authorization failed on resource: group=Certificate Manager Agents, operation: {1}

I fired up an LDAP client (I'm using JXplorer), went to /ipaca/groups/Certificate Manager Agents and added two records: type uniqueMember with value uid=acme-kaitain.pipebreaker.pl,ou=people,o=ipaca, plus another one for the second replica. FreeIPA started to issue certificates again, although there are still some strange tracebacks in the logs.
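For reference, the same change can be written as an LDIF fragment and applied with ldapmodify instead of a GUI client. This is an untested sketch: the DN is inferred from the /ipaca/groups path above, and the second replica's uid would be added the same way.

```ldif
dn: cn=Certificate Manager Agents,ou=groups,o=ipaca
changetype: modify
add: uniqueMember
uniqueMember: uid=acme-kaitain.pipebreaker.pl,ou=people,o=ipaca
```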

Why did the group membership disappear? I don't know.

Mise en place de zram-generator sur Rocky Linux

Posted by Guillaume Kulakowski on February 08, 2024 12:41 PM

I hadn't paid attention, but my Scaleway VPS running Rocky Linux 8 had no swap by default. Neither did the Fedora-Fr one (running Rocky Linux 9). The swapon -s command returned nothing. After several exchanges with Nicolas about ZRAM (which compresses RAM contents to avoid writes to disk), […]
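For context, zram-generator is driven by a small INI file. A minimal configuration of the kind the post goes on to set up might look like this (the size expression and compression algorithm here are illustrative choices, not necessarily what the author used):

```ini
# /etc/systemd/zram-generator.conf
[zram0]
# Half of RAM, capped at 4 GiB
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
```

After writing the file, a `systemctl daemon-reload` followed by `systemctl start systemd-zram-setup@zram0.service` should create the device, and `swapon -s` should then list it.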

The article Mise en place de zram-generator sur Rocky Linux first appeared on Guillaume Kulakowski's blog.

fwupd: Auto-Quitting On Idle, Harder

Posted by Richard Hughes on February 05, 2024 01:25 PM

In fwupd 1.9.12 and earlier we had the following auto-quit behavior: Auto-quit on idle after 2 hours, unless:

  • Any thunderbolt controller, thunderbolt retimer or synaptics-mst devices exist.

These devices are both super slow to query and also use battery power to query, as you have to power on various hungry things and then power them down again to get the current firmware version.

In 1.9.13, due to be released in a few days' time, we now: Auto-quit after 5 minutes, unless:

  • Any thunderbolt controller, thunderbolt retimer or synaptics-mst devices exist.
  • Any D-Bus client (that used or is using fwupd) is still alive, which includes gnome-software if it’s running in the background of the GNOME session
  • The daemon took more than 500ms to start – on the logic that it's okay to wait 0.5 seconds on the CLI to get the results of a query, but we don't want to be waiting tens of seconds to check for updates on deeply nested USB hub devices.

The tl;dr is that most laptop and desktop machines have Thunderbolt or MST devices, and so they already had fwupd running all the time before, and continue to have it running all the time now. Trading 3.3MB of memory and an extra process for instant queries on a machine with GBs of memory is probably worthwhile. For embedded machines like IoT devices, and for containers (that are using fwupd to update things like the dbx), fwupd was probably starting and then quitting after 2h before, and now it is only going to be alive for 5 minutes before quitting.

If any of the thresholds (500 ms) or timeouts (5 mins) are offensive to you then it’s all configurable, see man fwupd.conf for details. Comments welcome.

PHPUnit 11

Posted by Remi Collet on February 05, 2024 11:24 AM

RPMs of PHPUnit version 11 are available in the remi repository for Fedora ≥ 38 and for Enterprise Linux (CentOS, RHEL, Alma, Rocky...).

Documentation :

This new major version requires PHP ≥ 8.2 and is not backward compatible with previous versions, so the package is designed to be installed beside versions 7, 8, 9 and 10.

Installation, Fedora and Enterprise Linux 8:

dnf --enablerepo=remi install phpunit11

Installation, Enterprise Linux 7:

yum --enablerepo=remi install phpunit11

Notice: this tool is an essential component of PHP QA in Fedora. This version should soon be available in Fedora ≥ 39 (after the review of 21 new packages).

Fedora Linux Flatpak cool apps to try for February

Posted by Fedora Magazine on February 05, 2024 08:00 AM

This article introduces projects available in Flathub with installation instructions.

Flathub is the place to get and distribute apps for all of Linux. It is powered by Flatpak, allowing Flathub apps to run on almost any Linux distribution.

Please read “Getting started with Flatpak“. To enable Flathub as your Flatpak provider, use the instructions on the Flathub site.
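If you just want the one-line setup, the command published on the Flathub setup page adds the remote:

```shell
# Add the Flathub remote if it is not already configured
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```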

Starting with this article, we will present four apps, one from each of four categories:

  • Productivity
  • Games
  • Creativity
  • Miscellaneous


Joplin

In the Productivity section we have Joplin, a note-taking and to-do list application. One of its main features is the organization of your notes. They can be placed in notebooks where they are easily searchable. Joplin has no reason to be envious of closed source note-taking applications, and it’s also multi-platform. For me, a main feature is that Markdown is available for writing your notes.


  • Multi-platform
  • Notes are in Markdown.
  • Notes can be synchronized with various cloud services, as well as its own cloud.
  • It is configured as “offline first”

This is an app that I tried on mobile first, and then I noticed the desktop app. Both are incredible.


You can install “Joplin” by clicking the install button on the web site or manually using this command:

flatpak install flathub net.cozic.joplin_desktop

Battle for Wesnoth

In the Games section we have Battle for Wesnoth. This is a classic Linux turn-based strategy game with a high fantasy theme.

In “Battle for Wesnoth” you can build up a great army, gradually turning raw recruits into hardened veterans. In later sessions, recall your toughest warriors and form a deadly host that no one can stand against! Choose units from a large pool of specialists and hand-pick a force with the right strengths to fight well on different terrains, and against all manner of opposition.


You can install “Battle for Wesnoth” by clicking the install button on the web site or manually using this command:

flatpak install flathub org.wesnoth.Wesnoth

Battle for Wesnoth is also available as an RPM in Fedora’s repositories.


Peek

In the Creativity section we have Peek. Peek is a screen recorder for capturing and saving short GIFs from your screen.

Some of its features are:

  • Selectable screen region to record
  • Recorded video saved as an optimized animated GIF
  • Record directly to WebM or MP4 format
  • Simple user interface optimized for the task
  • Support for HiDPI screens
  • Works inside a GNOME Shell Wayland session (using XWayland)

You can install “Peek” by clicking the install button on the web site or manually using this command:

flatpak install flathub com.uploadedlobster.peek

Tux Paint

In the Miscellaneous section we have Tux Paint. This classic application is a drawing program aimed at ages 3 to 12. But the truth is that you can create awesome drawings with it, not just stamps and colors.

Tux Paint has been on Linux for a long time, and it’s great software. It’s in the Miscellaneous category because, even though it was born as educational software, it can be used as a drawing tool and as a game.


You can install “Tux Paint” by clicking the install button on the web site or manually using this command:

flatpak install flathub org.tuxpaint.Tuxpaint

Tux Paint is also available as an RPM in Fedora’s repositories.

anv: vulkan av1 decode status

Posted by Dave Airlie on February 05, 2024 03:16 AM

Vulkan Video AV1 decode has been released, and I had some partly working support in the Intel ANV driver previously, but I let it lapse.

The branch is currently at [1]. It builds, but is totally untested; I'll get some time next week to plug in my DG2 and see if I can persuade it to decode some frames.

Update: the current branch decodes one frame properly, reference frames need more work unfortunately.

[1] https://gitlab.freedesktop.org/airlied/mesa/-/commits/anv-vulkan-video-decode-av1

Week 5 in Packit

Posted by Weekly status of Packit Team on February 05, 2024 12:00 AM

Week 5 (January 30th – February 5th)

  • packit validate-config now checks whether Upstream Release Monitoring for the package is correctly configured if a pull_from_upstream job is present in the configuration. (packit#2226)
  • There is a new global configuration option parse_time_macros that allows configuring macros to be explicitly defined or undefined at spec file parse time. (packit#2222)
  • We have added additional retries and improvements for task processing. (packit-service#2326)

Episode 414 – The exploited ecosystem of open source

Posted by Josh Bressers on February 05, 2024 12:00 AM

Josh and Kurt talk about open source projects proving builds, and things nobody wants to pay for in open source. It’s easy to have unrealistic expectations for open source projects, but we have the open source that capitalism demands.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3310-4" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_414_The_exploited_ecosystem_of_open_source.mp3?_=4" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_414_The_exploited_ecosystem_of_open_source.mp3</audio>

Show Notes


Posted by Christiano Anderson on February 04, 2024 03:57 PM
It was fun to attend FOSDEM for the first time. The event is of immense magnitude: multiple talks occur simultaneously, and approximately 8000 people from all over the globe attended. It's a chance to meet new people, learn about new technology, get stickers (I'm trying to find more space on my laptop for more stickers) and help out at the event. My first FOSDEM and my first time volunteering to herald the lightning talks, what a great way to get immersed in the event!

Vector times Matrix in C++

Posted by Adam Young on February 03, 2024 11:56 PM

The basic tool for neural networks is: vector times matrix equals vector. The first vector is your input pattern and the second is your output pattern. Stack these in a series and you have a deep neural network. The absolute simplest implementation I could find for this is in C++ using Boost.

Here it is:

#include <algorithm>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>

int main () {
    using namespace boost::numeric::ublas;
    matrix<double> m (3, 3);
    vector<double> v (3);
    // Fill the matrix row by row and the vector with its index.
    for (unsigned i = 0; i < std::min (m.size1 (), v.size ()); ++ i) {
        for (unsigned j = 0; j < m.size2 (); ++ j)
            m (i, j) = 3 * i + j;
        v (i) = i;
    }

    vector<double> out1 = prod (m, v);   // matrix times vector
    vector<double> out2 = prod (v, m);   // vector times matrix

    std::cout << out1 << std::endl;
    std::cout << out2 << std::endl;
}


This is almost verbatim the sample code from Boost BLAS. Scroll down to the prod example. So now the question is, how does one go from that to a neural network? More to come.

Another question that comes to mind is how you would optimize this if you had a vector-based co-processor on your machine.
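To make the "stack these in a series" idea concrete, here is a minimal sketch in plain Python (deliberately not the Boost code above): two matrix-vector products with a nonlinearity between them. The weights, sizes, and the choice of ReLU are arbitrary illustration values, not anything from the original post.

```python
# Two stacked matrix-vector products with a nonlinearity in between:
# the skeleton of a (tiny, untrained) feed-forward network.

def matvec(m, v):
    """Multiply matrix m (a list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in m]

def relu(v):
    """Elementwise max(x, 0) nonlinearity."""
    return [x if x > 0 else 0.0 for x in v]

# Layer shapes: 3 -> 3 -> 2. w1 uses the same fill as the C++ sample.
w1 = [[3 * i + j for j in range(3)] for i in range(3)]
w2 = [[1.0, 0.0, -1.0], [0.5, 0.5, 0.5]]

x = [0.0, 1.0, 2.0]              # input pattern, like v in the C++ code
h = relu(matvec(w1, x))          # hidden layer
y = matvec(w2, h)                # output pattern
print(y)
```

Replacing `matvec` with a hardware-accelerated routine (BLAS, or a vector co-processor's intrinsics) is exactly where the optimization question above comes in, since the structure of the computation stays the same.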

Infra & RelEng Update – Week 5 2024

Posted by Fedora Community Blog on February 02, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work.

We provide you both an infographic and a text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in-depth details look below the infographic.

Week: 29 January – 02 February 2024

Read more: Infra & RelEng Update – Week 5 2024

I&R infographic: https://communityblog.fedoraproject.org/wp-content/uploads/2024/02/Week-5-2024-Report-Infographics-scaled.jpg

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives


Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high-quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Linux (OL).


  • Packaging workshop at CentOS Connect

List of new releases of apps maintained by CPE

  • Minor update of Anitya from 1.8.1 to 1.9.0 on 2024-01-30: https://github.com/fedora-infra/anitya/releases/tag/1.9.0

If you have any questions or feedback, please respond to this report or contact us in the -cpe channel on Matrix.

The post Infra & RelEng Update – Week 5 2024 appeared first on Fedora Community Blog.