Fedora People

Dedicated Windows XML eventlog parser in syslog-ng

Posted by Peter Czanik on March 05, 2024 10:58 AM

Version 4.6 of syslog-ng introduced windows-eventlog-xml-parser(), a dedicated parser for XML-formatted event logs from Windows. It makes the EventData portion of log messages more useful, as it combines two arrays into a list of name-value pairs.

https://www.syslog-ng.com/community/b/blog/posts/dedicated-windows-xml-eventlog-parser-in-syslog-ng


Aggregating messages in syslog-ng using grouping-by()

Posted by Peter Czanik on March 05, 2024 10:53 AM

Sometimes you have many log messages from an app, but none of them have the exact content you need. This is where the grouping-by() parser of syslog-ng can help. It allows you to aggregate information from multiple log messages into a single message.

In this blog, I will show you how to parse sshd logs using the patterndb parser of syslog-ng, and then create an aggregate message from the opening and closing log messages using grouping-by().

https://www.syslog-ng.com/community/b/blog/posts/aggregating-messages-in-syslog-ng-using-grouping-by


Go 1.21 in Fedora 38

Posted by Álex Sáez on March 05, 2024 08:39 AM

Fedora 38 will have Go 1.21 until its EOL (2024-05-14). FESCo approved the change, and it was announced a few days ago. The update is in testing and, unless something happens, it will be pushed soon. Feel free to test it and give karma if everything works for you.

By the way, this doesn’t mean there will be a mass rebuild of everything in Fedora 38 to use Go 1.21. Only new updates will use it. This is the third time we have done this kind of update to avoid having an unsupported version in Fedora for weeks.

And, just a reminder that if you want the latest Go releases, you can always enable the golang-rawhide COPR project that I maintain.
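
Enabling the repository and updating Go then looks roughly like this (the owner/project name below is an assumption, so check the Copr project page for the exact name):

sudo dnf copr enable alexsaezm/golang-rawhide
sudo dnf update golang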

Welcome Outreachy 2024 applicants!

Posted by Fedora Community Blog on March 05, 2024 08:00 AM

On March 4th, 2024, the application phase kicked off for the Outreachy 2024 internship program. Fedora is proud to continue our participation in Outreachy again this year. We are offering three internships that will run from May to August 2024. This blog post is an orientation for both community members and new applicants to the Fedora community to understand Outreachy, what projects we are running this year, and some best practices for working with the Fedora community.

Read on for more details!

About Outreachy

From the Outreachy website:

Outreachy provides internships in open source and open science. Outreachy provides internships to people subject to systemic bias and impacted by underrepresentation in the technical industry where they are living. […] Outreachy internships are for applicants from around the world who face under-representation, and systemic bias or discrimination in the technology industry of their country. Our internships are completely remote, paid, and last three months.

outreachy.org

Outreachy is unique among internship programs because it includes a contribution phase, where applicants get involved and participate in the project before interns are selected. Project mentors evaluate applicants based on their participation and contributions in the community. The application phase runs from March 4th, 2024 to April 2nd, 2024, when applicants must submit their final application.

Because of this, many newcomers will start showing up in Fedora Project contributor spaces over the next couple of weeks. This can cause a flood of activity and engagement in project-related channels, which might be overwhelming if you are not expecting it! Fortunately, Fedora has an excellent team of mentors running three projects this year.

Fedora Outreachy 2024 projects

Fedora is running three projects under Outreachy this year. The descriptions below are taken directly from the Outreachy website as shown to applicants:

Create a gateway from webhooks to Fedora Messaging

We currently have multiple applications that receive a webhook from an online app and turn the content into a Fedora Messaging message. Some of them are very old and are still using fedmsg, others are kinda recent and more maintainable (with tests, CI, etc).

It would simplify our setup and our maintenance to regroup all that in a single app with multiple endpoints. There’s been an investigation on this subject already: https://fedora-arc.readthedocs.io/en/latest/webhook2fedmsg/

Create an outreach strategy, write documentation, run a marketing campaign, and measure results

Join the Fedora community as a Community Architect Intern, working together with Fedora Project leadership and the Marketing Team to develop an outreach strategy, improve technical documentation, run and execute a marketing campaign, and measure results on efforts. This project proposal does not require a software engineering or computer science background, although this can be an asset.

May to August is an exciting time of year for the Fedora Project. We will release Fedora Linux 40 in early May, the DevConf CZ conference in Brno, Czechia will happen in June, and our flagship contributor conference, Flock, will take place in August. In addition, Fedora is in the process of defining a strategy for the global project until 2028. In a community as diverse and globally distributed as Fedora, communications are a key part of our success in boosting community awareness of key initiatives and inviting participation at the right times. We are looking for an intern who can work with stakeholders to build an outreach strategy for May to August, see it through from start to finish, and measure the impact of that strategy.

A day in the life of a Community Architect Intern might look like the list of tasks below:

  • Write blog posts to share important dates or deadlines.
  • Improve documentation on how a community member can request support on promoting a new change in Fedora Linux.
  • Come up with social media campaigns for our various accounts.
  • Look over analytics and metrics about blog posts, social media posts, and other available data to figure out if a campaign is working or if it needs a new direction.
  • Create slideshow presentations that orient travelers to an event about key information, messaging about the Fedora Project, and how to get help promoting scheduled content (e.g. an accepted conference session by a Fedora speaker) on Fedora’s social media accounts.

This list is an example, but there is room for an intern to also bring their own ideas and creativity to help us share the story of the Fedora Project and why we have an amazing community.

Create a tool to use natural language to generate NetworkManager configuration

NetworkManager is the standard Linux network configuration tool suite. It supports a large range of networking setups, from desktop to servers and mobile and integrates well with popular desktop environments and server configuration management tools.

Nmstate is a library with an accompanying command line tool that manages host networking settings in a declarative manner. The networking state is described by a pre-defined schema. Reporting of current state and changes to it (desired state) both conform to the schema.

Linux System Roles is a project related to Ansible, a tool for automating configuration management, application deployment and software provisioning. The goal of linux system roles is to provide a consistent user interface, abstracting from any particular implementation of the linux subsystems, but trying to get the most out of the particular libraries on each one of them. The Network Linux System Role currently provides a unique configuration interface for network-scripts and NetworkManager.

The topic for this internship is enhancing this ecosystem with AI capabilities to improve the user experience for these projects.

While it is rather easy for users to describe in natural language what they would like to configure, it can be hard to find the right options or use the right syntax in configuration files. AI provides a way to use natural language. As part of the internship, the projects should be enhanced to provide a user-support TUI based on prompts such as “Configure network devices eth0 and eth1 in a linux bridge”.

Best practices for contributing to Fedora in Outreachy 2024

Are you an applicant for the Outreachy 2024 round? If so, this section is for you! Fedora has been around for over 20 years. There are norms and behaviors in our community that differ from other open source communities. Understanding these norms and following some best practices will help you put your best foot forward on your journey contributing to Fedora.

Make your Fedora toolbox

There are a few important tools we use to communicate and get work done in Fedora:

  • Fedora Account System (FAS): Every Fedora contributor needs to register an account in the Fedora Account System, shortened to FAS. Your FAS account is like your digital passport in Fedora. You will use this account to log into different apps and services across the Fedora community.
    • Tip: Make sure to fill out your profile completely, including time zone, Matrix ID, and a profile picture. This will help others get to know you and also know when you might be asleep or offline!
  • Fedora Discussion: Our online forum. You use a FAS account to log in here. We use this for long-form discussions or things that need wider community input.
  • Fedora Chat: Our Matrix homeserver. You can use a FAS account to get a free Matrix account, or you can register one using Element. Anyone with a Matrix account can join Fedora Chat rooms. We use Matrix chat rooms for quick discussions or short-term planning and coordination.
  • Git forges: A git forge is a platform which uses git to collaborate. This can be for code or for issues to track, discuss, and plan work. Depending on what part of Fedora you are contributing to, this could be on GitLab, Pagure, or GitHub.

Setting yourself up on these tools will equip you for success. Regardless of which project you are contributing to during this application round, all Fedora contributors are encouraged to use these tools. Getting set up on these will help you be connected and informed about what is happening in Fedora.

Join important Fedora Chat rooms

Once you are oriented on Matrix, there are a few rooms that Outreachy applicants should make sure to join for the best experience:

  • Fedora: This is a Matrix Space that encompasses all of the Fedora rooms on Matrix. Joining this Matrix Space will make it easier to find all the Fedora rooms that exist on Matrix.
  • Fedora Mentoring: All Outreachy applicants should be in this room, as it is a general room for guidance and questions about contributing to Fedora. Mentors of other Outreachy projects are also in this room, so there is a lot of mentor knowledge there. Make sure you are joined here!
  • Announcements: Another good room to get a pulse on what is happening in the community. New blog posts automatically get posted here by a chat bot.
  • Fedora Meeting: For nearly 20 years, Fedora has run team and project meetings over chat rooms. We even have a bot to help us with those meetings, and the bot is in this room! You can see other teams having meetings there too, which will help give you a view of what all is happening in Fedora.

Introduce yourself on Fedora Discussion

Once you create an account in the Fedora Account System, you can log into Fedora Discussion. Check out this Outreachy 2024 welcome topic. Mentors and applicants are both encouraged to check out this topic and answer a few icebreaker questions so we can all get to know each other better.

Join the Fedora Outreachy 2024 community

If you want to stay informed about the Fedora Mentored Projects programs, please join #mentoring:fedoraproject.org on Fedora Chat and/or watch the #mentored-projects-team tag on Fedora Discussion. These are the main places where we coordinate our mentored projects. We also welcome contributors who are willing to be friendly faces to newcomers to the Fedora community. Mentors are the faces of the Fedora community, and it can leave a lasting impression on a newcomer to have a friendly face welcome them into the community.

I wish the very best to all Outreachy 2024 applicants this round. Good luck on your final applications!

The post Welcome Outreachy 2024 applicants! appeared first on Fedora Community Blog.

[Short Tip] Processing line by line in a loop in Nushell

Posted by Roland Wolters on March 04, 2024 08:24 AM

For a test I recently had to process a plain list of items that was output by a program. In Bash, the usual way to do so is:

while read -r line; do COMMAND "$line"; done

But how could this be done in Nushell? Just using the same command gives an error:

❯ flatpak list|grep system|cut -f 2|while read -r line; do flatpak info $line; done
Error: nu::parser::parse_mismatch

× Parse mismatch during operation.
╭─[entry #2:1:1]
1 │ flatpak list|grep system|cut -f 2|while read -r line; do flatpak info $line; done
· ─┬
· ╰── expected operator
╰────

❯ flatpak list|grep system|cut -f 2|while read line; do flatpak info $line; done
Error: nu::parser::parse_mismatch

× Parse mismatch during operation.
╭─[entry #3:1:1]
1 │ flatpak list|grep system|cut -f 2|while read line; do flatpak info $line; done
· ──┬─
· ╰── expected block, closure or record
╰────

Instead, the trick is to tell Nushell to read the input line by line with lines, and then process each item with a closure:

flatpak list | grep system | cut -f 2 | lines | each { |it| flatpak info ($it) }

This worked flawlessly.

Fedora Linux Flatpak cool apps to try for March

Posted by Fedora Magazine on March 04, 2024 08:00 AM

This article introduces projects available in Flathub with installation instructions.

Flathub is the place to get and distribute apps for all of Linux. It is powered by Flatpak, allowing Flathub apps to run on almost any Linux distribution.

Please read “Getting started with Flatpak”. To enable Flathub as your Flatpak provider, use the instructions on the Flatpak site.
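
In short, adding the Flathub remote usually comes down to a single command:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo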

These apps are classified into four categories:

  • Productivity
  • Games
  • Creativity
  • Miscellaneous

Planify

In the Productivity section we have Planify, a great task manager with a polished UI and synchronization with cloud services like Todoist and Nextcloud.

Features:

  • Neat visual style.
  • Drag and Order: Sort your tasks wherever you want.
  • Progress indicator for each project.
  • Be more productive and organize your tasks by ‘Sections’.
  • Visualize your events and plan your day better.
  • Reminder system, you can create one or more reminders, you decide.

You can install “Planify” by clicking the install button on the web site or manually using this command:

flatpak install flathub io.github.alainm23.planify

Minetest

In the Games section we have Minetest. Minetest is an infinite-world block sandbox game and game engine. Players can create and destroy various types of blocks in a three-dimensional open world. This allows building all kinds of structures, on multiplayer servers or in single player.

Minetest is designed to be simple, stable, and portable. It is lightweight enough to run on fairly old hardware.

Minetest has many features, including:

  • Ability to walk around, dig, and build in a near-infinite voxel world
  • Crafting of items from raw materials
  • Fast and able to run on old and slow hardware
  • A simple modding API that supports many additions and modifications to the game
  • Multiplayer support via servers hosted by users
  • Beautiful lightning-fast map generator

You can install “Minetest” by clicking the install button on the web site or manually using this command:

flatpak install flathub net.minetest.Minetest

Minetest is also available as an RPM in Fedora’s repositories.
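
If you prefer the RPM, it can be installed with DNF (assuming the package name is minetest):

sudo dnf install minetest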

Minder

In the Miscellaneous section we have Minder. It was a hard decision to put it here, because it is also a productivity tool, but I personally think that being organized helps you be more creative.

Minder is a mind-mapping tool that helps you shape and organize your ideas.

The main features are:

  • Quickly create visual mind-maps using the keyboard and automatic layout.
  • Choose from many tree layout choices.
  • Support for Markdown formatting.
  • Support for insertion of Unicode characters.
  • Add notes, tasks and images to your nodes.
  • Add node-to-node connections with optional text and notes.
  • Stylize nodes, links and connections to add more meaning and improve readability.
  • Quick search of node and connection titles and notes, including filtering options.
  • Zoom in or enable focus mode to focus on certain ideas or zoom out to see the bigger picture.
  • Enter focus mode to better view and understand portions of the map.
  • Unlimited undo/redo of any change.
  • Colorized node branches.
  • Open multiple mindmaps with the use of tabs.
  • Gorgeous animations.
  • Import from OPML, FreeMind, Freeplane, PlainText (formatted), Markdown, Outliner, PlantUML, Portable Minder, filesystem, and XMind formats.
  • Export to CSV, FreeMind, Freeplane, JPEG, BMP, SVG, Markdown, Mermaid, OPML, Org-Mode, Outliner, PDF, PNG, PlainText, PlantUML, Portable Minder, filesystem, XMind and yEd formats.

You can install “Minder” by clicking the install button on the web site or manually using this command:

flatpak install flathub com.github.phase1geo.minder


Metronome

In the Creativity section we have Metronome. This little tool is designed to help musicians keep the tempo. I use it a lot with my son: he is studying violin, and this app is a must-have on his Fedora system.


You can install “Metronome” by clicking the install button on the web site or manually using this command:

flatpak install flathub com.adrienplazas.Metronome

Week 9 in Packit

Posted by Weekly status of Packit Team on March 04, 2024 12:00 AM

Week 9 (February 27th – March 4th)

  • We have fixed the bug about Packit overwriting the final state of the build in case the build start is being processed later than the end of the build. (packit-service#2358)
  • We have improved the reporting of configuration errors in GitLab. (packit-service#2357)
  • GitLabProject.get_file_content() can now correctly handle file paths starting with ./. (ogr#844)

Episode 418 – Being right all the time is hard

Posted by Josh Bressers on March 04, 2024 12:00 AM

Josh and Kurt talk about recent stories about data breaches, flipper zero banning, and realistic security. We have a lot of weird challenges in the world of security, but hard problems aren’t impossible problems. Sometimes we forget that.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_418_Being_right_all_the_time_is_hard.mp3

Show Notes

Bash $PATH filtering

Posted by James Just James on March 03, 2024 12:54 AM
As most modern GNU+Linux distro users already know, you get a lot of tools included for free! Many of these may clutter up your $PATH and make bash tab completion more difficult than it has to be. Here’s a way to improve this!

A mess: Here’s what I see when I tab-complete cd<TAB>:

james@computer:~$ cd
cd                  cddb_query          cd-info             cd-read
cd-convert          cd-drive            cd-it8              cdrecord
cd-create-profile   cd-fix-profile      cdmkdir             cdtmpmkdir
cdda-player         cd-iccdump          cd-paranoia

I genuinely only use three of those commands.

Untitled Post

Posted by Zach Oglesby on March 03, 2024 12:26 AM

Took my boys to the game store to play the prerelease of the new Star Wars TCG. They had lots of fun and everyone was so nice to them.

Contribute at the Fedora Linux 40 i18n Test Week

Posted by Fedora Magazine on March 02, 2024 05:41 PM

The i18n team is testing changes for Fedora Linux 40 (ibus-anthy 1.5.16, IBus 1.5.30 and many more). As a result, the i18n and QA teams organized a test week from Tuesday, March 05, 2024, to Monday, March 11, 2024. The wiki page in this article contains links to the test images you’ll need to participate. Please continue reading for details.

How does a test week work?

A test week is an event where anyone can help ensure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the i18n test week has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test week web application. If you’re available on or around the days of the event, please do some testing and report your results.

Happy testing, and we hope to see you on one of the test days.

Drive Failures - Data Recovery with Open-Source Tools (part 2)

Posted by Steven Pritchard on March 01, 2024 08:39 PM

This is part 2 of a multi-part series.  See part 1 for the beginning of the series.

Note that this is material from 2010 and earlier that pre-dates the common availability of solid state drives.

Detecting failures

Mechanical failures

Mechanical drive failure is nearly always accompanied by some sort of audible noise.  One common sound heard from failing hard drives is the so-called "Click of Death", a sound similar to a watch ticking (but much louder).  This can have various causes, but it is commonly caused by the read/write head inside a drive being stuck or possibly trying to repeatedly read a failing block.

Another common noise is a very high-pitched whine.  This is caused by bearings in a drive failing (most likely rubbing metal-on-metal), usually as a result of old age.  Anything that moves inside a computer (fans, for example) can make a noise like this, so always check a suspect drive away from other sources of noise to verify that the sound is indeed coming from the drive.

Drive motors failing and head crashes can cause other distinctive noises.  As a rule, any noise coming from a hard drive that does not seem normal is probably an indicator of imminent failure.

Electronic failures

Failing electronics can cause a drive to act flaky, fail to be detected, or occasionally even catch fire.

Hard drives have electronics on the inside of the drive which are inaccessible without destroying the drive (unless you happen to have a clean room).  Unfortunately, if those fail, there isn't much you can do.

The external electronics on a hard drive are usually a small circuit board that contains the interface connector and is held onto the drive with a few screws.  In many cases, multiple versions of a drive (IDE, SATA, SCSI, SAS, etc.) exist with different controller interface boards.  Generally speaking, it is possible to transplant the external electronics from a good drive onto a drive with failing electronics in order to get data off the failing drive.  Usually the controller board will need to be from an identical drive with a similar manufacturing date.

Dealing with physical failures

In addition to drive electronics transplanting, just about any trick you've heard of (freezing, spinning, smacking, etc.) has probably worked for someone, sometime.  Whether any of these tricks work for you is a matter of trial and error.  Just be careful.

Freezing drives seems to be especially effective.  Unfortunately, as soon as a drive is operating, it will tend to heat up quickly, so some care needs to be taken to keep drives cool without letting them get wet from condensation.

Swapping electronics often works when faced with electronic failure, but only when the donor drive exactly matches the failed drive.

Freezing drives often helps in cases of crashed heads and electronic problems. Sometimes they will need help to stay cold (ice packs, freeze spray, etc.), but often once they start spinning, they'll stay spinning. Turning a drive on its side sometimes helps with physical problems as well.


Unfortunately, we do have to get a drive to spin for any software data recovery techniques to work.

To be continued in part 3.

SMART - Data Recovery with Open-Source Tools (part 3)

Posted by Steven Pritchard on March 01, 2024 08:38 PM

This is part 3 of a multi-part series.  See part 1 for the beginning of the series.

SMART

SMART (Self-Monitoring, Analysis, and Reporting Technology) can, in many cases, be used to detect drive failures. The utility smartctl (from the smartmontools package, see https://www.smartmontools.org/) can be used to view SMART data, initiate self-tests, etc.

Specifying device types

Historically, smartctl has guessed that devices named /dev/hdn are ATA (IDE) drives, and devices named /dev/sdn are SCSI drives. Since SATA drives and IDE drives using the libata driver show up as /dev/sdn, recent versions of smartctl have been modified to generally detect ATA drives named /dev/sdn, but to be sure, or in cases where smartctl needs to be told what type of device you're accessing, use the -d option. To check how you are accessing the drive, use the -i (AKA --info) option.
  • ATA (SATA and IDE drives)
smartctl -d ata -i /dev/sdn
  • SCSI
smartctl -d scsi -i /dev/sdn
  • 3ware controller, port n
smartctl -d 3ware,n -i /dev/twe0 (8000-series and earlier controllers)
smartctl -d 3ware,n -i /dev/twa0 (9000-series controllers)

smartctl supports various other device types (other RAID controllers, some USB-to-ATA bridges, etc.). See the man page or the smartmontools web site for more information.

Enabling SMART

If SMART is not enabled on the device (like when it is disabled in the BIOS), it can be enabled with smartctl -s on device. There is also a -S option that turns on autosave of vendor-specific attributes. In most cases, it shouldn't be necessary to turn this on, but it can't hurt to turn it on.
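
For example, to enable SMART support (and, optionally, autosave of the vendor-specific attributes) on a drive, where /dev/sdb is just an example device:

smartctl -s on -S on /dev/sdb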

Displaying SMART data

If you only remember one option for smartctl, make sure it is -a. That will show you everything smartctl knows about a drive. It is equivalent to -H -i -c -A -l error -l selftest -l selective for ATA drives and -H -i -A -l error -l selftest for SCSI drives.

Health

Drives use a combination of factors to determine their overall health. The drive's determination can be displayed with smartctl -H. For a failing drive, the output might look like this:

# smartctl -d ata -H /dev/sdb
smartctl 5.39.1 2010-01-28 r3054 [x86_64-redhat-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
Failed Attributes:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0033 033 033 036 Pre-fail Always FAILING_NOW 2747


For a drive that isn't failing (or, more accurately, that SMART on the drive doesn't think is failing), the output will look like this:

# smartctl -d ata -H /dev/sda
smartctl 5.39.1 2010-01-28 r3054 [x86_64-redhat-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED


Please note that a failing health self-assessment should always be taken as a clear indication of a failure, but passing this test should not be used as an indication that a drive is fine. Most actively failing drives do not trip this test.

Information

As previously mentioned, the -i option for smartctl will report drive information, such as model number, serial number, capacity, etc. The output of smartctl -i will look something like this:

# smartctl -d ata -i /dev/sda
smartctl 5.39.1 2010-01-28 r3054 [x86_64-redhat-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.12 family
Device Model: ST31000528AS
Serial Number: X4JZDJRF
Firmware Version: CC38
User Capacity: 1,000,204,886,016 bytes
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: ATA-8-ACS revision 4
Local Time is: Wed Jul 7 21:01:41 2010 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

In some cases, drives that are known to have firmware bugs will also give output like this:

==> WARNING: There are known problems with these drives,
see the following Seagate web pages:
http://seagate.custkb.com/seagate/crm/selfservice/search.jsp?DocId=207931
http://seagate.custkb.com/seagate/crm/selfservice/search.jsp?DocId=207951
http://seagate.custkb.com/seagate/crm/selfservice/search.jsp?DocId=207957

Capabilities

The -c option for smartctl displays drive capabilities. The most interesting bit of information displayed with this option is the suggested amount of time required for various self-tests. The full output will look like this:

# smartctl -d ata -c /dev/sda
smartctl 5.39.1 2010-01-28 r3054 [x86_64-redhat-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
                                       was completed without error.
                                       Auto Offline Data Collection: Enabled.
Self-test execution status:     (   0) The previous self-test routine completed
                                       without error or no self-test has ever
                                       been run.
Total time to complete Offline
data collection:                ( 600) seconds.
Offline data collection
capabilities:                   (0x7b) SMART execute Offline immediate.
                                       Auto Offline data collection on/off support.
                                       Suspend Offline collection upon new
                                       command.
                                       Offline surface scan supported.
                                       Self-test supported.
                                       Conveyance Self-test supported.
                                       Selective Self-test supported.
SMART capabilities:           (0x0003) Saves SMART data before entering
                                       power-saving mode.
                                       Supports SMART auto save timer.
Error logging capability:       (0x01) Error logging supported.
                                       General Purpose Logging supported.
Short self-test routine
recommended polling time:       (   1) minutes.
Extended self-test routine
recommended polling time:       ( 175) minutes.
Conveyance self-test routine
recommended polling time:       (   2) minutes.
SCT capabilities:             (0x103f) SCT Status supported.
                                       SCT Feature Control supported.
                                       SCT Data Table supported.

SMART attributes

The -A option for smartctl displays vendor-specific device attributes that are stored by the device.

# smartctl -d ata -A /dev/sdb
smartctl 5.39.1 2010-01-28 r3054 [x86_64-redhat-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f 099   087   006    Pre-fail Always  -           134820080
  3 Spin_Up_Time            0x0003 095   095   000    Pre-fail Always  -           0
  4 Start_Stop_Count        0x0032 100   100   020    Old_age  Always  -           16
  5 Reallocated_Sector_Ct   0x0033 033   033   036    Pre-fail Always  FAILING_NOW 2748
  7 Seek_Error_Rate         0x000f 072   062   030    Pre-fail Always  -           16103679
  9 Power_On_Hours          0x0032 097   097   000    Old_age  Always  -           3165
 10 Spin_Retry_Count        0x0013 100   100   097    Pre-fail Always  -           0
 12 Power_Cycle_Count       0x0032 100   100   020    Old_age  Always  -           8
183 Runtime_Bad_Block       0x0032 100   100   000    Old_age  Always  -           0
184 End-to-End_Error        0x0032 100   100   099    Old_age  Always  -           0
187 Reported_Uncorrect      0x0032 100   100   000    Old_age  Always  -           0
188 Command_Timeout         0x0032 100   099   000    Old_age  Always  -           8590065676
189 High_Fly_Writes         0x003a 100   100   000    Old_age  Always  -           0
190 Airflow_Temperature_Cel 0x0022 071   065   045    Old_age  Always  -           29 (Lifetime Min/Max 27/30)
194 Temperature_Celsius     0x0022 029   040   000    Old_age  Always  -           29 (0 9 0 0)
195 Hardware_ECC_Recovered  0x001a 044   020   000    Old_age  Always  -           134820080
197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  -           0
198 Offline_Uncorrectable   0x0010 100   100   000    Old_age  Offline -           0
199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           0
240 Head_Flying_Hours       0x0000 100   253   000    Old_age  Offline -           257186936654939
241 Total_LBAs_Written      0x0000 100   253   000    Old_age  Offline -           2601921204
242 Total_LBAs_Read         0x0000 100   253   000    Old_age  Offline -           551656776

Generally speaking, these attributes should be mostly self-explanatory. For example, attribute #9, Power_On_Hours, stores the number of hours that the drive has been powered on. In this example, the drive has been on 3165 hours (seen in the RAW_VALUE column), which is a bit over 4 months.

Drives store thresholds for what value indicates a failure. In this example, note that attribute 5, Reallocated_Sector_Ct, which has a value of 2748, is considered FAILING_NOW.

SMART logs

The -l name option for smartctl displays the SMART log name stored on the device. There are several such logs that any given device might support, but the most interesting are the error and selftest logs.

The error log is, like the name suggests, a log of events that are seen as errors by the drive. A device that supports (and stores) a SMART error log, but currently has nothing logged, will look like this:

# smartctl -d ata -l error /dev/sda
smartctl 5.39.1 2010-01-28 r3054 [x86_64-redhat-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART Error Log Version: 1
No Errors Logged


And here's an example of a device with one error logged:

# smartctl -d ata -l error /dev/sda
smartctl 5.39.1 2010-01-28 r3054 [i386-redhat-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART Error Log Version: 1
ATA Error Count: 1
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 1 occurred at disk power-on lifetime: 4775 hours (198 days + 23 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 aa b9 2f 04

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
  -- -- -- -- -- -- -- -- ---------------- --------------------
  60 00 08 a7 b9 2f 44 00 5d+18:49:37.312 READ FPDMA QUEUED
  61 00 10 87 2e de 44 00 5d+18:49:37.296 WRITE FPDMA QUEUED
  61 00 01 9a 7b 56 40 00 5d+18:49:37.272 WRITE FPDMA QUEUED
  61 00 20 ff ff ff 4f 00 5d+18:49:37.235 WRITE FPDMA QUEUED
  60 00 10 f7 98 59 40 00 5d+18:49:37.212 READ FPDMA QUEUED

The error log will only show the five most recent entries, but that is usually enough context to get an idea of what is wrong.

SMART self-tests

The -t type option tells smartctl to run a self-test of type type on the drive. type can be one of several options, although the most common are short, long, and conveyance. smartctl -t short runs a SMART Short Self Test, which usually finishes in just a couple of minutes. smartctl -t long runs a SMART Extended Self Test, which often will take an hour or more to run. smartctl -t conveyance runs a SMART Conveyance Self Test, which checks for damage sustained during transport (drops and such).

The output will look like this:

# smartctl -t short /dev/sdb
smartctl 5.39.1 2010-01-28 r3054 [x86_64-redhat-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Short self-test routine immediately in off-line mode".
Drive command "Execute SMART Short self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 2 minutes for test to complete.
Test will complete after Mon Sep 6 20:22:49 2010

Use smartctl -X to abort test.


After waiting the appropriate amount of time (2 minutes, in the previous case, as seen in the smartctl -t short output, but which can also be found with smartctl -c), you can use smartctl -l selftest to view the self-test results.

# smartctl -l selftest /dev/sdb
smartctl 5.39.1 2010-01-28 r3054 [x86_64-redhat-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num Test_Description     Status                  Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline        Completed: read failure       90%    8835         17135
# 2 Short offline        Completed without error       00%    0            -

In the example above, a short test completed successfully at a lifetime of 0 hours, but another short test failed with a read failure with 90% remaining at a lifetime of 8835 hours. (Test results are listed in order of most recent to oldest.)

More information

Google has done some excellent work in determining how SMART and various other data relates to drive failure. See https://static.googleusercontent.com/media/research.google.com/en//archive/disk_failures.pdf.

To be continued in part 4.

Infra & RelEng Update – Week 9, 2024

Posted by Fedora Community Blog on March 01, 2024 12:06 PM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for the CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work.

We provide you both an infographic and a text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in-depth details, read the text version below the infographic.

Week: February 25-29, 2024

Infra&Releng Infographics

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high-quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, and Scientific Linux (SL), and Oracle Linux (OL).

Updates

  • Packaged onnx into EPEL9 at the request of the Fedora AI/ML SIG

If you have any questions or feedback, please respond to this report or contact us on the -cpe channel on matrix.

The post Infra & RelEng Update – Week 9, 2024 appeared first on Fedora Community Blog.

Toolbx is a release blocker for Fedora 39 onwards

Posted by Debarshi Ray on March 01, 2024 11:44 AM

This is the second instalment of my 2023 retrospective series on Toolbx. 1

One very important thing that we did behind the scenes was to make Toolbx a release blocker for Fedora 39 and onwards. This means that the registry.fedoraproject.org/fedora-toolbox OCI image is considered a release-blocking deliverable, and there are release-blocking test criteria to ensure that the toolbox RPM is usable.

Why do that?

Earlier, there was no formal requirement for Toolbx to be usable when a new Fedora was released. That was a problem for a tool that’s so popular and provides something as fundamental as an interactive command line environment for software development and troubleshooting the host operating system. Everybody expects their CLI environment to just work even under very adverse conditions, and Toolbx should be no different. Except that Toolbx is slightly more complicated than running Bash or Z shell directly on the host OS, and, therefore, requires a bit more diligence.

Toolbx has two parts — an OCI image, which defaults to registry.fedoraproject.org/fedora-toolbox on Fedora hosts, and the toolbox RPM. The OCI image is pulled by the RPM to set up a containerized interactive CLI environment.

Let’s look at each separately.


The image

First, we wanted to ensure that there is an up to date fedora-toolbox OCI image published on registry.fedoraproject.org as a release-blocking deliverable at critical points in the development schedule, just like the installation ISOs for the Editions from download.fedoraproject.org. For example, when an upcoming Fedora release is branched from Rawhide, and for the Beta and Final releases.

One of the recurring complaints that we used to get was from users of Fedora Rawhide Toolbx containers, when Rawhide gets branched in preparation for the Beta of the next Fedora release. At this point, the previous Rawhide version becomes the Branched version, and the current Rawhide version increases by one. If the fedora-toolbox images aren’t part of the mass branching performed by Fedora Release Engineering, then someone has to quickly step in after they have finished to refresh the images to ensure consistency. This sort of ad hoc manual co-ordination rarely works, and it left users in the lurch.

With this change, the fedora-toolbox image is part of the nightly Fedora composes, and the branching is handled by Fedora Release Engineering just like any other release-blocking deliverable. This makes the image as readily available and updated as the fedora and fedora-minimal OCI images or any other deliverable, and we hope that it will improve the user experience for Rawhide Toolbx containers.

If someone installs the Fedora Beta or the Final on their host, and creates a Toolbx container using the default image, then, barring exceptions, the host and the container now have the same RPM versions for all packages. Just like Fedora Silverblue and Workstation are released with the same versions. This ensures greater consistency in terms of bug-fixes, features and pending updates.

In the past, this wasn’t the case and it led to occasional surprises. For example, the change to make RPM use a Sequoia based OpenPGP parser made it impossible to install third party RPMs in the fedora-toolbox image, even long after the actual bug was fixed.

The RPM

Second, we wanted to have release-blocking test criteria to ensure that the toolbox RPM is usable at critical points in the development schedule. This is to ensure that changes in the Toolbx stack, and future changes in other parts of the operating system do not break Toolbx — at least not for the Beta and Final releases. It’s good to have the fedora-toolbox image be more readily available and updated, but it’s better if Toolbx works more reliably as a whole.

Examples of changes in the Toolbx stack causing breakage can be FUSE preventing RPMs with file capabilities from being installed inside Toolbx containers, Toolbx bind mounts preventing RPMs with %attr() from being installed or causing systemd-tmpfiles(8) to throw errors, etc. Examples of changes in other parts of the OS can be changes to Fedora’s Kerberos stack causing Kerberos to stop working inside Toolbx containers, changes to the sysctl(8) configuration breaking ping(8), changes in Mutter breaking graphical applications, etc.

The test criteria for the toolbox RPM also implicitly tests the fedora-toolbox image, and co-ordinates several disparate groups of developers to ensure that the containerized interactive command line Toolbx environments on Fedora are just as reliable as those running directly on the host OS.

Tooling changes

This does come with a significant tooling change that isn’t obvious at first. The fedora-toolbox OCI image is no longer defined as a layered image through a Container/Dockerfile. Instead, it’s built as a base image through Kickstarts and Pungi, just like the fedora and fedora-minimal images.

This was necessary because the nightly Fedora composes work with Kickstarts and Pungi, not Container/Dockerfiles. Moreover, building Fedora OCI images from a Dockerfile with fedpkg container-build uses an ancient unmaintained version of OpenShift Build Service that requires equally unmaintained ancient versions of Fedora to run, and the fedora-toolbox image was the only thing using Container/Dockerfiles in Fedora.

We either had to update the Fedora infrastructure to use OpenShift Build Service 2.x; or use Kickstarts and Pungi, which uses Image Factory, to build the fedora-toolbox image. We chose the latter, because updating the infrastructure would be a significant effort, and by using Kickstarts and Pungi we get to stay close to the fedora and fedora-minimal images and simplify the infrastructure.

The Fedora Flatpaks were also being built using the same ancient and unmaintained version of OpenShift Build Service, and they too are in the process of being migrated. However, that’s outside the scope of this post.

One big benefit of fedora-toolbox not being a layered image based on top of the fedora image is that it removes the constant fight against the efforts to minimize the size of the latter. The fedora-toolbox image is designed for interactive command line use in long-lived containers, and not for deploying server-side applications and services in ephemeral ones. This means that dictionaries, documentation, locales, iconv converter modules, translations, etc. are more important than reducing the size of the images. Now that the image is built from scratch, it has full control over what goes into it.

Unfortunately, Image Factory is weakly maintained and setting it up on one’s local machine is a lot more complicated than using podman build. One can do scratch builds on the Fedora infrastructure with koji image-build --scratch, but only if they have been explicitly granted permissions, and then they have to download the tarball and use skopeo copy to place them in containers-storage so that Podman can see it. All that is again more complicated than doing a podman build.
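
As a rough sketch of the second half of that workflow, after downloading the image tarball from the scratch build (the file name, archive format and tag here are hypothetical), one can do something like:

skopeo copy docker-archive:fedora-toolbox-scratch.tar containers-storage:localhost/fedora-toolbox:scratch
podman images localhost/fedora-toolbox

which is still noticeably clunkier than a plain podman build.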

Due to this difficulty of untangling the image build from the Fedora infrastructure, we haven’t published the sources of the fedora-toolbox image for recent Fedora versions upstream. We do have a fedora-toolbox:39 image defined through a Container/Dockerfile, but that was done purely as a contingency during the Fedora 39 development cycle.

This does degrade the developer experience of working on the fedora-toolbox image, but, given all the other advantages, we think that it’s worth it.

As of this writing, there’s a Fedora 40 Change to switch to using KIWI to build the OCI images, including fedora-toolbox, instead of Image Factory. KIWI seems more strongly maintained and a lot easier to set up locally, which is fantastic. So, it should be all rainbows and unicorns, once we soldier through another port of the fedora-toolbox image to a different tooling and source language.

Acknowledgements

Last but not least, getting all this done on time required a good deal of co-ordination and help from several different individuals. I must thank Sumantro for leading the effort; Kevin, Tomáš and Samyak for all the infrastructure and release engineering work; and Adam and Kamil for all the testing and validation.

  1. Toolbx now offers built-in support for Arch Linux and Ubuntu ↩

4 cool new projects to try in Copr for March 2024

Posted by Fedora Magazine on March 01, 2024 08:00 AM

Copr is a build-system for anyone in the Fedora community. It hosts thousands of projects for various purposes and audiences. Some of them should never be installed by anyone, some are already being transitioned to the official Fedora Linux repositories, and the rest are somewhere in between. Copr gives you the opportunity to install 3rd party software that is not available in Fedora Linux repositories, try nightly versions of your dependencies, use patched builds of your favorite tools to support some non-standard use-cases, and just experiment freely.

This article takes a closer look at interesting projects that recently landed in Copr.

If you don’t know how to enable a repository or if you are concerned about whether it is safe to use Copr, please consult the project documentation.

PyInstaller

Do you need to ship your Python script to a customer? And for some reason, you cannot use either RPM or Flatpak? Try PyInstaller! You just use the following command:

pyinstaller yourscript.py

and PyInstaller will extract your dependencies and bundle everything together. It will create a yourscript binary (ELF) and a yourscript/ directory with all these bundles. I tried it with one of my small scripts, and it created a 566 MB directory and a 20 MB binary. See the PyInstaller documentation for how to use it.
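
If a single self-contained executable is more convenient for you than a directory, PyInstaller also has a one-file mode (at the cost of slower startup, since the bundle gets unpacked at run time):

pyinstaller --onefile yourscript.py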

Installation instructions

The repo currently provides PyInstaller for EPEL 9, Fedora 38, 39, 40, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable slaanesh/PyInstaller
sudo dnf -y install python3-pyinstaller

Anime Games Launcher

Anime Games Launcher is very similar to Lutris, but focuses on anime games only. It is a wrapper around Wine that allows you to launch the game. For example, you can run the very popular game Genshin Impact. You can do the same with Lutris, but it certainly requires more effort. With Anime Games Launcher, it was literally two clicks.


Installation instructions

The repo currently provides Launcher for Fedora 39, 40, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable retrozinndev/anime-games-launcher
sudo dnf install anime-games-launcher
anime-games-launcher

ec2-instance-connect

AWS EC2 Instance Connect Configuration is a set of configurations and scripts that allows you to connect to a VM instance from the AWS dashboard using EC2 Instance Connect. For the connections, SSH is used – so this is a configuration for SSH and several supporting tools. This repository is owned by Major Hayden, a Cloud SIG member. The description of this project is “initial packaging”, so we may find this package in core Fedora one day.

Installation instructions

The repo currently provides packages for RHEL 9, Fedora 39, and Fedora Rawhide. To install it, use this command:

sudo dnf copr enable mhayden/ec2-instance-connect 
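
After enabling the repository, installation should be a single additional command (the package name here is an assumption based on the Copr project name):

sudo dnf install ec2-instance-connect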

Gitea

I will quote Wikipedia:

“Gitea (/ɡɪˈtiː/) is a forge software package for hosting software development version control using Git as well as other collaborative features like bug tracking, code review, continuous integration, kanban boards, tickets, and wikis. It supports self-hosting but also provides a free public first-party instance…”

This Copr project includes packages that allow you to deploy the self-hosted instance.


Installation instructions

The repo currently provides packages for Fedora 38, 39, and 40. To install it, use this command:

sudo dnf copr enable felfert/gitea  

See upstream installation instructions.

PHP version 8.2.17RC2 and 8.3.4RC1

Posted by Remi Collet on March 01, 2024 06:26 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests, and also as base packages.

RPMs of PHP version 8.3.4RC1 are available

  • as base packages
    • in the remi-modular-test for Fedora 38-40 and Enterprise Linux ≥ 8
    • in the remi-php83-test repository for Enterprise Linux 7
  • as SCL in remi-test repository

RPMs of PHP version 8.2.17RC2 are available

  • as base packages
    • in the remi-modular-test for Fedora 38-40 and Enterprise Linux ≥ 8
    • in the remi-php82-test repository for Enterprise Linux 7
  • as SCL in remi-test repository

The Fedora 39, 40, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

PHP version 8.1 is now in security mode only, so no more RC will be released.

Installation: follow the wizard instructions.

Announcements:

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Parallel installation of version 8.2 as Software Collection:

yum --enablerepo=remi-test install php82

Update of system version 8.3 (EL-7) :

yum --enablerepo=remi-php83,remi-php83-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.2 (EL-7) :

yum --enablerepo=remi-php82,remi-php82-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module switch-to php:remi-8.2
dnf --enablerepo=remi-modular-test update php\*

Notice:

  • version 8.3.4RC1 is also in Fedora rawhide for QA
  • EL-9 packages are built using RHEL-9.3
  • EL-8 packages are built using RHEL-8.9
  • EL-7 packages are built using RHEL-7.9
  • oci8 extension uses the RPM of the Oracle Instant Client version 21.13 on x86_64 or 19.19 on aarch64
  • intl extension uses libicu 73.2
  • RC version is usually the same as the final version (no change accepted after RC, exception for security fix).
  • versions 8.2.17 and 8.3.4 are planned for March 15th, in 2 weeks.

Software Collections (php82, php83)

Base packages (php)

Securing via systemd, a story

Posted by Kushal Das on February 29, 2024 11:36 AM

Last night I deployed a blog based on https://writefreely.org and secured it with systemd by adding DynamicUser=yes. But the service itself could not write to the SQLite database.

Feb 28 21:37:52 kushaldas.se writefreely[1652088]: ERROR: 2024/02/28 21:37:52 database.go:3000: Couldn't insert into posts: attempt to write a readonly database

This morning I realized that these settings blocked writing to all paths except a few temporary ones. I had to add a StateDirectory and use the same path as the WorkingDirectory so that the service works correctly.
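
A minimal sketch of the relevant unit settings (the service name and paths are illustrative):

[Service]
DynamicUser=yes
StateDirectory=writefreely
WorkingDirectory=/var/lib/writefreely

With StateDirectory=, systemd creates the directory under /var/lib owned by the dynamic user, so pointing WorkingDirectory= at the same path gives the service a writable location for its SQLite database.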

Replacing the SSD in my laptop

Posted by suve's ramblings on February 29, 2024 08:29 AM
A short (or so I hoped) venture into replacing the SSD in my Thinkpad X220 laptop.

Announcing Flock 2024 in Rochester, New York

Posted by Fedora Magazine on February 29, 2024 08:00 AM

The Flock to Fedora 2024 organizing team announces the next edition of Flock to Fedora. It will take place in Rochester, New York, United States from Wednesday, August 7th to Saturday, August 10th. Flock is the Fedora Project’s annual contributor-focused conference. The conference provides a venue for face-to-face meetings and conversations. It is also a place to celebrate our community. Major changes to Flock 2024 include a fourth day and our first return to the United States since 2017.

Read on for more details on Flock 2024. This includes travel and location details, the call for proposals and registration, and sponsorship opportunities. You can also learn the story behind how we selected Rochester for this year’s edition of Flock.

Flock 2024 travel & location

Flock 2024 will be in Rochester, New York at the Hyatt Regency Rochester hotel and conference center. The conference begins on Wednesday morning, August 7th and ends Saturday afternoon, August 10th. A hotel block is reserved for Flock to Fedora attendees and will be announced soon. We encourage all attendees to book rooms through the hotel block when it becomes available in order to get the discounted rate.

The Rochester airport has easy connections with other major US airports like New York City, Atlanta, Chicago, and Boston. There are also Amtrak trains with daily service to Rochester from Toronto, New York City, and Boston. Downtown Rochester has a bus network, but coverage by public transit is limited. See below for more about Rochester from visitrochester.com:

Rochester is the cultural capital of Upstate New York. The third largest city in New York State offers an abundance of cultural experiences and institutions, and has previously been named the 17th most arts-vibrant city in the country by the National Center for Arts Research, ranking alongside large cities such as New York City, Los Angeles and Washington D.C.

www.visitrochester.com

Flock 2024 call for proposals (CFP) & registration

The Flock to Fedora call for proposals (CFP) will open in early March 2024. We invite our contributor community to submit both presentations and workshops to this year’s call for proposals. Flock is not just about presentations and talks though! We can also use our valuable time together to work through challenges and opportunities. While thinking about content to submit for Flock 2024, consider adding interaction and participation into your submission proposals.

This year, we encourage submissions that connect to one or more of the following focus areas. These focus areas are aligned with the Fedora 2028 Strategy. Proposers will be asked to connect their submission(s) to one of the following focus areas:

  • Accessibility (a11y)
  • Reaching the World
  • Community Sustainability
  • Technology Innovation and Leadership
  • Editions, Spins, and Interests
  • Ecosystem Connections

More guidance about the CFP will be provided in the coming weeks.

Flock 2024 sponsorship opportunities

We are now accepting sponsorship for Flock 2024. Our sponsorship tiers will remain the same as 2023:

  • Bronze ($1,500 USD)
  • Silver ($5,000 USD)
  • Gold ($10,000 USD)
  • Platinum ($20,000 USD)

A menu of additional sponsored activities (e.g. Meal & Beverage, audio/video, DEI, and Community Social) will also be offered. A sponsorship prospectus will be available soon on the Flock website. In the meantime, send sponsorship questions using the email address on the Flock to Fedora website.

How we chose Rochester for Flock 2024

If you made it this far, you might be wondering… how did we choose Rochester, New York? If you attended the Fedora Linux 39 Release Party & 20th Anniversary last November, you might remember that we announced Mexico. It is true that this Flock nearly went to Mexico City. We were excited to bring Flock 2024 to LATAM. However, we ran into unexpected pricing difficulties at the last minute with our negotiated venues. In fact, the price we targeted nearly doubled when we were about to sign contracts for Mexico City. However, we found that the high price from our planned hotel was not unique. Price was also a challenge for other Mexico City hotels that met our requirements for Flock. When we faced this pricing challenge, we went back to the drawing board for our options.

In the original community discussion, we also evaluated Montreal, Canada and Rochester, New York, USA. We originally ruled out Montreal due to higher costs for international travel and venues. The ongoing diplomatic dispute between the Canadian and Indian governments also presented a challenge for visa sponsorship. So, we looked again at Rochester, which we had originally ruled out due to poor availability of dates. We went back directly to our venue of choice. The dates we wanted for Flock were actually available! So we quickly worked to finalize them.

A return to 2015?

Rochester is not a new place for Flock. In fact, this will be the first Flock that returns to a city we have been to before. Flock 2015 was also held in Rochester, New York. Yet it was our top contender this year for a North American city due to a range of factors. One reason is the strong open source connections via the Rochester Institute of Technology. Both Open@RIT and the open source academic curriculum foster a community of open source in the academic space. We are well-positioned to deliver an amazing Flock to Fedora experience in Rochester this year. We hope to offer a new experience for first-timers and also some surprises for our old-timers and past visitors of Flock 2015.

Questions about Flock 2024

The best place to ask questions for Flock 2024 is on Fedora Discussion and Fedora Chat. Post on Fedora Discussion using the #flock tag. Message on Fedora Chat/Matrix in the #flock:fedoraproject.org room.

More details about Flock 2024 will be shared in the coming weeks. Stay tuned for next updates like the Flock 2024 call for proposals.

2023 Year in Review: Infra & Releng

Posted by Fedora Community Blog on February 29, 2024 08:00 AM

This is a summary of the work done by the Fedora Infrastructure & Release Engineering teams in 2023. As these teams work closely together, we summarize the work done by both teams in one blog post.

This update is made from infographics and detailed updates. If you want to just see what’s new, check the infographics. If you want more details, continue reading.

<figure><figcaption>Infographic: https://communityblog.fedoraproject.org/wp-content/uploads/2024/01/Fedora_2023-scaled.jpg</figcaption></figure>

About

The purpose of these teams is to take care of day-to-day business regarding Fedora Infrastructure and Fedora Release Engineering work. They are responsible for developing and maintaining services running in Fedora and for preparing things for the new Fedora Linux release (mirrors, mass branching, new namespaces, etc.).

Issue trackers

Closed tickets

  • Fedora Infrastructure – 585
  • Fedora RelEng – 617

Fedora Infrastructure highlights

Fedora Release Engineering highlights


Photo by Taylor Vick on Unsplash. Modified by Justin W. Flory. CC BY-SA 4.0.

The post 2023 Year in Review: Infra & Releng appeared first on Fedora Community Blog.

Build and publish multi-arch containers with Quay and GitHub Actions

Posted by Fabio Alessandro Locati on February 29, 2024 12:00 AM
When I deploy a system, I always try to automate it fully. There are many reasons for this, one of which is that, in this way, the automation becomes the documentation for the system itself. Another reason that drives me to automate everything is my preference for clean systems. Another consequence of this preference I have is that in the last few years, I’ve moved many systems to a Fedora rpm-ostree flavor (eg: Fedora CoreOS, Fedora IoT, Fedora Atomic) with the various services running in containers managed directly by systemd via podman.

Mullvad VPN repository for Fedora

Posted by Kushal Das on February 27, 2024 05:37 PM

<figure><figcaption>Mullvad VPN desktop client</figcaption></figure>

Mullvad VPN now has proper rpm repository for their desktop client. You can use it in the following way on you Fedora system:

sudo dnf config-manager --add-repo https://repository.mullvad.net/rpm/stable/mullvad.repo
sudo dnf install mullvad-vpn

Remember to verify the OpenPGP key Fingerprint:

Importing GPG key 0x66DE8DDF:
 Userid     : "Mullvad (code signing) <admin@mullvad.net>"
 Fingerprint: A119 8702 FC3E 0A09 A9AE 5B75 D5A1 D4F2 66DE 8DDF
 From       : https://repository.mullvad.net/rpm/mullvad-keyring.asc

2023 Year in Review: Community Platform Engineering (CPE)

Posted by Fedora Community Blog on February 27, 2024 02:39 PM

This is a summary of the work done on initiatives by the Community Platform Engineering (CPE) Team. Every quarter, the CPE team works together with CentOS Project and Fedora Project community leaders and representatives to choose the projects that will be worked on in that quarter. The CPE team is then split into multiple smaller sub-teams that work on the chosen initiatives and the day-to-day work that needs to be done. Some of the sub-teams are dedicated to continuous efforts, while others are created only for the duration of an initiative.

This update is made from infographics and detailed updates. If you want to just see what’s new, check the infographics. If you want more details, continue reading.

<figure><figcaption>Infographic: https://communityblog.fedoraproject.org/wp-content/uploads/2024/01/CPE_2023-scaled.jpg</figcaption></figure>

About

The Community Platform Engineering Team is a Red Hat team that works exclusively on community projects. Its members are part of the Fedora Infrastructure, Fedora Release Engineering and CentOS Infrastructure teams. This team works on initiatives, which are projects with a larger scope related to community work that needs to be done. It also investigates possible initiatives with the ARC (The Advance Reconnaissance Crew), which is formed from a subset of the Infrastructure & Release Engineering sub-team members, depending on the initiative being investigated.

Issue trackers

Initiatives

PDC Retirement

PDC is the Product Definition Center, running at: https://pdc.fedoraproject.org/.

However, this application, which was developed internally, is no longer maintained. The codebase has been “orphaned” for a few years now and we need to find a solution for it.

We are taking a critical look at what we store in there, determining what is really needed, and then finding a solution for its replacement.

Status: In Progress

Issue trackers

Documentation

Application URLs

Matrix Native Zodbot

With ongoing stability issues with the Matrix <-> IRC bridge and many contributors switching over to Matrix, Zodbot has become increasingly unreliable. The bridge is currently shut off completely. This initiative aims to provide a future-proof solution and allow us to conduct meetings without wasting time troubleshooting the Matrix <-> IRC bridge and Zodbot.

Status: In Progress

Issue trackers

Documentation

FMN Replacement

FMN is a project that allows people in our community to get notified when messages that interest them fire on the message-bus, making the message-bus more useful to people that are not directly developing or troubleshooting applications running in our infra.

The previous solution had plenty of tech debt, which caused delays between an event happening and the subscriber being notified, so this initiative rewrote the service from scratch, and it is now live! Users are encouraged to migrate their rules to the new service, and notifications can now be delivered to email, IRC and Matrix.

Status: Done

Issue trackers

Documentation

Application URLs

DNF-Countme Update

The DNF Mirrors Countme scripts are used to gather statistics about downloads of Fedora. The purpose of this initiative is to optimize the current solution by adding more comprehensive testing, removing unnecessary code, and reducing the storage consumed by the data.

Status: Done

Issue trackers

Documentation

ARC Investigations

Investigate moving registry.fp.o to quay.io

Traditionally, registry.fedoraproject.org was needed because quay.io did not support multi-arch images, which it now does. The purpose of this ticket was to carry out some investigation work to confirm that the above is true, as well as to find any other potential blockers to the move.

Status: Done

Documentation

Spam fighting

We had plenty of spam on pagure.io this year. To fight it more effectively, the ARC team tried a few different approaches to recognizing and deleting spam. It is now much easier to delete a spam user along with all the spam it created.

Status: Done

ARC investigation/planning for FCAS

To gain a quantitative understanding of how contributor activity has changed over the years, and to support the Fedora Project Strategy 2028 guiding star of doubling the number of weekly active contributors, it is important to have a service that tracks contributor statistics. This measurement helps make the strategy goal meaningful, and assists the Fedora Council and related bodies in understanding how far they have progressed and in identifying the particular problems that stand in the way of realizing this objective.

Status: Done

Documentation

Badges backend for new Service

Fedora Badges is a service that grants virtual accolades for milestones and completing tasks within the Fedora Project community. For example, a community member may collect badges for testing package updates on Bodhi when they test 1, 5, 10, 20, 40, 80, 125, 250, 500 and 1000 updates.

Status: Done

Documentation

Pagure to GitLab importer

With Fedora and CentOS now having an official namespace on GitLab, multiple projects want to migrate their repositories from Pagure to GitLab. This initiative aims to provide an easy way to migrate those projects.

Status: Done

Documentation

DNF-countme

The purpose of this work was to investigate the current solution and its bottlenecks, in order to identify what needs to be done to solve the following problems:

  • Storage bottleneck when creating the intermediate database file
  • Operations efficiency for the infrastructure team

Status: Done

Documentation

Dist-Git decoupling & ecosystem mapping

The objective of the potential initiative is to move repository contents (including but not limited to source code, Packit configurations, and RPM specfiles) from Pagure Dist-Git to another platform and confirm that the associated tooling and services (including but not limited to FMN, Datanommer, COPR, Toddlers, CI, Monitor-Gating, Packit, Bodhi and Fedpkg) work well with the newer platform. The investigation aims to be as agnostic as possible regarding the destination platform, to help ideate a general solution for the compatibility of the associated tooling and services.

Status: Done

Documentation

Epilogue

If you got this far, thank you for reading. If you want to contact us, feel free to do so on Matrix.

As CPE members are part of Fedora Infrastructure, Fedora Release Engineering and CentOS Infrastructure, see also the Fedora Infra & Releng update and CentOS Infrastructure update.

The post 2023 Year in Review: Community Platform Engineering (CPE) appeared first on Fedora Community Blog.

Working with multi-line logs in syslog-ng

Posted by Peter Czanik on February 27, 2024 01:19 PM

Most log messages fit on a single line. However, Windows and some developer tools and services, like Tomcat, write multi-line log messages. These can come in various formats; for example, each new log message may start with a date in a specific format. You can use the multi-line-prefix() option of the syslog-ng file() source to send multi-line messages as single messages instead of line by line.
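
As a rough sketch (the log file path and the date regular expression are placeholders, and the exact option names should be checked against the syslog-ng documentation), such a file() source might look like this:

source s_multiline {
    file("/var/log/myapp/app.log"
        multi-line-mode(prefix-garbage)
        # lines that do not start with a date are appended to the previous message
        multi-line-prefix("^[0-9]{4}-[0-9]{2}-[0-9]{2}")
    );
};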

I must admit that I have never seen multi-line logs in production. I am not a developer and do not run Tomcat or Windows. However, I recently tested software on Windows that produced multi-line log messages.

You can read more at https://www.syslog-ng.com/community/b/blog/posts/working-with-multi-line-logs-in-syslog-ng

<figure><figcaption>

syslog-ng logo

</figcaption> </figure>

Kiwi TCMS 13.1.1

Posted by Kiwi TCMS on February 27, 2024 12:39 PM

We're happy to announce Kiwi TCMS version 13.1.1!

IMPORTANT: This is a hot-fix release for a bug introduced in v13.1!

Recommended upgrade path:

13.1 -> 13.1.1

You can explore everything at https://public.tenant.kiwitcms.org!

---

Upstream container images (x86_64):

kiwitcms/kiwi   latest  5574cf84d49e    696MB

IMPORTANT: version tagged and multi-arch container images are available only to subscribers!

Changes since Kiwi TCMS 13.1

Bug fixes

  • Downgrade node_modules/datatables.net-buttons from 3.0.0 to 2.4.3. Fixes Issue #3552

Refactoring

  • Use max() built-in function instead of an if block

Kiwi TCMS Enterprise v13.1.1-mt

  • Based on Kiwi TCMS v13.1.1

Private container images

quay.io/kiwitcms/version            13.1.1 (aarch64)        d9bdea3736ce    27 Feb 2024     707MB
quay.io/kiwitcms/version            13.1.1 (x86_64)         5574cf84d49e    27 Feb 2024     696MB
quay.io/kiwitcms/enterprise         13.1.1-mt (aarch64)     0c6e3d1d7a05    27 Feb 2024     1.06GB
quay.io/kiwitcms/enterprise         13.1.1-mt (x86_64)      fe2cb1e64b75    27 Feb 2024     1.04GB

IMPORTANT: version tagged, multi-arch and Enterprise container images are available only to subscribers!

How to upgrade

Backup first! Then follow the Upgrading instructions from our documentation.

Happy testing!

---

If you like what we're doing and how Kiwi TCMS supports various communities please help us grow!

Kiwi TCMS 13.1

Posted by Kiwi TCMS on February 26, 2024 02:45 PM

We're happy to announce Kiwi TCMS version 13.1!

IMPORTANT: This is a small release which contains several improvements, new settings and API methods, bug fixes and internal refactoring!

Recommended upgrade path:

13.0 -> 13.1

You can explore everything at https://public.tenant.kiwitcms.org!

---

Upstream container images (x86_64):

kiwitcms/kiwi   latest  b64472d820a2    698MB

IMPORTANT: version tagged and multi-arch container images are available only to subscribers!

Changes since Kiwi TCMS 13.0

Improvements

  • Update django from 4.2.9 to 4.2.10
  • Update django-simple-history from 3.4.0 to 3.5.0
  • Update mysqlclient from 2.2.1 to 2.2.4
  • Update psycopg from 3.1.17 to 3.1.18
  • Update tzdata from 2023.4 to 2024.1
  • Update uwsgi from 2.0.23 to 2.0.24
  • Update node_modules/datatables.net-buttons from 2.4.2 to 3.0.0
  • Add robots.txt file to tell various robots to stop probing Kiwi TCMS
  • Resolve the path /favicon.ico because some browsers still search for it
  • Send Referer: header for container HEALTHCHECK command in order to make NGINX logs more readable
  • Allow users to reset their email by asking them to confirm their new address. Fixes Issue #3211
  • Add support for custom email validators on the registration page
  • Move X-Frame-Options header definition into settings
  • Move X-Content-Type-Options header definition into settings
  • Enable anonymous analytics, see here

Settings

  • New settings ANONYMOUS_ANALYTICS and PLAUSIBLE_DOMAIN control anonymous analytics
  • New setting EMAIL_VALIDATORS for custom email validation during registration
  • Add the following settings in order to document them - CSRF_COOKIE_AGE, CSRF_COOKIE_HTTPONLY, SESSION_COOKIE_HTTPONLY, CSRF_COOKIE_SECURE and SESSION_COOKIE_SECURE. Most likely you don't need to change their values
  • Respect X_FRAME_OPTIONS setting, defaults to DENY
  • Respect SECURE_CONTENT_TYPE_NOSNIFF setting, defaults to nosniff
  • Configure SECURE_SSL_REDIRECT setting to True

API

  • New method TestExecution.remove() which should be used in favor of TestRun.remove_case()

Bug fixes

  • Fix a bug where non-distinct values made it into generated property matrix
  • On TestRun page allow removal of individual parameterized TestExecution(s). Closes Pull #3282

Refactoring and testing

  • Update codecov/codecov-action from 3 to 4
  • Update node_modules/webpack from 5.89.0 to 5.90.3
  • Update runner image for CircleCI
  • Fix failure in test_utf8_uploads on CircleCI
  • Several improvements around performance benchmark tests
  • Refactor RegistrationForm.clean_email() using field validator function
  • Add tests for test matrix generation functionality

Kiwi TCMS Enterprise v13.1-mt

  • Based on Kiwi TCMS v13.1

  • Replace NGINX with OpenResty with built-in support for Lua scripting

  • Implement request limits configurable via environment variables

  • Initial integration with Let's Encrypt. Closes Issue #253

    WARNINGS:

    • true wildcard certificates are only possible via certbot's DNS plugins while current integration uses --webroot
    • you need to bind-mount /etc/letsencrypt/ and /Kiwi/ssl/ inside the container if you want the Let's Encrypt certificates to persist a restart
  • Replace raven with sentry-sdk

  • Override HEALTHCHECK command

  • Add more tests for container and http functionality

Private container images

quay.io/kiwitcms/version            13.1 (aarch64)          a611a00ee2bc    26 Feb 2024     709MB
quay.io/kiwitcms/version            13.1 (x86_64)           b64472d820a2    26 Feb 2024     698MB
quay.io/kiwitcms/enterprise         13.1-mt (aarch64)       76ef5773b488    26 Feb 2024     1.07GB
quay.io/kiwitcms/enterprise         13.1-mt (x86_64)        9781119c2348    26 Feb 2024     1.04GB

IMPORTANT: version tagged, multi-arch and Enterprise container images are available only to subscribers!

SaaS changes since v13.0

Applies to any digital property under *.tenant.kiwitcms.org!

  • Newly registered accounts are no longer possible using @yahoo email addresses
  • Anonymous analytics has been enabled, see here

How to upgrade

Backup first! Then follow the Upgrading instructions from our documentation.

Happy testing!

---

If you like what we're doing and how Kiwi TCMS supports various communities please help us grow!

Next Open NeuroFedora meeting: 26 February 1300 UTC

Posted by The NeuroFedora Blog on February 26, 2024 09:42 AM
Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 26 February at 1300 UTC. The meeting is public and open for everyone to attend. You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance). Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 2024-02-26'

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

Episode 417 – Linux Kernel security with Greg K-H

Posted by Josh Bressers on February 26, 2024 12:10 AM

Josh and Kurt talk to GregKH about Linux Kernel security. We mostly focus on the topic of vulnerabilities in the Linux Kernel, and what being a CNA will mean for the future of Linux Kernel security vulnerabilities. The future of Linux Kernel security vulnerabilities is going to be very interesting.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_417_Linux_Kernel_security_with_Greg_K-H.mp3

Show Notes

Week 8 in Packit

Posted by Weekly status of Packit Team on February 26, 2024 12:00 AM

Week 8 (February 20th – February 26th)

  • Packit now checks the version to propose against the version in the specfile and doesn't create downgrade PRs. (packit#2239)

django-ca, HSM and PoC

Posted by Kushal Das on February 25, 2024 09:25 AM

django-ca is a feature-rich certificate authority written in Python, using the Django framework. The project has existed for a long time and has great documentation and code comments all around. As I was looking around for possible CAs that can be used in multiple projects at work, django-ca seemed to be a good base fit. Though it still has a few missing parts (which are important for us), for example HSM support and Certificate Management over CMS.

I started looking into the codebase of django-ca more, and meanwhile also started cleaning up (along with Magnus Svensson) another library written at work for HSM support. I also started having conversations with Mathias (who is the author of django-ca) about this feature.

Thanks to the amazing design by the Python Cryptography team, I could just add several private key implementations in our library, which in turn can be used as a normal private key.

I worked on a proof of concept (PoC) branch, while also getting a lot of the tests working.

===== 107 failed, 1654 passed, 32 skipped, 274 errors in 286.03s (0:04:46) =====

Meanwhile Mathias also started writing a separate feature branch where he is moving the key operations so they are encapsulated inside backends, and different backends can be implemented to deal with HSMs or normal file-based storage. He then chatted with me on Signal for over 2 hours, explaining the code and design of the branch he is working on. In the same call he also taught me many other Django/typing things which I never knew before. His backend-based approach makes my original intention of adding HSM support very easy. But it also means he has to modify the codebase (and the thousands of test cases) first.

I am writing this blog post also to remind folks that not every piece of code needs to go to production (or even be merged). I worked on a PoC that validates the idea. And then we have a better and completely different design. It is perfectly okay to work hard on a PoC and later use a different approach.

As some friends asked on Mastodon, I will do a separate post about the cleanup of the other library.

Data Recovery with Open-Source Tools (part 1)

Posted by Steven Pritchard on February 24, 2024 06:25 PM

This is material from a class I taught a long time ago.  Some of it may still be useful.  🙂

The original copyright notice:

Copyright © 2009-2010 Steven Pritchard / K&S Pritchard Enterprises, Inc.

This work is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/us/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


This is part 1 of a multi-part series.

Identifying drives

An easy way to get a list of drives attached to a system is to run fdisk -l.  The output will look something like this:


# fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xcab10bee

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        8673    69665841    7  HPFS/NTFS
/dev/sda2            8675        9729     8474287+   c  W95 FAT32 (LBA)


In many cases, you'll see a lot of (generally) uninteresting devices that are named /dev/dm-n.  These are devices created by device mapper for everything from software RAID to LVM logical volumes.  If you are primarily interested in the physical drives attached to a system, you can suppress the extra output of fdisk -l with a little bit of sed.  Try the following:


fdisk -l 2>&1 | sed '/\/dev\/dm-/,/^$/d' | uniq


Whole devices generally show up as /dev/sdx (/dev/sda, /dev/sdb, etc.) or /dev/hdx (/dev/hda, /dev/hdb, etc.).  Partitions on the individual devices show up as /dev/sdxn (/dev/sda1, /dev/sda2, etc.), or, in the case of longer device names, the name of the device with pn appended (an example might be /dev/mapper/loop0p1).

Hardware

PATA/SATA

The vast majority of hard drives currently in use connect to a computer using either an IDE (or Parallel ATA) interface or a SATA (Serial ATA) interface.  For the most part, SATA is just IDE with a different connector, but when SATA came out, the old Linux IDE driver had accumulated enough cruft that a new SATA driver (libata) was developed to support SATA controller chipsets.  Later, the libata driver had support for most IDE controllers added, obsoleting the old IDE driver.


There are some differences in the two drivers, and often those differences directly impact data recovery.  One difference is device naming.  The old IDE driver named devices /dev/hdx, where x is determined by the position of the drive.


/dev/hda    Master device, primary controller
/dev/hdb    Slave device, primary controller
/dev/hdc    Master device, secondary controller
/dev/hdd    Slave device, secondary controller


And so on.


Unlike the IDE driver, the libata driver uses what was historically SCSI device naming, /dev/sdx, where x starts at "a" and increments upwards as devices are detected, which means that device names are more-or-less random, and won't be consistent across reboots.


The other major difference between the old IDE driver and the libata driver that affects data recovery is how the drivers handle DMA (direct memory access).  The ATA specification allows for various PIO (Programmed I/O) and DMA modes.  Both the old IDE driver and the libata driver will determine the best mode, in most cases choosing a DMA mode initially, and falling back to a PIO mode in error conditions.  The old IDE driver would also let you manually toggle DMA off and on for any device using the command hdparm.


hdparm -d /dev/hdx     Query DMA on/off state for /dev/hdx
hdparm -d0 /dev/hdx    Disable DMA on /dev/hdx
hdparm -d1 /dev/hdx    Enable DMA on /dev/hdx


The libata driver currently lacks the ability to toggle DMA on a running system, but it can be turned off for all hard drives with the kernel command line option libata.dma=6, or for all devices (including optical drives) with libata.dma=0.  On a running system, the value of libata.dma can be found in /sys/module/libata/parameters/dma.  (The full list of numeric values for this option can be found in http://www.kernel.org/doc/Documentation/kernel-parameters.txt.)  There does not appear to be a way to toggle DMA per device with the libata driver.
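
For example, checking the current policy and using the boot-time options described above could look like this (the numeric values follow kernel-parameters.txt):

# check the current libata DMA policy on a running system
cat /sys/module/libata/parameters/dma
# turn DMA off for hard drives only by booting with:  libata.dma=6
# turn DMA off for all devices (including optical drives) with:  libata.dma=0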


There are several reasons why you might want to toggle DMA on or off for a drive.  In some cases, failing drives simply won't work unless DMA is disabled, or even in some rare cases might not work unless DMA is enabled. In some cases the computer might have issues when reading from a failing drive with DMA enabled.  (The libata driver usually handles these situations fairly well.  The old IDE driver only began to handle these situations well in recent years.)


In addition to those reasons, PIO mode forces a drive to a maximum speed of 25MB/s (PIO Mode 6, others are even slower), while DMA modes can go up to 133MB/s.  Some drives appear to work better at these lower speeds.

SCSI

While SCSI drives and controllers are less common than they once were, all current hard drive controller interfaces now use the kernel SCSI device layers for device management and such.  For example, all devices that use the SCSI layer will show up in /proc/scsi/scsi.


# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: TSSTcorp Model: CD/DVDW TS-L632D Rev: AS05
  Type:   CD-ROM                           ANSI  SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST9160821A       Rev: 3.AL
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD10EACS-00Z Rev: 01.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05


In most cases, it is safe to remove a device that isn't currently mounted, but to be absolutely sure it is safe, you can also explicitly tell the kernel to disable a device by writing to /proc/scsi/scsi.  For example, to remove the third device (the Western Digital drive in this example), you could do the following:


echo scsi remove-single-device 3 0 0 0 > /proc/scsi/scsi

Note that the four numbers correspond to the controller, channel, ID, and LUN in the example.


In cases where hot-added devices don't automatically show up, there is also a corresponding add-single-device command.
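
For example, re-adding the device removed above (using the same controller, channel, ID, and LUN numbering) would look like this:

echo scsi add-single-device 3 0 0 0 > /proc/scsi/scsi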

When recovering data from SCSI (and SCSI-like drives such as SAS), there are no special tricks like DMA.

USB, etc.

The Linux USB drivers are rather resilient in the face of errors, so no special consideration needs to be given when recovering data from thumb drives and other flash memory (except that these devices tend to work or not, and, of course, dead shorts across USB ports are a Bad Thing).  USB-to-ATA bridge devices are a different matter entirely though.  They tend to lock up hard or otherwise behave badly when they hit errors on a failing drive.  Generally speaking, they should be avoided for failing drives, but drives that are OK other than a trashed filesystem or partition table should be completely fine on a USB-to-ATA bridge device.

To be continued in part 2.

Back on the market

Posted by Ben Cotton on February 23, 2024 09:48 PM

Nearly 10 months to the day since the last time this happened, I was informed yesterday that my position has been eliminated. As before, it’s unrelated to my performance, but the end result is the same: I am looking for a new job.

So what am I looking for? The ideal role would involve leadership in open source strategy or other high-level work. I’m excited by the opportunities to connect open source development to business goals in a way that makes it a mutually-beneficial relationship between company and community. In addition to my open source work, I have experience in program management, marketing, HPC, systems administration, and meteorology. You know, in case you’re thinking about clouds. My full resume (also in PDF) is available on my website.

If you have something that you think might be a good mutual fit, let me know. In the meantime, you can buy Program Management for Open Source Projects for all of your friends and enemies. I’m also available to give talks to communities (for free) and companies (ask about my reasonable prices!) on any subject where I have expertise. My website has a list of talks I’ve given, with video when available.

The post Back on the market appeared first on Blog Fiasco.

Untitled Post

Posted by Zach Oglesby on February 23, 2024 02:14 PM

I track the books I read on my blog and have loose goals of how many books I want to read a year, but I have been rereading the Stormlight Archive books this year in preparation for the release of book 5. Questioning if I should count them or not.

Installing and using the TFSwitch tool

Posted by Fedora fans on February 23, 2024 01:57 PM
<figure><figcaption>tfswitch logo</figcaption></figure>

If you use Terraform and have several projects, each requiring a different Terraform version, installing and removing different Terraform versions on your system just to work with those projects is neither simple nor practical. The solution is to use tfswitch. The tfswitch command-line tool lets you switch between different versions of Terraform. If a particular version of Terraform is not installed, tfswitch lets you download the version you need. Installation is quick and easy. Once installed, simply run the tfswitch command, pick the version you need from the drop-down menu, and start using Terraform.
Alternatively, just change into your Terraform project directory and run the tfswitch command. tfswitch reads provider.tf, version.tf, or any other file in which you have pinned the project's Terraform version, and automatically downloads and installs that version of Terraform.

 

Installing TFSwitch on Linux

 

To install tfswitch on Linux, simply run the following command as the root user:

# curl -L https://raw.githubusercontent.com/warrensbox/terraform-switcher/release/install.sh | bash
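
After installation, a typical workflow looks like this (the project directory name is only an example):

cd ~/projects/my-terraform-project
tfswitch            # picks up the version pinned in the project files, or shows a selection menu
terraform version   # confirm the active Terraform version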

 

 

The post Installing and using the TFSwitch tool first appeared on Fedora Fans.

Anonymous analytics via Plausible.io

Posted by Kiwi TCMS on February 23, 2024 10:15 AM

Since the very beginning when we launched Kiwi TCMS our team has been struggling to understand how many people use it, how active these users are, which pages & functionality they spend the most time with, how many installations of Kiwi TCMS are out there in the wild and which exactly versions are the most used ones!

We reached over 2 million downloads without any analytics inside the application because we do not want to intrude on our users' privacy, and this has not been easy! Inspired by a recent presentation, we learned about Plausible Analytics, a GDPR, CCPA and cookie law compliant, open source site analytics tool, and decided to use it! You can check out how it works here.

What is changing

Starting with Kiwi TCMS v13.1 anonymous analytics will be enabled for statistical purposes. Our goal is to track overall usage patterns inside the Kiwi TCMS application(s), not to track individual visitors. All the data is in aggregate only. No personal data is sent to Plausible.

Anonymous analytics are enabled on this website, inside our official container images and for all tenants provisioned under https://*.tenant.kiwitcms.org. Running containers will report back to Plausible Analytics every 5 days to send the version number of Kiwi TCMS, nothing else! Here's a preview of what it looks like:

"preview of versions report"

You can examine our source code here and here.

Staying true to our open source nature we've made the kiwitcms.org stats dashboard publicly available immediately! In several months we are going to carefully examine the stats collected by the kiwitcms-container dashboard and consider making them publicly available as well! Most likely we will!

Who uses Plausible

A number of [open source] organizations have publicly endorsed the use of Plausible Analytics:

You can also inspect this huge list of Websites using Plausible Analytics compiled by a 3rd party vendor!

How can I opt-out

  • Leave everything as-is and help us better understand usage of Kiwi TCMS
  • Update the setting PLAUSIBLE_DOMAIN and collect your own stats with Plausible
  • Update the setting ANONYMOUS_ANALYTICS to False and disable all stats
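
As an illustration, and assuming you override these in whatever settings file your Kiwi TCMS deployment uses, the two settings mentioned above could be set like this (the domain value is only an example):

# disable all anonymous analytics
ANONYMOUS_ANALYTICS = False
# or keep analytics but send them to your own Plausible site
PLAUSIBLE_DOMAIN = "tcms.example.com"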

IMPORTANT: Private Tenant customers and demo instance users cannot opt-out! Given that they are consuming digital resources hosted by our own team they have already shared more information with us than what gets sent to Plausible! Note that we do not track individual users, analyze or sell your information to 3rd parties even across our own digital properties!

Happy Testing!


If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Infra & RelEng Update – Week 8 2024

Posted by Fedora Community Blog on February 23, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for CPE (Community Platform Engineering) Team as the CPE initiatives are mostly tied to I&R work.

We provide you with both an infographic and a text version of the weekly report. If you want to quickly look at what we did, just look at the infographic. If you are interested in more in-depth details, continue reading.

Week: 19 February – 23 February 2024

Read more: Infra & RelEng Update – Week 8 2024

<figure><figcaption>I&R infographic: https://communityblog.fedoraproject.org/wp-content/uploads/2024/02/Weekly-Report_week8.jpg</figcaption></figure>

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

Updates

Matrix Native Zodbot

With ongoing stability issues with the Matrix <-> IRC bridge and many contributors switching over to Matrix, zodbot has become increasingly unreliable. The bridge is currently shut off completely. This initiative will provide a future-proof solution and allow us to conduct meetings without wasting time troubleshooting the bridge and zodbot.

Updates

  • This initiative is now finished, as Zodbot has been running on Matrix for a few months and most of the initial issues have been resolved

If you have any questions or feedback, please respond to this report or contact us on -cpe channel on Matrix.

The post Infra & RelEng Update – Week 8 2024 appeared first on Fedora Community Blog.

Fedora and older XFS filesystem format V4

Posted by Justin M. Forbes on February 22, 2024 01:40 PM

Upstream deprecated the V4 format for XFS with commit b96cb835. Next year it will default to unsupported (though it can still be enabled with a kernel config option for a while). As such, Fedora 40 will be the last release that supports these older XFS filesystems. Once Fedora 40 is EOL around June of 2025, the default Fedora kernel will no longer be able to mount them.
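
One way to check whether an existing filesystem still uses the older format is to look at the crc flag reported by xfs_info (this assumes xfsprogs is installed; crc=0 indicates the old V4 format, crc=1 the newer V5 format):

xfs_info /home | grep crc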

Hello Nushell!

Posted by Michel Lind on February 21, 2024 12:00 AM
After about two years of on-and-off work, I’m happy to report that Nushell has finally landed in Fedora and EPEL 9, and can be installed simply using sudo dnf --refresh install nu on Fedora and your favorite Enterprise Linux distribution (I’m partial to CentOS Stream myself, but RHEL, AlmaLinux, etc. also work). For those not familiar with Nushell yet, think of it as a cross-platform PowerShell written in Rust: it lets you manipulate pipelines of structured data, the way you might be using jc and jq with a regular shell.
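
As a small taste of that structured-pipeline idea, here is an illustrative one-liner (column names follow Nushell's built-in ls command):

ls | where size > 1mb | sort-by modified | reverse | first 5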

Anatomy of a Jam Performance

Posted by Adam Young on February 20, 2024 10:53 PM

My band, The Standard Deviants, had a rehearsal this weekend. As usual, I tried to record some of it. As so often happens, our best performance was our warm-up tune. This time, we performed a tune called “The SMF Blues” by my good friend Paul Campagna.

<figure><figcaption>Video: SMF Blues (https://www.youtube.com/embed/ZWs9G1dTfJA)</figcaption></figure>

What is going on here? Quite a bit. People talk about Jazz as improvisation, but that does not mean that “Everything” is made up on the spot. It takes a lot of preparation in order for it to be spontaneous. Here are some of the things that make it possible to “just jam.”

Blues Funk

While no one in the band besides me knew the tune before this session, the format of the tune is a 12 bar blues. This is probably the most common format for a jam tune. A lot has been written on the format elsewhere, so I won’t detail here. All of the members of this group know a 12 bar blues, and can jump in to one after hearing the main melody once or twice.

The beat is a rock/funk beat set by Dave, our Drummer. This is a beat that we all know. This is also a beat that Dave has probably put 1000 hours into playing and mastering in different scenarios. Steve, on Bass, has played a beat like this his whole career. We all know the feel and do not have to think about it.

This song has a really strong turn around on the last two bars of the form. It is high pitched, repeated, and lets everyone know where we are anchored in the song. It also tells people when a verse is over, and we reset.

Signals

The saxophone plays a lead role in this music, and I give directions to the members of the band. This is not to “boss” people around, but rather to reduce the number of options at any given point so that we as a unit know what to do. Since we can’t really talk in this scenario, the directions have to be simple. There are three main ways I give signals to the band.

The simplest is to step up and play. As a lead instrument, the saxophone communicates via sound to the rest of the band one of two things: either we are playing the head again, or I am taking a solo. The only person that really has to majorly change their behavior is Adam 12 on trombone. Either he plays the melody with me or he moves into a supporting role. The rest of the band just adjusts their energy accordingly. We play louder for a repetition of the head, and they back off a bit for a solo.

The second way I give signals to the band is by direct eye contact. All I have to do is look back at Dave in the middle of a verse to let him know that the next verse is his to solo. We have played together long enough that he knows what that means. I reinforce the message by stepping to the side and letting the focus shift to him.

The third way is to use hand gestures. As a sax player, I have to make these brief, as I need two hands to play many of the notes. However, there are alternative fingerings, so for short periods, I can play with just my left hand, and use my right to signal. The most obvious signal here is the 4 finger hand gesture I gave to Adam 12 that we are going to trade 4s. This means that each of us play for four measures and then switch. If I gave this signal to all of the band, it would mean that we would be trading 4s with the drums, which we do on longer form songs. Another variation of this is 2s, which is just a short spot for each person.

One of the wonderful things about playing with this band is how even “mistakes” can work out, such as when I tried to end the tune and no one caught the signal… and it became a bass solo that ended up perfectly placed. Steve realized we had all quieted down, and that is an indication for the bass to step up, and he did.

Practice Practice Practice

Everyone here knows their instrument. We have all been playing for many decades, and know what to do. But the horn is a harsh mistress, and she demands attention. As someone once said:

Skip a day and you know. Skip two days and your friends know. Skip three days and everyone knows.

Musical Wisdom from the ages

Adam 12 is the newest member of the band. It had been several years since he played regularly before he joined us. His first jam sessions were fairly short. We have since heard the results of the hard work that he has put in.

I try to play every day. It competes with many other responsibilities and activities in modern life. But my day seems less complete if I do not at least blow a few notes through the horn.

Listening

Music is timing. Our ears are super sensitive to changes in timing, whether at the micro level, translating to differences in pitch, or at the macro level, with changes in tempo… which is just another word for time. Dave is the master of listening. He catches on to a pattern one of us is playing and he works it into his drumming constantly. Steve is the backbone of the band. Listening to the bass line tells us what we need to know about speed and location in the song. The more we play together, the more we pick up on each other’s cues through playing. The end effect is that we are jointly contributing to an event, an experience, a performance.

Debugging an odd inability to stream video

Posted by Matthew Garrett on February 19, 2024 10:30 PM
We have a cabin out in the forest, and when I say "out in the forest" I mean "in a national forest subject to regulation by the US Forest Service" which means there's an extremely thick book describing the things we're allowed to do and (somewhat longer) not allowed to do. It's also down in the bottom of a valley surrounded by tall trees (the whole "forest" bit). There used to be AT&T copper but all that infrastructure burned down in a big fire back in 2021 and AT&T no longer supply new copper links, and Starlink isn't viable because of the whole "bottom of a valley surrounded by tall trees" thing along with regulations that prohibit us from putting up a big pole with a dish on top. Thankfully there's LTE towers nearby, so I'm simply using cellular data. Unfortunately my provider rate limits connections to video streaming services in order to push them down to roughly SD resolution. The easy workaround is just to VPN back to somewhere else, which in my case is just a Wireguard link back to San Francisco.

This worked perfectly for most things, but some streaming services simply wouldn't work at all. Attempting to load the video would just spin forever. Running tcpdump at the local end of the VPN endpoint showed a connection being established, some packets being exchanged, and then… nothing. The remote service appeared to just stop sending packets. Tcpdumping the remote end of the VPN showed the same thing. It wasn't until I looked at the traffic on the VPN endpoint's external interface that things began to become clear.

This probably needs some background. Most network infrastructure has a maximum allowable packet size, which is referred to as the Maximum Transmission Unit or MTU. For ethernet this defaults to 1500 bytes, and these days most links are able to handle packets of at least this size, so it's pretty typical to just assume that you'll be able to send a 1500 byte packet. But what's important to remember is that that doesn't mean you have 1500 bytes of packet payload - that 1500 bytes includes whatever protocol level headers are on there. For TCP/IP you're typically looking at spending around 40 bytes on the headers, leaving somewhere around 1460 bytes of usable payload. And if you're using a VPN, things get annoying. In this case the original packet becomes the payload of a new packet, which means it needs another set of TCP (or UDP) and IP headers, and probably also some VPN header. This still all needs to fit inside the MTU of the link the VPN packet is being sent over, so if the MTU of that is 1500, the effective MTU of the VPN interface has to be lower. For Wireguard, this works out to an effective MTU of 1420 bytes. That means simply sending a 1500 byte packet over a Wireguard (or any other VPN) link won't work - adding the additional headers gives you a total packet size of over 1500 bytes, and that won't fit into the underlying link's MTU of 1500.

And yet, things work. But how? Faced with a packet that's too big to fit into a link, there are two choices - break the packet up into multiple smaller packets ("fragmentation") or tell whoever's sending the packet to send smaller packets. Fragmentation seems like the obvious answer, so I'd encourage you to read Valerie Aurora's article on how fragmentation is more complicated than you think. tl;dr - if you can avoid fragmentation then you're going to have a better life. You can explicitly indicate that you don't want your packets to be fragmented by setting the Don't Fragment bit in your IP header, and then when your packet hits a link where your packet exceeds the link MTU it'll send back a packet telling the remote that it's too big, what the actual MTU is, and the remote will resend a smaller packet. This avoids all the hassle of handling fragments in exchange for the cost of a retransmit the first time the MTU is exceeded. It also typically works these days, which wasn't always the case - people had a nasty habit of dropping the ICMP packets telling the remote that the packet was too big, which broke everything.

What I saw when I tcpdumped on the remote VPN endpoint's external interface was that the connection was getting established, and then a 1500 byte packet would arrive (this is kind of the behaviour you'd expect for video - the connection handshaking involves a bunch of relatively small packets, and then once you start sending the video stream itself you start sending packets that are as large as possible in order to minimise overhead). This 1500 byte packet wouldn't fit down the Wireguard link, so the endpoint sent back an ICMP packet to the remote telling it to send smaller packets. The remote should then have sent a new, smaller packet - instead, about a second after sending the first 1500 byte packet, it sent that same 1500 byte packet. This is consistent with it ignoring the ICMP notification and just behaving as if the packet had been dropped.

All the services that were failing were failing in identical ways, and all were using Fastly as their CDN. I complained about this on social media and then somehow ended up in contact with the engineering team responsible for this sort of thing - I sent them a packet dump of the failure, they were able to reproduce it, and it got fixed. Hurray!

(Between me identifying the problem and it getting fixed I was able to work around it. The TCP header includes a Maximum Segment Size (MSS) field, which indicates the maximum size of the payload for this connection. iptables allows you to rewrite this, so on the VPN endpoint I simply rewrote the MSS to be small enough that the packets would fit inside the Wireguard MTU. This isn't a complete fix since it's done at the TCP level rather than the IP level - so any large UDP packets would still end up breaking)
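
For reference, that kind of MSS clamping is typically done with an iptables rule on the VPN endpoint along these lines (the interface name and MSS value here are illustrative):

iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1380
# or clamp automatically to the discovered path MTU instead of a fixed value:
# iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu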

I've no idea what the underlying issue was, and at the client end the failure was entirely opaque: the remote simply stopped sending me packets. The only reason I was able to debug this at all was because I controlled the other end of the VPN as well, and even then I wouldn't have been able to do anything about it other than being in the fortuitous situation of someone able to do something about it seeing my post. How many people go through their lives dealing with things just being broken and having no idea why, and how do we fix that?

(Edit: thanks to this comment, it sounds like the underlying issue was a kernel bug that Fastly developed a fix for - under certain configurations, the kernel fails to associate the MTU update with the egress interface and so it continues sending overly large packets)

comment count unavailable comments

Week 7 in Packit

Posted by Weekly status of Packit Team on February 19, 2024 12:00 AM

Week 7 (February 13th – February 19th)

  • Packit now supports the special value ignore for trigger in the jobs configuration, which indicates that the job should not be executed at all. This can be useful for templates or temporarily disabled jobs; a minimal sketch is shown after this list. (packit#2234)
  • We have fixed the caching of data for the usage API endpoint. (packit-service#2350)
  • We have fixed an issue that caused loading the same data multiple times on the dashboard within the project views. (packit-service#2349)
  • We have also fixed crashing of dashboard's Usage page in case of unsuccessful queries. (dashboard#378)
  • We have fixed parsing of resolved Bugzillas in comments with multiple arguments specified, e.g. /packit pull-from-upstream --with-pr-config --resolved-bugs rhbz#123. (packit-service#2346)
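
A minimal sketch of how that could look in a packit configuration (the job type and targets here are only illustrative):

jobs:
  - job: copr_build
    # the job stays defined in the configuration but is never executed
    trigger: ignore
    targets:
      - fedora-rawhide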

Episode 416 – Thomas Depierre on open source in Europe

Posted by Josh Bressers on February 19, 2024 12:00 AM

Josh and Kurt talk to Thomas Depierre about some of the European efforts to secure software. We touch on the CRA, MDA, FOSDEM, and more. As expected Thomas drops a huge amount of knowledge on what’s happening in open source. We close the show with a lot of ideas around how to move the needle for open source. It’s not easy, but it is possible.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_416_Thomas_Depierre_on_open_source_in_Europe.mp3

Show Notes

New Samsung S90C OLED – Green Screen Of Death

Posted by Jon Chiappetta on February 18, 2024 11:56 PM

So last year, for my birthday, I purchased a new Sony PS5 to update my gaming console after so many years, and it immediately failed on me by always shutting down during game play. This year, for my birthday, I decided to try updating my 8-year-old TV with a new Samsung OLED for the first time, and as my luck would have it, I was presented with the “Green Screen Of Death”. The TV only just arrived a few days ago and is now dead, so I have to go through the process of trying to contact a certified Samsung repair person to see if it can even be fixed. I can’t tell if it’s just my bad luck lately or if quality control at these companies has gone downhill, but it’s getting harder and harder to find good quality alternatives! 😦

<figure><figcaption>Video: New Samsung S90C OLED - Green Screen Of Death (https://www.youtube.com/embed/vjmcboqltgA)</figcaption></figure>

<figure><figcaption>Video: https://www.youtube.com/embed/cyWlACuhqNg</figcaption></figure>

The High Cost of Flying

Posted by Arnulfo Reyes on February 18, 2024 05:53 PM

The incident with Alaska Airlines flight 1282, in which an emergency exit door detached from a Boeing 737-MAX9 in mid-flight, has reignited the discussion about Boeing's decline as an aircraft manufacturer. That airplane belongs to the same 737 MAX family that suffered two fatal crashes in 2018 and 2019.

<figure><figcaption>https://www.airlinereporter.com/wp-content/uploads/2013/07/First-737-Under-Construction.jpg</figcaption></figure>

Most sources point to the same triggering event: the 1997 merger with McDonnell-Douglas, which transformed Boeing from an engineering-driven company focused on building the best possible airplanes into one overwhelmingly focused on finances and its stock price.

This description seems accurate, but it leaves out a large part of the picture: the brutal structure of the commercial aircraft industry, which pushes companies like Boeing to create things like the 737 MAX (an update of a 50-year-old airplane) instead of creating a new model from scratch. By breaking down how the commercial aircraft industry works, we can better understand Boeing's behavior.

The words of George Ball, managing director of Lehman Brothers in 1982, and Jean Pierson, former CEO of Airbus, resonate strongly in this context: "There is no historical precedent or current parallel for the magnitude of the financial exposure risk assumed by an American airframe company" and "You can't win, you can't break even, and you can't quit." These quotes underscore the complexity and the challenges inherent in the commercial aviation industry.

The risk of developing a new airplane

Commercial aircraft manufacturing is similar to any other manufacturing industry in some respects. A company develops a product and tries to sell it for a price high enough to cover development and production costs. If it succeeds and turns a profit, it keeps developing new products; if not, it shuts down. What sets the aviation industry apart is the scale at which these things happen.

Developing a commercial airplane is incredibly expensive. Budgets for new aircraft development programs run into the billions of dollars, and the inevitable overruns can push development costs to $20–30 billion or more.

This is not simply a case of bloated modern companies having forgotten how to do things efficiently (although there is some of that): developing a jet airliner has always been expensive. Boeing spent between $1.2 and $2 billion to develop the 747 in the late 1960s (~$10–20 billion in 2023 dollars), and other manufacturers of the era such as Lockheed and McDonnell-Douglas noted that their own development costs for new airplanes were similar.

The cost of developing a new commercial airplane can be a significant fraction of, if not larger than, the total value of the company. For example, Boeing spent $186 million developing its first jet airliner, the 707, in 1952, which was $36 million more than the company was worth.

When Boeing began development of the 747 in 1965, the company was valued at $375 million, less than a third of what it spent developing the 747. Most other programs are not that lopsided, but they still represent an enormous risk. The Boeing 777 is estimated to have cost $12–14 billion to develop at a time when Boeing was worth around $30 billion. And when Airbus launched the A380 program, it budgeted $10.7 billion, half the value of the company (and far less than what was ultimately spent). Aircraft manufacturers are often betting the company when they decide to develop a new jet model.

Spending billions of dollars on new product development is not unique to the aviation industry: a new car model will cost billions of dollars to develop, as will a new drug. But both cars and drugs can spread their development costs across millions of product sales. The commercial aircraft market, on the other hand, is much smaller; only around 1,000 large jets are sold each year. Aircraft manufacturers need to be able to recoup the billions spent on product development, factories, and tooling with only a few hundred sales.

This creates some difficulties for aircraft manufacturers. For one thing, it makes learning curves very important. Learning curves are the phenomenon whereby production costs (or some related measure, such as labor hours) tend to fall by a constant percentage for every cumulative doubling of production volume: going from 10 to 20 units produced yields the same percentage cost decrease as going from 10,000 to 20,000.
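The post describes the curve in words; written as a formula (an illustration of mine, not from the original article), the unit cost after n cumulative units is roughly

c(n) = c_1 \cdot n^{\log_2 b}

where b is the learning rate, e.g. b = 0.85 for an "85% curve" in which every doubling of cumulative output multiplies unit cost by 0.85. With c_1 = $100M, the 2nd unit then costs about $85M, the 4th about $72M, and the 500th about $23M.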

High-volume products spend most of their time on a relatively "flat" portion of the learning curve, where doublings are farther and farther apart. If you have produced 1,000,000 of something, making another 500 or 5,000 will make almost no difference in learning-curve terms. But if you have only made 50 of something, making another 500 makes a huge difference in the level of cost reduction you can achieve. So if you only plan to sell a few hundred of something, a relatively small number of sales will have a big impact on how efficiently you are producing and how profitable you are.

Commercial aircraft manufacturers depend on getting enough orders to push them far enough down the learning curve that they are making enough money per airplane to recover the cost of developing it. The first airplanes may be produced so inefficiently that they sell for less than they cost to make. The break-even point is typically in the neighborhood of 500 airplanes (and could be far more if a program runs well over budget); sell fewer, and the program will lose money.

The relatively small number of aircraft sales also creates intense pressure to accurately predict trends in air travel. Manufacturers need to skate to where the puck will be: to develop the kind of airplane that airlines will want for many years into the future.

Guessing wrong can be disastrous. Airbus lost enormous amounts of money when it misread the market for its huge A380 (it sold only 251 airplanes, far below what was needed to break even, and the last A380 rolled off the line in 2021). Airbus projected that continued growth in air travel would create demand for an even larger airplane that could move passengers economically between large hub airports. But in fact, international travel fragmented, and airlines increasingly fly direct routes between destinations using smaller, easier-to-fill airplanes like the Boeing 787. In the late 1960s, Lockheed and McDonnell-Douglas ruined each other by each developing a new airplane (the L-1011 and the DC-10, respectively) for what turned out to be a very small market: fewer than 700 airplanes were sold between them. Lockheed ended up abandoning the commercial aircraft market after losing $2.5 billion on the program (~$9 billion in 2023 dollars), and McDonnell-Douglas never recovered, eventually selling itself to Boeing in 1997 as its market share declined and it could not fund the development of new airplanes.

But guessing right comes with its own problems. If a program falls behind schedule, that can mean lost orders and a loss of confidence from airlines, and ultimately drive customers into the arms of a competitor. Boeing originally planned to introduce its 787 in 2008, but it slipped to 2011, which led customers to buy the Airbus A330 (Airbus boasted that it sold more A330s after Boeing launched the 787). Had Boeing delivered on time, its advantage over Airbus would have been enormous.

And having too many orders for a new airplane can be almost as bad as having too few. In the late 1960s, Douglas drowned in unexpectedly high demand for its DC-9: it could not meet its delivery deadlines, was forced to pay compensation to the affected airlines, and was nearly pushed into bankruptcy by cash-flow problems, which resulted in a merger with McDonnell Aircraft. Boeing had a similar struggle when trying to rapidly ramp up production of its revised 737 (called the 737 Next Generation, or NG) in the mid-1990s: it was eventually forced to temporarily halt production because of the chaos, resulting in late deliveries (and the associated penalties) and a net loss for 1997, the company's first since 1959. Boeing is estimated to have lost a billion dollars on the first 400 737NGs, even though they were derivatives of an airplane Boeing had been building since the 1960s.

Global events can rapidly shift air travel trends, completely changing the kinds of airplanes airlines want to buy. Airbus, for example, had little initial success selling its first model, the A300. In 1978, four years after its introduction, it had sold only 38 of them. But rising fuel prices driven by the second oil crisis, together with growing airline competition, created demand for a fuel-efficient, twin-engine wide-body airliner, and only the A300 fit the bill. By 1979, sales had climbed to more than 300. Similarly, airline deregulation in the United States forced airlines to be much more price-competitive and to focus on things like fuel efficiency and operating cost. This changed the calculus of which kinds of airplanes they were interested in buying, and it increased demand for airplanes like Boeing's 737.

Often, success comes down to luck as much as anything else.

Airbus got lucky when rising oil prices meant its A300 was suddenly in high demand with no competition. Boeing got lucky with its 737 in the 1960s, which entered service more than two years after the similar DC-9 and only succeeded partly because of Douglas's production delays. And Boeing got lucky again with the 747, which, like the A380, was an enormous airplane that few airlines actually needed. It only succeeded because Juan Trippe, the founder of Pan Am, bought them on a whim (Trippe liked having the newest airplanes and "saw no need for a market analysis"). Other airlines followed suit, unwilling to concede to Pan Am the marketing benefit of having the biggest airplanes (though the 747 became increasingly useful as international travel grew).

Aircraft manufacturers face the thankless task of trying to navigate this landscape while risking billions of dollars. But, of course, simply not developing new products is not an option either: as in any other industry, competitors try to secure their own advantage by pitching their products to a limited number of customers. Aircraft technology is constantly changing, and airlines (in brutal competition themselves) want all the latest technologies: better aerodynamics, lighter materials, bigger and more efficient engines, and so on, which will lower their operating costs and improve their passengers' experience. Losing even one order can be a major setback for an aircraft manufacturer, both because of the small number of customers overall and because sales tend to have momentum: a sale to an airline probably means more future sales to that airline (since there will be common-fleet efficiencies in things like shared maintenance and training), or sales to partner airlines, or sales to competitors who want to bet on the winning horse. Airlines are well aware that a few lost sales can put an aircraft manufacturer on dangerous ground, which makes it risky for an airline to hitch itself to a loser.

If an aircraft manufacturer can successfully navigate this landscape, the reward is a negligible profit: between 1970 and 2010, Boeing, the most successful commercial aircraft builder, averaged a bit over 5% annual profit. It is no wonder that the fierce, expensive competition and miserable margins have gradually driven competitors out of the space, leaving only Boeing and Airbus (and, if you are feeling generous, Bombardier and Embraer). Companies like de Havilland, Dassault, Lockheed, Douglas, Convair, and Glenn Martin have all been pushed out or forced to merge. Surveying the history of jet airliner manufacturing in 1982, John Newhouse estimated that of the 22 commercial jet airliners developed up to that point, only two, the Boeing 707 and the Boeing 727, were believed to have made money (though he noted the 747 might eventually make that list as well).

The upshot of these difficulties is that aircraft manufacturers think very carefully before developing a new airplane: the risks are large, the rewards small and uncertain. Often it is a much safer bet to simply develop a modification of an existing model, keeping the same basic airframe and adding more efficient engines or a tweaked wing shape, or stretching it to add more passenger capacity. Revising an existing model can cost just 10–20% of designing a new airplane from scratch, and it can deliver nearly as many benefits. A typical new airplane might be 20–30% more fuel efficient than existing designs, but Boeing was able to achieve a 15–16% improvement with the revised 737 MAX. And updating an existing model is also cheaper for the airlines: they don't have to retrain their pilots to fly the new airplane.

To see what this kind of calculus looks like in practice, let's take a look at the history of the Boeing 737, which has been revised and updated repeatedly since its first flight in 1967.

Evolution of the Boeing 737

Boeing first developed the 737 in the mid-1960s as a short-range, small-capacity airplane to round out its product line and keep Douglas from taking the entire low-end market with its DC-9. Initially it was not particularly successful, nor was it expected to be: Douglas had a two-year head start with the DC-9, and Boeing's earlier 727 was already serving much of that market, albeit with 3 engines rather than the 737's more efficient 2. In fact, the program was nearly canceled shortly after launch because of low initial sales. To minimize development cost and time, the 737 was designed to share as many parts as possible with the earlier 707 and 727.

The 737's initial performance was below expectations, so Boeing developed an "advanced" version with improved aerodynamics in 1970. But even with these improvements, Douglas's struggles to build the DC-9, and an order for a military version of the 737 (the T-43A trainer), sales were still slow. The airplane was being built at a rate of only two per month, and into 1973 the program remained on the verge of cancellation (during this period, Boeing nearly went bankrupt due to cost overruns on the 747 program and had to lay off 75 percent of its workforce). The 737 was only saved because it was finally selling for more than its production cost, though it was not expected to recover its development costs.

But sales began to pick up in 1973, and production had reached five airplanes per month by 1974. By 1978, against all odds, it had become the best-selling airliner in the world, a title it held from 1980 to 1985. Airline deregulation in the US had caused a shift in airline strategy: instead of connecting two cities directly with low-volume flights, airlines began connecting through hub airports, using smaller, cheaper-to-operate airplanes. The 737 fit the bill perfectly.

But Boeing's competitors did not stand still. Douglas launched an updated version of its DC-9, the Super 80, with an improved version of its Pratt and Whitney engine that made it quieter and more fuel efficient than the 737. To counter the threat, and to deal with increasingly strict noise regulations, Boeing responded with the "new generation" Boeing 737-300, which began development in 1981. This version of the 737 added passenger capacity, improved the aerodynamics, and had a new, more efficient high-bypass turbofan from CFMI (a joint venture between GE and the French company SNECMA).

Fitting such a large engine under the 737's wing was a challenge. The 737 had originally been designed with low ground clearance to accommodate "second-tier" airports with somewhat limited stair systems. Extending the landing gear to raise the airplane would have required relocating the wheel wells, which would have changed the airplane's structure enough to essentially make it a new airplane. Instead, the engine was squeezed into the available space, giving it its characteristic ellipsoid shape. This high-bypass engine gave the 737-300 an 18% improvement in fuel efficiency over previous-generation airplanes, and an 11% improvement over McDonnell-Douglas's Super 80, while still keeping it as similar as possible to the earlier 737.

As the 737-300 took shape, a new competitor was emerging. Following the success of its A300, Airbus began development of the smaller A320, a direct competitor to the 737, in 1984. The A320 incorporated many advanced technologies, such as fly-by-wire, which replaced the heavy mechanical or hydraulic linkages between the airplane's controls and its control surfaces with lighter electronic connections. By 1987, the A320 had already racked up 400 orders, including a large order from Northwest Airlines, a long-time Boeing customer. It was clearly going to be a fierce competitor.

<figure><figcaption>Concept art for the 7J7</figcaption></figure>

Some argue that Boeing could have (and should have) killed the A320 outright by announcing a new clean-sheet airplane. At the time, Boeing was working on a 737-sized airplane called the 7J7, which used an advanced "unducted fan" (UDF) engine from GE. In theory, the 7J7 would have been 60% more fuel efficient than existing airplanes, along with incorporating technologies like fly-by-wire. But the UDF engine had unresolved technical problems, such as high noise generation, and Boeing was worried about how long it would take to bring the 7J7 to market. Instead, Boeing developed another stretched version of its 737 (the 737-400), canceled the 7J7 project, and began developing an airplane to fill the gap between its 767 and 747, the 777.

But as the A320 continued to eat into the market and more long-time Boeing customers defected (like United in 1992), it was clear that a 737 replacement was needed. Many once again favored developing an all-new airplane (which Airbus believes would have been catastrophic for the A320), but Boeing was wary of new airplanes after the 777. Although that program ran on schedule and delivered an exceptional airplane, costs ballooned, up to $14 billion by some estimates ($28 billion in 2023 dollars) against a projected budget of $5 billion. Instead, Boeing launched the 737 "Next Generation" (NG), another update of the 737 airframe. The 737NG featured, among other things, a new wing design, a more efficient engine that cut fuel costs by 9% and maintenance costs by 15%, and added "winglets" to improve aerodynamics. The 737NG also reduced the part count by 33% compared to earlier versions, while still retaining enough similarity to require minimal pilot retraining and to stay within the FAA's "derivative" rules.

First delivered in December 1997, the 737NG became immensely popular, with the 737-800 version selling more than 5,000 airplanes over the following 20 years (although, as we have noted, ramping up production came with immense difficulties). This was not enough to bury the A320, however, which also continued to sell well. Some people at Airbus believe an all-new airplane would have been catastrophic for the A320 in the late 1990s.

By the early 2000s, the 737 and the A320 had become the most important products in Boeing's and Airbus's lineups, together accounting for 70% of the commercial aircraft market. Once again, Boeing began considering a 737 replacement and started a project, Yellowstone, to explore all-new replacements for the 737 and other Boeing airplanes. But the findings were not particularly encouraging: without a new advanced engine (which would not be ready until 2013 or 2014), fuel efficiency improvements would be 4% at most. And the technologies it would incorporate from the 787 then in development, such as advanced composites, would be hard to scale up to the high-volume production a 737 replacement would require.

Boeing had once again become wary of new airplanes because of its experience with the 787, which had gone massively over budget and fallen behind schedule. The new, finance-focused Boeing had been reluctant enough to approve the 787's development, and it was now even more reluctant.

But by 2010, with new engines like the Pratt and Whitney GTF and the CFM LEAP on the horizon, Boeing was leaning heavily toward a clean-sheet 737 replacement.

Boeing's hand ended up being forced by Airbus.

In 2011, Airbus began work on a re-engined A320 with significantly improved performance, called the A320neo (for "new engine option"), and used it to partially lure away a major Boeing customer, American Airlines (which split a large order between Boeing and Airbus). Airbus believed Boeing would feel compelled to respond with a re-engined 737 of its own rather than lose more customers while it developed a clean-sheet replacement. Customers, for their part, had lost confidence that Boeing could deliver a new airplane on time after the 787 debacle, and also preferred that Boeing launch a re-engine with a better chance of arriving on schedule. A re-engine would have almost all the benefits of an all-new airplane (~15–16% fuel savings versus 20% for a typical clean-sheet design), would cost perhaps 10–20% as much to develop, and would avoid the costs of airlines having to retrain pilots, as well as things like having to figure out how to produce composite parts in high volumes.

The rest, of course, is history.

Instead of a new airplane, Boeing developed another revision of the 737, the 737 MAX. Fitting even larger engines on the airplane while keeping it similar enough to fall under the FAA's derivative rules required moving them well forward and tilting them slightly upward, which slightly changed the airplane's handling characteristics. To keep its behavior similar to earlier 737s, Boeing created a piece of software, MCAS, to try to emulate the behavior of the previous airplanes. The MCAS software, and its interactions with various sensors, ultimately caused two fatal crashes of 737 MAX flights.

Conclusion: I sometimes think about how the boundary of technological possibility is defined not only by our mastery of the universe, but by the limits of the economy and the organizations that operate within it. If products are complex enough, and the demand is for quantities so small that there is only a limited business case for them, we won't get them, even if they are physically possible to build.

Nuclear submarines seem to be close to this limit: enormously complex weapons that only a handful of organizations on the planet are capable of building. Jet airliners seem to be heading rapidly toward this outer limit, if they aren't there already.

The cost and level of technology required, along with the tremendous risk of developing them and the small number of sales over which the costs can be recovered, have already whittled the number of suppliers down to essentially two (though perhaps China's COMAC will eventually add a third player), and there is no evidence that it is getting any easier.

podman-compose and systemd

Posted by Jens Kuehnel on February 17, 2024 09:45 PM

I’m using podman, and especially podman-compose, more and more. podman-compose is not part of RHEL, but it is available in EPEL and it is in Fedora. Of course I run it as a non-root user. It really works great, but creating systemd unit files for podman-compose by hand is ugly. I had it running like that for about a year, but I wanted to look for something better. This blog post talks about Fedora (tested with 39), RHEL8 and RHEL9. All of them have some smaller problems, but sometimes different ones.

I have wanted to try Quadlet for podman for over a year. This week I had a closer look and found that it is more complicated than I thought. I really like the simple one-file solution of a compose file. I found podlet to migrate compose files to Quadlet. (Use the podlet musl build if you have problems with the glibc of the GNU version.)

But in the end I really wanted to keep using the compose files that are provided by most of the tools I use, and I had only very small problems with podman-compose, all of them easily fixable. In the end I decided to use podman-compose systemd. There is not a lot of documentation, but I really like it.

I had quite a lot of problems, but I will show you here how to fix them and how my setup is working now.

Setup as root

First things first: I always run it as non-root, of course. If you do too, please run a “restorecon -R” on the home directory of the user that runs the containers. audit2allow and the logs will not show the problem (you have to disable the dontaudit rules to see it, I fear), but it will interfere with the startup of your containers.
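A minimal sketch of those two steps (the username is a placeholder, and the semodule calls are my assumption for making the normally hidden denials visible):

restorecon -R /home/composeuser   # relabel the compose user's home directory
semodule -DB                      # temporarily disable dontaudit rules so AVC denials show up in the logs
# ... reproduce the problem and check the denials, then re-enable with:
semodule -B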

You want to make sure the container user’s services can run even when the user is not logged in, so you have to enable lingering with:

loginctl enable-linger USERNAME

I enabled cgroup v2 on my RHEL8 machine, and there is a bug that you have to work around. The problem can be fixed in different ways, but I chose to change the file /etc/containers/containers.conf. Of course, this is not needed on RHEL9 or Fedora.

#/etc/containers/containers.conf
[containers]
pids_limit=0

To use podman-compose systemd you need a systemd template unit, and I chose to set it up in /etc because I have multiple users running multiple applications. (If the filename in the output of systemctl status on Fedora/RHEL9 looks strange to you: there is a link /etc/xdg/systemd/user -> ../../systemd/user.)

You can run the command podman-compose systemd -a create-unit as root, or you can run it as a normal user and paste the output into /etc/systemd/user/podman-compose@.service. But on all platforms (podman-compose runs as version 1.0.6 on all of them), the template has an error that prevents the containers from starting up successfully: you have to add “--in-pod pod_%i” to the up command (thanks to RalfSchwiete). I also added the ExecStopPost line. Here is my complete template:

# /etc/systemd/user/podman-compose@.service
[Unit]
Description=%i rootless pod (podman-compose)

[Service]
Type=simple
EnvironmentFile=%h/.config/containers/compose/projects/%i.env
ExecStartPre=-/usr/bin/podman-compose --in-pod pod_%i up --no-start
ExecStartPre=/usr/bin/podman pod start pod_%i
ExecStart=/usr/bin/podman-compose wait
ExecStop=/usr/bin/podman pod stop pod_%i
ExecStopPost=/usr/bin/podman pod rm pod_%i

[Install]
WantedBy=default.target

As User

With the preparation as root finished, the setup as non-root is quite simple. Almost. 😉 First stop the containers with “podman-compose down”. Then go to the directory with the podman-compose.yml (or docker-compose.yml if you use the old name) file and run “podman-compose systemd”. Be careful, as this command starts the containers again. I always stop the containers again with podman-compose down and start them up again with “systemctl --user enable --now 'podman-compose@COMPOSE'”. Otherwise you cannot be sure the systemctl command is actually working.
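The same sequence condensed into commands (COMPOSE stands for your compose project name; run everything as the non-root user in the directory containing the compose file):

podman-compose down                                       # stop anything started manually
podman-compose systemd                                    # register the project (careful: this starts it again)
podman-compose down                                       # stop once more, so systemd does the next start
systemctl --user enable --now 'podman-compose@COMPOSE'    # start and enable via the template unit
systemctl --user status 'podman-compose@COMPOSE'          # verify it really came up through systemd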

This did not work for me on Fedora and RHEL9, though. There I always got the error message “Failed to connect to bus: No medium found”. The solution was not to use “su - USERNAME” but instead:

machinectl shell USERNAME@ 

With su, the DBUS_SESSION_BUS_ADDRESS is missing on Fedora and RHEL9. This is a known issue, but Red Hat states that “Using su or su - is not a currently supported mechanism of rootless podman.” I’m not sure whether machinectl is supported or not, but I can tell you it works. If you have never heard of machinectl before, or didn’t know that machinectl has a shell option, you are not alone. 🙂 The official way is to ssh into the machine as USERNAME. (I like my way better. :-))

Running, but where are the logs?

If it is working, the podman-compose systemd command ends with output like this:

you can use systemd commands like enable, start, stop, status, cat
all without sudo like this:

            systemctl --user enable --now 'podman-compose@COMPOSE'
            systemctl --user status 'podman-compose@COMPOSE'
            journalctl --user -xeu 'podman-compose@COMPOSE'

and for that to work outside a session
you might need to run the following command once

            sudo loginctl enable-linger 'USERNAME'

you can use podman commands like:

            podman pod ps
            podman pod stats 'pod_COMPOSE'
            podman pod logs --tail=10 -f 'pod_COMPOSE'

systemctl --user status 'podman-compose@COMPOSE' worked fine on RHEL8 and showed the output of the command, but on Fedora and RHEL9 it did not show anything. On all versions, the command journalctl --user -xeu 'podman-compose@COMPOSE' never showed any output.

To fix this, your non-root user has to become a member of the systemd-journald group. But even then you have to use the right command on all platforms: not the one from the output above, but this instead:

journalctl  -xe --user-unit 'podman-compose@COMPOSE'

As you can see, podman-compose systemd is quite nice, but there are a lot of stumbling blocks. Once you know how to avoid them, it works quite well.

Some musings on matrix

Posted by Kevin Fenzi on February 17, 2024 06:21 PM

The Fedora Project has moved pretty heavily toward Matrix over the last while, and I thought I would share some thoughts on it: good, bad, technical and social.

The technical:

  • I wish I had known more about how rooms work. Here’s my current understanding:
    • When a room is created, it has an ‘internal roomid’. This is a ! (annoyingly for the shell) followed by a bunch of letters, a :, and the homeserver of the user who created it. It will keep this roomid forever, even if it no longer has anything at all to do with the homeserver of the user who created it. It’s also just an identifier: it could say !asdjahsdakjshdasd:example.org, but the room isn’t ‘on’ example.org and could have 0 example.org users in it.
    • Rooms also have 0 or more local addresses. If there are 0, people can still join by the roomid (if the room isn’t invite only), but that’s pretty unfriendly. Local addresses look like #alias:homeserver. Users can only add local addresses for their own homeserver, so if you were using a matrix.org account, you could only add #something:matrix.org addresses to a room, not addresses on any other server. Local addresses can be used by people on your same homeserver to find rooms.
    • Rooms also have 0 or more published addresses. If 1 or more are set, one of them is the ‘main published address’. These can only be set by room admins and optionally published in the admin’s homeserver directory. Published addresses can only be chosen from the list of existing local addresses; i.e., you have to add a local address, then you can make it a published address or the main published address and choose whether it appears in your homeserver directory or not. If you do publish the address to your directory, it allows users to search your homeserver and find the room.
    • Rooms have names. Names can be set by admins/moderators and are the ‘human friendly’ name of the room. They can be changed and nothing about the roomid or addresses changes at all. Likewise topic, etc.
    • Rooms are federated to all the homeservers that have users in the room. That means if there are only people from one homeserver in the room, it’s not actually federated/synced anywhere but that homeserver. If someone joins from another server, that server gets the federated data and starts syncing. This can result in a weird case: if someone makes a room, publishes its address to the homeserver directory, other people join, and then the room creator (and all others from that homeserver) leave… the room is no longer actually synced on the server its address is published on (resulting in not being able to join it easily by address).
    • Rooms work on events published to them. If you create a room and then change the name, the ‘name changed’ event is in that room’s timeline. If you look at the events from before that point, you can see the state at that time, with the old name, etc.
    • Rooms have ‘versions’. Basically what version of the matrix spec the room knows about. In order to move to a newer version you have to create a new room.
    • Rooms can be a ‘space’. This is an organizational tool to show a tree of rooms. We have fedora.im users join the fedoraproject.org ‘space’ when they first log in. This allows you to see the collection of rooms and join some default ones. They really are just rooms though, with a slightly different config. Joining a space room joins you to the space.
  • The admin api is really handy, along with synadm ( https://github.com/JOJ0/synadm ). You can gather all kinds of interesting info, make changes, etc.
  • When you ‘tombstone’ a room (that is, you put an event there that says ‘hey, this room is no longer used, go to this new room’), everyone doesn’t magically go to the new room. They have to click on the thing, and in some clients they just stay in the old room too, and if it happened a long while back and people left a bunch, depending on your client you may not even see the ‘go to new room’ button. ;( For this reason, I’ve taken to renaming rooms that are old to make that more apparent.
  • There’s a bit of confusion about how the Fedora Project has set up its servers, but it all hopefully makes sense: We have 2 managed servers (from EMS). One of them is the ‘fedora.im’ homeserver and one is the ‘fedoraproject.org’ homeserver. All users get accounts on the fedora.im homeserver. This allows them to use Matrix and make rooms and do all the things that they might need to do. Having fedoraproject.org (with only a small number of admin users) allows us to control that homeserver. We can use it to make rooms ‘official’ (or at least more so) and publish them in the fedoraproject.org space. Since you have to be logged in from a specific homeserver before you can add local addresses on it, this nicely restricts ‘official’ rooms/addresses. It also means those rooms will be federated/synced between at least fedoraproject.org and fedora.im (but it also means we need to make sure to have at least one fedoraproject.org user in those rooms for that to happen).

The good:

  • When I used to reboot my main server (which runs my IRC bouncer), I would just lose any messages that happened while the machine was down. With Matrix, my server just pulls those from federated servers. No loss!
  • In general things work fine, people are able to communicate, meetings work fine with the new meeting bot, etc. I do think the lower barrier to entry (not having to run a bouncer, etc.) has helped bring in some new folks that were not around on IRC. Of course there are some folks still just on IRC.
  • Being able to edit messages is kind of nice, but it can be confusing. Most clients assume, when you press the up arrow, that you want to edit your last line instead of repeating it. This is not great for bots, or if you wanted to actually say the same thing again with slightly different stuff added. I did find out that nheko lets you do control-p/control-n to get the next/previous lines to resend (and up arrow does edit).

The bad:

  • Moderation tools are… poor. You kind of have to depend on sharing lists of spamming users to try and help others block them, but there’s no real flood control or the like. I’m hoping tools will mature here, but it’s really not great.
  • Clients are still under a lot of development. Many support only a subset of the things available. Many seem to be falling into the ‘hey, this is like a group text with 3 of your buddies’ pattern, which may be true sometimes, but the vast majority of my use is talking to communities where there can be 5, 10, or more people talking. Little message bubbles don’t really cut it here; I need a lot of context I can see when answering people. I’m hopeful this will improve over time.
  • I get that everything is a room, but it’s a bit weird for direct messages. Someone sends you a message, it makes a room, they join it, then it invites you. But if you aren’t around and the person decides they don’t care anymore and leaves the room, you can’t join, and you have to just reject the invite and never know what they were trying to send you.
  • Threading is a nice idea, but it doesn’t seem well implemented on the client side. In Element you have to click on a thread and it’s easy to miss. In Nheko, you click on a thread thingie, but then when you are ‘in’ a thread you only see that, not activity in the main room, which is sometimes confusing.
  • Notifications are a bit annoying. They are actually set on the server end and shared between clients. Sometimes this is not at all what I (or others) want. For example, I get 10 notifications on my phone, I read them and see there are some things I need to do when I get back to my computer. So I get back later and… how can I find those things again? I have to remember them or hunt around; all the notifications are gone. I really, really would love a ‘bookmark this event’ feature so I could go back later, go through those, and answer/address them. Apparently the beeper client has something like this.

Anyhow, that’s probably too much for now. See you all on matrix…

Swimming positions improvements

Posted by Adam Young on February 16, 2024 06:58 PM

I have been getting in the pool for the past couple of months, with a goal of completing a triathlon this summer. Today I got some pointers on things I need to improve in my freestyle technique.

Kick

My kick needs some serious work. I am pulling almost exclusively with my arms. As Jimmy (the lifeguard and a long time swim coach) said, “You’re killing yourself.”

He had me do a drill with just a kickboard. “If your thighs are not hurting you are doing it wrong.” I focused on barely breaking the surface with my heels, pointing my toes, and keeping my legs relatively straight…only a slight bend.

Next he had me do a lap with small flippers. “You shouldn’t feel like you are fighting them.” They force you to point your toes. It felt almost too easy. We ran out of time for me to try integrating it into a regular stroke.

For weight lifting, he recommended squats.

For a sprint he recommended 3 kicks per stroke (6 per pair). For longer courses, two kicks per stroke. I think I will shoot for 2 per stroke, as I am going for a 1/2 mile total.

Breathing

Improving my kick will improve my whole body position, including my breathing. It seems I am pulling my head too far out of the water, mainly because my legs are dropping. Although the opposite is true, too: pulling my head too far out of the water is causing my legs to drop. The two should be fixed together.

Arm Entry

One other swimmer at the pool that I asked for advice told me to “lead with my elbows” and then to think about entering the water “like a knife through butter”. Jimmy added that I should be reaching “long and lean.” Like a fast sailboat.

After that, the stroke should go out, and then finish in an S.

I think I need to glide more during the initial entry of the arm into the water.

Jimmy recommended a drill, either using a kickboard or a ring, and holding that out in front, and passing it from hand to hand.

Head position

I should be looking down and to the front, while the top of my head breaks the surface.

Contribute at the Fedora Linux Test Week for GNOME 46

Posted by Fedora Magazine on February 16, 2024 05:15 PM

The Desktop/Workstation team is working on final integration for GNOME 46. This version was just recently released and will arrive soon in Fedora Linux. As a result, the Fedora Desktop and QA teams are organizing a test week from Monday, February 19, 2024 to Monday, February 26, 2024. The wiki page in this article contains links to the test images you’ll need to participate. Please continue reading for details.

GNOME 46 has landed and will be part of the changes for Fedora Linux 40. Since GNOME is the default desktop environment for Fedora Workstation, and thus for many Fedora users, this interface and environment merit a lot of testing. The Workstation Working Group and Fedora Quality team have decided to split the test week into two parts:

Monday 19 February through Thursday 22 February, we will be testing GNOME Desktop and Core Apps. You can find the test day page here.

Thursday 22 February through Monday 26 February, the focus will be to test GNOME Apps in general. These will be shipped by default. The test day page is here.

How does a test week work?

A test week is an event where anyone can help ensure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files.
  • Read and follow directions step by step.

Happy testing, and we hope to see you on one of the test days.