Fedora People

Contribute to Rawhide Test Days – DNF 5.2

Posted by Fedora Magazine on June 03, 2024 05:06 PM

Fedora Rawhide test days are events where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora Linux before, this is a perfect way to get started.

For some time, we have been trying to raise the quality of Fedora by testing things well ahead of time. The Fedora Changes process lets people submit change proposals well before the release process starts, and many developers do their best to set a timeline for the new changes to land. We in the Quality team realized we should also run test days for these changes, which are crucial for us and will likely help us identify bugs very early. This ensures a smoother and cleaner release process and also helps us stay on track with on-time releases.

Previously, we started with testing DNF 5, which is a change proposed for Fedora Linux 41. When the brand new dnf5 package landed in Rawhide, we organized a test week and got some initial feedback on it before it became the default. This time, we will test DNF 5.2 against its basic acceptance criteria to iron out any rough edges.

As part of DNF 5.2, we will be testing the system upgrade from Fedora 40 to Fedora 41. In older versions of Fedora, the upgrade relied on a plugin; this functionality is now built in by default, so this is a great time to test it out. Needless to say, we will still be testing the general DNF 5 functionality.
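For reference, here is what a typical upgrade test run looks like using the long-standing system-upgrade workflow. Treat this as a sketch: with DNF 5 the plugin functionality is built in, so the exact command name may differ slightly, and the test week page has the authoritative instructions.

$ sudo dnf upgrade --refresh
$ sudo dnf system-upgrade download --releasever=41
$ sudo dnf system-upgrade reboot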

The test week is Wed, June 5th through Wed, June 12th. The test week page is available here.

Happy testing, and we hope to see you on one of the test days.

Fedora 40 Wallpaper Talk!

Posted by Madeline Peck on June 03, 2024 02:39 PM

The Fedora wallpaper is always on such a journey 😄. The last blog post I wrote specifically dedicated to the Fedora wallpaper is linked here if you’re interested in reading and getting a sense of what goes into it.

As I write this, Fedora 40 is live, and since this is the big four-zero, the other design team members and I thought we should do something special. Instead of being inspired by a STEM figure from history with a last name starting with O, we chose the word “Open” (which conveniently starts with O as well!).

We always create a mind map to brainstorm different ways of interpreting the theme we could go with.

1. Open Outdoors

Slightly obvious, pairing the open outdoors with the word “open”. But in the same way that open-source software is developed in a collaborative, public manner, nature works in a connected and collaborative way to create something bigger and better.

2. Trees

Trees have roots and branches that spread out and interconnect. Trees continue to grow, change, and distribute seeds, similarly to open-source code. Obviously, open source doesn’t distribute literal seeds, but seeds of knowledge are thrown into the community!

3. Open Windows to Nature

A window from the inside looking out shows that we’re always looking toward something with an open mind. Inspired a bit by the Lo-Fi girl who sits next to a window, it would make a very appealing wallpaper.

4. Open Path

Again, it’s a bit on the nose with “Open” but we wouldn’t want to illustrate a closed path haha, now would we? The possibilities are endless on a path though. In the same way, working together in an open-source way offers endless opportunities that closed-source software might not.


After we made the mind map I felt inspired to create the image below in Krita. It started as just playing around with different textured brushes and seeing what I was immediately drawn to. I then drew over what I had on a separate layer with a proper guideline.

An open path or river, up to the viewer’s interpretation, through the trees’ overlapping, connecting branches.

Fedora 40 Day Wallpaper

Fedora 40 Night Wallpaper

This was intended to be just the beta wallpaper. As the creator of the piece, I didn’t think it looked finished, and there were details I wanted to add to make it perfect. There’s a phrase I try to live by in situations like this, though: “Make it finished, not perfect.” And this was a finished wallpaper!

The Fedora wallpaper is not a project that is entirely on my shoulders though. It’s always a collaboration! As the deadline quickly approached we luckily had a few contributors who had some options for F40. Below are some of the first ideas and drafts from Yotam Guttman.

wip 1

The orange and brown desert path version is the first draft of how we ended up with the images below! Yotam had so many different ideas and did an amazing job taking constructive criticism from the team and taking the first version to the last.

wip 2

wip 3

In the Fedora 40 ticket, you can see all the conversations and different versions.

The amazing Yotam Guttman created the images below in Inkscape as an option for F40!

Fedora 41 Day Wallpaper

Fedora 41 Night Wallpaper

The Design Team and I absolutely loved Yotam’s work and were set on using it for F40, but for various reasons (deadlines creeping up on us), it was easier to use the beta wallpaper as the final wallpaper. We have used the beta as the final wallpaper before, and it isn’t a big deal. But we loved those art pieces above and really wanted to use them! So we are going to use them for the Fedora 41 wallpaper!

What does this mean?

Well! The great thing is that we are now ahead of schedule for F41!

F41 would have been inspired by something starting with the letter P, and this piece of work has a beautiful P as in Path! We plan on using the finished versions designed by Yotam (you can see any small updates and documentation on the F41 ticket here) and are ready to start working on the F42 wallpaper!

This way we have more flexibility for edits, and more time between finishing the wallpaper and handing it over for the Final Freeze deadlines. Going forward, the wallpapers should all stay ahead of schedule thanks to this head start!

If you’re interested in contributing to the Fedora 42 wallpaper, find out more at this ticket. Anyone is welcome and encouraged to engage and participate!

Why are vulnerabilities out of control in 2024?

Posted by Josh Bressers on June 03, 2024 02:00 PM

If you follow the vulnerability world, 2024 is starting to feel like we’ve become trapped in the mirror universe. NVD collapsed, the Linux kernel is generating a huge number of CVE IDs, CISA is maybe enriching the CVE data, and the growth rate of CVE is higher than it’s ever been. It feels like we’re careening off a cliff in the clown car where half the people are trapped inside trying to get out, and the other half are laughing at the clown honking its nose.

I want to start out by saying all of this is not an accident. A lot of gears have been turning for years, or even decades; we’re seeing the result of trends finally coming together. It was only a matter of time until this happened. Let’s look at a few of those trends.

The size of CVE

The first and most important thing to understand is the sheer size of CVE today. If we graph the CVE IDs since January 1 of their respective years to the time of this writing, we can see the current rate of growth.


2024 is more than double the IDs we had at this point in 2021. It’s pretty clear from the graph that the rate of growth is expanding. 2024 will be double 2022 before the year ends. The “let’s just all go back to normal” Very Serious People should probably take note. Normal already careened off the cliff right in front of the clown car; it’s gone forever.

How many vulnerability budgets have doubled since 2021? Probably none. We were not prepared for this rate of growth. The best solution we currently have is “thoughts and prayers”.

Even if we wanted to double the size of our vulnerability teams, there aren’t enough people to hire. This isn’t a space that has a huge number of people, much less a huge number of people looking for work. And really, does anyone believe more people could solve this problem?

Before we move on, there’s an aspect of this growth that rarely gets mentioned. If you search GitHub issues and pull requests for some common security terms, such as “buffer overflow” or “cross site scripting”, you get hundreds of thousands of hits just for those two terms. I’m sure this is an unimportant detail we should just ignore while we try to find where normal crashed in the ravine.
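If you want a rough sense of the scale yourself, the GitHub search API returns a total count for a query. This is just a sketch: the numbers move constantly, unauthenticated search requests are heavily rate limited, and jq is only used here to pull out the count.

$ curl -s 'https://api.github.com/search/issues?q=%22buffer+overflow%22' | jq .total_count
$ curl -s 'https://api.github.com/search/issues?q=%22cross+site+scripting%22' | jq .total_count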

The Linux kernel

We just mentioned overall growth, but the Linux kernel is a part of that growth. The kernel is adding A LOT of IDs. Enough to show up on a graph.


The green in the graph is Linux kernel CVE IDs; the blue is all other IDs. It’s a huge number. The kernel is pretty huge though, so these numbers shouldn’t be unexpected.

There are many very serious people who don’t think most of these kernel issues should have CVE IDs. They claim the bugs aren’t bad enough to warrant a CVE, and more verification should be done on the bugs, and they should only be filing things attackers could use to steal bitcoins. There are many complaints about this volume of IDs. The kernel is doing something new and different, so it must be wrong.

Greg K-H explained on the Open Source Security Podcast (which I co-host) that the kernel is used on too many devices to easily decide what is or isn’t a vulnerability. It’s on everything from phones, to space ships, to milking machines. There aren’t many projects that can claim such a wide reach.

Even if we ignore these supposedly low quality kernel CVE IDs, there are also a lot of non-kernel CVE IDs that could be considered low quality. What this means is vulnerabilities that could be considered plain old bugs depending on who does the analysis. If we want to raise the bar for what a vulnerability is, that won’t just affect the kernel; it would affect a number of other CVE IDs too. There are a lot of bugs that are right on the line of what the definition of a vulnerability is. It’s sadly about as well defined as what a sandwich is.

This isn’t really a problem though; we’ll explain a bit more about data quality and how we use it below.

The demise of NVD

One of the more exciting events this year was in February, when NVD stopped enriching vulnerabilities. We can see from this graph that it dropped off pretty abruptly. This dropoff was unexpected and didn’t come with any sort of explanation for many weeks.


Before February, NVD would add data to published CVE IDs: details like the products and versions affected, the vulnerability severity, and the type of severity. The data wasn’t the best, but, sort of like that falling-apart car we had as a kid, it was ours and it got us where we wanted to go, usually.

Since this sudden change in enriching vulnerabilities, there has been quite a bit of confusion about what happens moving forward. For example, the FedRAMP standard specifically names NVD as the source for vulnerability severity. What happens if there is no NVD severity anymore? Nobody seems to know, so we’ll just wait for everything to go back to normal.

CISA has also started to add some enrichment data in a project they have named Vulnrichment. The Vulnrichment project has people at CISA adding vulnerability details similar to how NVD worked. The devil is in the details though. Many words could be written about what CISA is and isn’t doing with Vulnrichment. We’ll wait and see how the project looks in a few months; it’s still changing pretty rapidly. I’m sure they’re talking with the NVD folks about all this … right … RIGHT?

NVD has made several announcements about the current and future status of the project. We’re about 6 pinky swears deep now that everything will go back to normal soon. I’m sure this time it’s real. The thing is, the truth we need to understand is that NVD can’t go back to “normal”. Normal fell off the cliff, remember? That CVE graph shows the volume of IDs is growing at an accelerated rate. Unless NVD gets a massive budget increase, every year, forever, it’s silly to even pretend.

The unsustainable future

If we look at all the things happening with vulnerabilities, it doesn’t spark joy. None of the graphs are pointing in directions we want them to be. In theory, if we had more effective security we would see fewer vulnerabilities every year, not more. But there’s another trend that’s rarely discussed by security people: the mind-boggling growth of open source. I gave a talk at Cyphercon earlier this year about the size of open source. It’s gigantic, bigger than you can imagine. Here’s a graph of releases of open source packages.


That’s more than 100 million releases in the last 15 years. To put this in perspective, there have been 250 thousand CVE IDs ever created, EVER. Any security researcher could look at any piece of software and find at least one vulnerability. I’ll let you think about that one for a while.

Given the CVE graph and now this open source graph it’s pretty clear the trend for the number of … everything is only going to go up for the next few years. If we can’t handle the number of vulnerabilities we have today, how can we possibly deal with double the number in a few years? Oh right, normal, we should go back to normal. Normal will solve our problems.

There are a few things we do today that are making these problems worse. I don’t know exactly how to fix them, but pointing out the problems can help sometimes.

Stop the zero CVE madness

There’s a prevailing attitude in the industry to get the number of CVEs in our environments to zero. This creates a perverse incentive where the goal is a number, not better security. We should try to find ways to discover which vulnerabilities matter and deal with those, not do whatever it takes to make a number on a spreadsheet equal zero.

The size of the numbers in the above graphs means we have to transition out of thinking about vulnerabilities individually and start looking at the data in aggregate. When you have this volume of data there’s no longer a zero, only graphs going up and down. Security and data science need to collide.

Lack of cooperation

There’s virtually no cooperation when it comes to vulnerability data today. If you’re an analyst working on any of this, you’re probably working alone in a dark room filled with old pizza boxes. You might know some fellow vulnerability nerds you can talk to, over Discord of course, but mostly, it’s a lonely job. There is a ton of duplicate work happening and we keep reinventing the same square wheels. There are a lot of people working on this data, just not a lot of people working together on this data.

The data quality is terrible

The publicly available vulnerability data is terrible. If you work with this data, this needs no explanation. Until a year ago most of this data was prose text. Think poetry, but very bad poetry, Vogon poetry. It’s somewhat more machine readable as of early 2024, but it often contains errors that are difficult or impossible to get fixed. Entire companies exist just to try to clean up this data and then sell it. They are basically selling facts; that’s how bad the data is. This data is like sportsball scores: it should exist freely as the minimum, and then you build a business around adding more data or insightful commentary.

And one last thing to think about around data quality. If we go back to the complaint that the Linux kernel has too many low quality vulnerabilities that are just bugs, we should also keep in mind that there are hundreds of thousands, maybe even millions, of untriaged vulnerabilities in open source projects (remember those GitHub searches). While some will be the low-quality, probably-just-a-bug type, some are going to be very critical. Even if it’s 0.5%, that’s a scary number. We’re ignoring these today.

Can we fix it?

We probably can fix this if we want to; it’s pretty important, but so far we’ve successfully managed to ignore it. It’s not going to be easy. By definition, the people who got us here can’t fix this problem. They would have long ago if they could. Many of the people involved in the vulnerability space, CVE, NVD – they have been doing this work for decades. A bunch don’t think there’s anything wrong. If you talk to any vulnerability analyst, developer, or operations person actually working with CVE IDs, they’re miserable. They get their work done in spite of CVE IDs, not because of them.

Here’s the thing though: the people suffering under the boot of vulnerability management have never really had a place to discuss, complain, and explore. This is a very closed-off world. It feels lonely and terrible when we’re all working in our dark little rooms instead of working together. There’s a lot that could be said around this and there have been some attempts. Much of the attention to date has been focused on the people creating vulnerability data, not the people consuming it and what they need. If such a group manages to emerge, I’ll be sure to spread the word far and wide.

Or we could keep looking for normal. I’m sure it’s around here somewhere.

Week 22 update

Posted by Ankur Sinha "FranciscoD" on June 03, 2024 12:17 PM

Last Monday was the late spring bank holiday, so it was a four day week.

Next Open NeuroFedora meeting: 03 June 1300 UTC

Posted by The NeuroFedora Blog on June 03, 2024 10:10 AM
Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 03 June at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance). Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date -d 'Monday, June 03, 2024 13:00 UTC'
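If you want the output for a particular timezone rather than your system default, you can set TZ just for that command (the zone name below is only an example):

$ TZ='America/New_York' date -d 'Monday, June 03, 2024 13:00 UTC'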

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

Libadwaita: Splitting GTK and Design Language

Posted by Hari Rana (TheEvilSkeleton) on June 03, 2024 12:00 AM

Introduction

Recently, the Linux Mint Blog published Monthly News – April 2024, which goes into detail about wanting to fork and maintain older GNOME apps in collaboration with other GTK-based desktop environments.

Despite the good intentions of the author, Clem, many readers interpreted this as an attack against GNOME. Specifically: GTK, libadwaita, the relationship between them, and their relevance to any desktop environment or desktop operating system. Unfortunately, many of these readers seem to have a lot of difficulty understanding what GTK is trying to be, and how libadwaita helps.

In this article, we’ll look at the history of why and how libadwaita was born, the differences between GTK 4 and libadwaita in terms of scope of support, their relevance to each desktop environment and desktop operating system, and the state of GTK 4 today.

What Is GTK?

First of all, what is GTK? GTK is a cross-platform widget toolkit from the GNOME Project, which means it provides interactive elements that developers can use to build their apps.

The latest major release of GTK is 4, which brings performance improvements over GTK 3. GTK 4 also removes several widgets that were part of the GNOME design language, which became a controversy. In the context of application design, a design language is the visual characteristics that are communicated to the user. Fonts, colors, shapes, forms, layouts, writing styles, spacing, etc. are all elements of the design language.(Source)

Unnecessary Unification of the Toolkit and Design Language

In general, cross-platform toolkits tend to provide general-purpose/standard widgets, typically with a non-opinionated styling, i.e. widgets and design patterns that are used consistently across different operating systems (OSes) and desktop environments.

However, GTK had the unique case of bundling GNOME’s design language into GTK, which made it far from generic and led to problems in different areas, mainly philosophical and technical ones.

Clash of Philosophies

When we look at apps made for the GNOME desktop (will be referred to as “GNOME apps”) as opposed to non-GNOME apps, we notice that they’re distinctive: GNOME apps tend to have hamburger buttons, larger buttons, larger padding and margins, etc., while most non-GNOME apps tend to be more compact, use menu bars, standard title bars, and many other design metaphors that may not be used in GNOME apps.

This is because, from a design philosophy standpoint, GNOME’s design patterns tend to go in a different direction than most apps. As a brand and product, GNOME has a design language it adheres to, which is accompanied by the GNOME Human Interface Guidelines (HIG).

As a result, GTK and GNOME’s design language clashed together. Instead of being as general-purpose as possible, GTK as a cross-platform toolkit contained an entire design language intended to be used only by a specific desktop, thus defeating the purpose of a cross-platform toolkit.

For more information on GNOME’s design philosophy, see “What is GNOME’s Philosophy?”.

Inefficient Diversion of Resources

The unnecessary unification of the toolkit and design language also diverted a significant amount of effort and maintenance: instead of focusing solely on the general-purpose widgets that could be used across all desktop OSes and environments, much of the focus was on the widgets that were intended to conform to the GNOME HIG. Many of the general-purpose widgets also included features and functionality that were only relevant to the GNOME desktop, making them less general-purpose.

Thus, the general-purpose widgets were being implemented and improved slowly, and the large codebase also made the GNOME widgets and design language difficult to maintain, change, and adapt. In other words, almost everything was hindered by the lack of independence on both sides.

Libhandy: the Predecessor

Because of the technical bottlenecks caused by the philosophical decisions, libhandy was created in 2017, with the first experimental version released in 2018. As described on the website, libhandy is a collection of “[b]uilding blocks for modern adaptive GNOME applications.” In other words, libhandy provides additional widgets that can be used by GNOME apps, especially those that use GTK 3. For example, Boxes uses libhandy, and many GNOME apps that used to use GTK 3 also used libhandy.

However, some of the problems remained: Since libhandy was relatively new at the time, most GNOME widgets were still part of GTK 3, which continued to suffer from the consequences of merging the toolkit and design language. Furthermore, GTK 4 was released at the end of December 2020 — after libhandy. Since libhandy was created before the initial release of GTK 4, it made little sense to fully address these issues in GTK 3, especially when doing so would have caused major breakages and inconveniences for GTK, libhandy, and app developers. As such, it wasn’t worth the effort.

With these issues in mind, the best course of action was to introduce all these major changes and breakages in GTK 4, use libhandy as an experiment and to gain experience, and properly address these issues in a successor.

Libadwaita: the Successor

Because of all the above problems, libadwaita was created: libhandy’s successor that will accompany GTK 4.

GTK 4 was initially released in December 2020, and libadwaita was released one year later, in December 2021. With the experience gained from libhandy, libadwaita managed to become extensible and easy to maintain.

Libadwaita is a platform library accompanying GTK 4. A platform library is a library used to complement a specific platform. In the case of libadwaita, the platform it targets is the GNOME desktop.

Porting Widgets to Libadwaita

Some GNOME widgets from GTK 3 (or earlier versions of GTK 4) were removed or deprecated in GTK 4 and were reimplemented in / transferred to libadwaita, for example:

These aforementioned widgets only benefited GNOME apps, as they were strictly designed to provide widgets that conformed to the GNOME HIG. Non-GNOME apps usually didn’t use these widgets, so they were practically irrelevant to everyone else.

In addition, libadwaita introduced several widgets as counterparts to GTK 4 to comply with the HIG:

Similarly, these aforementioned GTK 4 (the ones starting with Gtk) widgets are not designed to comply with the GNOME HIG. Since GTK 4 widgets are supposed to be general-purpose, they should not be platform-specific; the HIG no longer has any influence on GTK, only on the development of libadwaita.

Scope of Support

The main difference between GTK 4 and libadwaita is the scope of support, specifically the priorities in terms of the GNOME desktop, and desktop environment and OS support. While the GNOME desktop is the top priority for both GTK and libadwaita, GTK 4 is nowhere near as focused on the GNOME desktop as libadwaita is. GTK 4, while opinionated, still tries to get closer to the traditional desktop metaphor by providing these general-purpose widgets, while libadwaita provides custom widgets to conform to the GNOME HIG.

Since libadwaita is only made for the GNOME desktop, and the GNOME desktop is primarily officially supported on Linux, libadwaita thus primarily supports Linux. In contrast, GTK is officially supported on all major operating systems (Windows, macOS, Linux). However, since GTK 4 is mostly developed by GNOME developers, it works best on Linux and GNOME — hence “opinionated”.

State of GTK 4 Today

Thanks to the removal of GNOME widgets from GTK 4, GTK developers can continue to work on general-purpose widgets, without being influenced or restricted in any way by the GNOME HIG. Developers of cross-platform GTK 3 apps that rely exclusively on general-purpose widgets can be more confident that GTK 4 won’t remove these widgets, and hopefully enjoy the benefits that GTK 4 offers.

At the time of writing, there are several cross-platform apps that have either successfully ported to GTK 4, or are currently in the process of doing so. To name a few: Freeciv gtk4 client, HandBrake, Inkscape, and Transmission. The LibreOffice developers are working on the GTK 4 port, with the gtk4 VCL plugin option enabled. For example, the libreoffice-fresh package from Arch Linux has it enabled.
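If you want to try this yourself on a build that ships the plugin (such as the Arch package mentioned above), LibreOffice selects its VCL plugin through an environment variable. A quick sketch:

$ SAL_USE_VCLPLUGIN=gtk4 libreoffice --writer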

Here are screenshots of the aforementioned apps:

Freeciv gtk4 client in the game view, displaying a title bar, a custom background, a menu bar, a tab view with the Chat tab selected, an entry, and a few buttons.

HandBrake in the main view, displaying a title bar, a menu bar, a horizontal toolbar below it with custom buttons, entries, popover buttons, a tab view with the Summary tab selected, containing a popover button and several checkboxes.

Development version of Inkscape in the main view, displaying a title bar, a menu bar, a horizontal toolbar below, vertical toolbars on the left and right, a canvas grid on the center left, a tab view on the center right with the Display Properties tab selected, and a toolbar at the bottom.

LibreOffice Writer with the experimental gtk4 VCL plugin in workspace view, displaying a title bar, a menu bar, two horizontal toolbars below, a vertical toolbar on the right, a workspace grid in the middle with selected text, and a status bar at the bottom.

Transmission in the main view, displaying a title bar, a menu bar, a horizontal toolbar, a filter bar, an empty field in the center of the view, and a status bar at the bottom.

Alternate Platforms

While libadwaita is the most popular and widely used platform library that accompanies GTK 4, there are several alternatives to libadwaita:

Just like libadwaita, these platform libraries offer custom widgets and styling that differ from GTK and are built for their respective platforms, so it’s important to realize that GTK is meant to be built with a complementary platform library that extends its functionality when targeting a specific platform.

Similarly, Kirigami from KDE accompanies Qt to build Plasma apps. MauiKit from the Maui Project (another KDE project) also accompanies Qt, but targets Nitrux. Libcosmic by System76 accompanies iced to build COSMIC apps.

Conclusion

A cross-platform toolkit should primarily provide general-purpose widgets. Third parties should be able to extend the toolkit as they see fit through a platform library if they want to target a specific platform.

As we’ve seen throughout the philosophical and technical issues with GTK, a lot of effort has gone into moving GNOME widgets from GTK 4 to libadwaita. GTK 4 will continue to provide these general-purpose widgets for apps intended to run on any desktop or OS, while platform libraries such as libadwaita, Granite and libhelium provide styling and custom widgets that respect their respective platforms.

Libadwaita is targeted exclusively at the GNOME ecosystem, courtesy of the GNOME HIG. Apps built with libadwaita are intended to run best on GNOME, while GTK 4 apps that don’t come with a platform library are intended to run everywhere.

Week 22 in Packit

Posted by Weekly status of Packit Team on June 03, 2024 12:00 AM

Week 22 (May 28th – June 3rd)

  • Issue with pushing to CentOS Stream dist-git on GitLab is now resolved. (packit-service#2433)
  • Check for allowed_pr_authors/allowed_committers configuration before running Koji production builds is now more reliable. (packit/packit-service#2435)

Episode 431 – Redirecting HTTP to HTTPS

Posted by Josh Bressers on June 03, 2024 12:00 AM

Josh and Kurt talk about a blog post titled “Your API Shouldn’t Redirect HTTP to HTTPS”. It’s an interesting idea, and probably a good one. There is however a lot of baggage in this space, as you’ll hear in the discussion. There’s no simple solution, but this is certainly something to discuss.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_431_Redirecting_HTTP_to_HTTPS.mp3

Show Notes

Some useful Linux kernel cmdline debug parameters to troubleshoot driver issues

Posted by Javier Martinez Canillas on June 02, 2024 12:07 PM

I have written before on how to troubleshoot drivers’ deferred probe issues in Linux. And while having a /sys/kernel/debug/devices_deferred debugfs entry is quite useful to know the list of devices whose drivers failed to probe due to being deferred [0], there are also some debug command line parameters that can help with the troubleshooting.
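For example, dumping the current list of deferred devices is just a matter of reading that file (debugfs is usually mounted at /sys/kernel/debug and needs root access):

$ sudo cat /sys/kernel/debug/devices_deferred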

But first, an explanation of what these parameters do:

Devices require different resources in order to be operative, for example a display controller could need a set of clocks, power domains and regulators to be enabled.

Ideally, if these resources are managed by Linux, they should be declared in the hardware description that is handed to the kernel (e.g. Device Tree Blobs), but sometimes this isn’t the case. And devices may be working just because the firmware that booted Linux left these required resources enabled.

But of course this has the disadvantage that Linux can’t manage those resources, and can’t, for example, disable a clock or power domain when it is not used.

It is common though to add support for a platform incrementally, so it could be that at the beginning it relies on the setup made by the firmware (e.g. some required clocks left enabled and Linux not knowing about them). But later, support for these clocks could be added and the Common Clock Framework (CCF) will then be aware of them.

But it could be that the Device Tree was not updated to define the dependency between a device and the newly introduced clocks it requires. If that is the case, the clocks may appear to be unused by the system and the CCF will rightfully decide to disable them [1].

If that is the case, this message is printed in the kernel log buffer:

[    5.189930] clk: Disabling unused clocks

This will of course make the device stop working, because one of its required clocks has been gated.

To prevent this, there is a debug clk_ignore_unused command line parameter that can be used to prevent the CCF from gating unused clocks. When this is used, the unused clocks won’t be disabled and the following message will be printed instead:

[    5.200758] clk: Not disabling unused clocks

And it is the same for power domains and regulators: the frameworks will disable unused ones, and the pd_ignore_unused and regulator_ignore_unused command line parameters can be used to prevent this. When using them, the following messages are printed in the kernel log:

[    5.186559] PM: genpd: Not disabling unused power domains
[   35.298779] regulator: Not disabling unused regulators
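On Fedora-style systems, one convenient way to add these parameters for the next boot is grubby. This is only a sketch; editing the bootloader entry by hand or adding the parameters once from the boot menu works just as well:

$ sudo grubby --update-kernel=ALL --args="clk_ignore_unused pd_ignore_unused regulator_ignore_unused"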

The {clk,pd}_ignore_unused parameters have been present in Linux for a long time, but we didn’t have one for regulators until recently.

Before, to prevent an unused regulator from being disabled by the regulator framework, it had to be marked in the Device Tree as always-on by using the regulator-always-on property. But that requires modifying the Device Trees, which is more cumbersome just to troubleshoot driver regressions.

So after talking with Mark Brown (the regulator framework maintainer), I proposed to add a regulator_ignore_unused debug parameter, which landed in Linux kernel v6.8.

I hope these debug parameters can be useful when facing regressions in drivers.

Happy hacking!

[0] Read this older post if you want to learn more about drivers and devices registration, matching and probe deferral.

[1] Brian Masney pointed me out that the list of clocks being disabled can be shown by using the tp_printk trace_event=clk:clk_disable cmdline param as explained in the kernel docs.

Fedora Ops Architect Weekly

Posted by Fedora Community Blog on May 31, 2024 10:58 PM

Welcome to my weekly (kind-of) little report on stuff ‘n’ things happening around Fedora!

Events

The Fedora Linux 40 release party was amazing! This time around we used Matrix and Restream/YouTube and the event went very smoothly – kudos to everyone involved, from our brilliant speakers to our behind-the-scenes organizers. You can watch the talks now on YouTube, conveniently broken down into individual talks 🙂

Fedora Week of Diversity is set for June 17th – 22nd this year. Interviews with community members will be published throughout the week, with the main event happening on June 21st & 22nd, so make sure to mark your calendars!

DevConf.cz is just around the corner, June 13th – 15th, so if you are giving a Fedora-related talk at the event, please add your name to our coordination wiki. You can also join the Matrix room to connect with other folks who will be at the event and meet for a coffee/walk/beer/run – and maybe not in that order, or all together.

Flock to Fedora is set for August 7th – 10th in Rochester, NY, USA. Notifications for talk acceptances should be landing in your inboxes any day now, if not already, so do keep an eye out and check your spam folder(s) just in case they end up there.

Elections

Congratulations to all of our elected candidates for EPEL Steering Committee, Fedora Council, Fedora Mindshare and FESCo! You can read more about our election results from the Elections blog post that landed earlier today.

Fedora Linux 41

We are now fully into development for F41, so here are some important dates for change proposals; other key dates and milestones can be found on the release schedule.

  • June 19th – Changes requiring infrastructure changes
  • June 25th – Changes requiring mass rebuild
  • June 25th – System Wide changes
  • July 16th – Self Contained changes

Announced Changes

Changes awaiting FESCo decision

A full list of the current F41 accepted changes can be found by visiting the Change Set page.

Hot Topics

There are a few this week, but your opinion matters, so please take some time to complete and weigh in on the following:

Help Wanted

The post Fedora Ops Architect Weekly appeared first on Fedora Community Blog.

Fedora Linux 40 election results

Posted by Fedora Community Blog on May 31, 2024 02:23 PM

The Fedora Linux 40 election cycle has concluded. Here are the results for each election. Congratulations to the winning candidates, and thank you to all candidates for running in this election!

Results

Council

Two Council seats were open this election. A total of 525 ballots were cast, meaning a candidate could accumulate up to 684 votes.

# votes   Candidate
461       Aleksandra Fedorova
358       Adam Samalik
313       Sumantro Mukherjee

FESCo

Four FESCo seats were open this election. A total of 1137 ballots were cast, meaning a candidate could accumulate up to 1416 votes.

# votes   Candidate
930       Neal Gompa
858       Stephen Gallagher
824       Fabio Valentini
726       Michel Lind
693       Tom Stellard
635       Jonathan Wright

Mindshare

One Mindshare seat was open this election. A total of 184 ballots were cast, meaning a candidate could accumulate up to 199 votes.

# votes   Candidate
184       Sumantro Mukherjee

EPEL Steering Committee

Four EPEL Steering Committee seats were open this election. A total of 277 ballots were cast, meaning a candidate could accumulate up to 356 votes.

# votes   Candidate
812       Kevin Fenzi
745       Carl George
739       Troy Dawson
568       Jonathan Wright
434       Neil Hanlon
367       Robby Callicotte

Stats

The F40 election cycle set a few new records! One was the inclusion of the EPEL Steering Committee, and the second was the transition of the Council election to once per year.

Thank you to everyone who voted and to our candidates, and congratulations to the newly elected members of Council, FESCo, Mindshare & the EPEL Steering Committee!

The post Fedora Linux 40 election results appeared first on Fedora Community Blog.

Infra and RelEng Update – Week 22, 2024

Posted by Fedora Community Blog on May 31, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 27 May – 31 May, 2024

Infra&Releng Infographics

Infrastructure & Release Engineering

The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Linux (OL).

Updates

If you have any questions or feedback, please respond to this report or contact us on the #redhat-cpe channel on Matrix.

The post Infra and RelEng Update – Week 22, 2024 appeared first on Fedora Community Blog.

IPA-IPA trust progress report

Posted by Alexander Bokovoy on May 31, 2024 09:30 AM

FreeIPA and SSSD teams are working to enable IPA deployments to trust each other. This report outlines the progress we have so far.

Trust across enterprise domains

FreeIPA implements enterprise domain management: systems enrolled into the domain are managed centrally and access resources made available through the domain controllers. These resources include information about users and groups, machines, Kerberos services, and different rules that can bind them. FreeIPA supports a trust to an Active Directory forest by posing as a separate Active Directory forest with a single Active Directory domain (a forest root).

The approach helps to integrate with a majority of enterprise deployments. New deployments might not include Active Directory, though. Instead, they tend to be self-sufficient or integrate with externally managed data sources, often without retaining compatibility with traditional POSIX or Active Directory deployments. For example, cloud deployments might rely on OAuth2 identity providers or federate to social services.

FreeIPA can also integrate with OAuth2-based identity providers by bridging authentication over to them. Users would be native IPA users; IPA policies and rules to access IPA resources would apply to them but authentication and permission to issue a Kerberos ticket to the users would be delegated to an external IdP.

One useful approach is to have a single IPA environment that holds all users and groups, while individual machines are all placed in separate IPA deployments. These deployments would need to trust the environment where the users are defined. In this document we’d call such a deployment an IPA-IPA trust.

IPA-IPA trust would look similar to the existing trust to Active Directory. A core technology is Kerberos: each deployment is a separate Kerberos realm; a trust on Kerberos level means cross-realm ticket granting tickets (TGT) issued by the realms’ KDCs. These cross-realm TGTs allow clients from one realm to request tickets for services located in the other realm.

Requesting Kerberos tickets is a base feature needed to authenticate across the trust. For a proper relationship, the identities of the authenticated objects (Kerberos principals and services) also need to be resolved. In POSIX environments these identities have to be associated with POSIX users to be able to store data on disk or to run processes with individual login sessions. The process requires not only authentication but also the ability to query information from a trusted domain’s domain controller.

In FreeIPA deployments, user and group information from the domain controllers is retrieved by SSSD. When a trust to an Active Directory forest is created, SSSD on IPA domain controllers is able to use a special entry in the trusted domain (the trusted domain object, TDO) to authenticate against the trusted domain’s domain controllers. SSSD knows how the LDAP schema and directory information tree (DIT) on the Active Directory side are organized and can handle user and group object mapping for us.

If SSSD is able to resolve users and groups in the same way as with Active Directory trusts, this means we can reuse existing support for trusted Active Directory users and groups in IPA commands and keep the same user experience when adding HBAC or SUDO rules, managing IPA deployment, and so on. This would also mean we already would have a comprehensive test suite to validate IPA-IPA trust functionality.

Kerberos trust infrastructure

We first focused on making the Kerberos infrastructure work when both realms are represented by separate FreeIPA deployments. For the past decade, Microsoft, the Samba Team, the FreeIPA team, MIT Kerberos, and Heimdal have worked together to ensure there is a solid, secure foundation for interoperability between non-POSIX (Active Directory) and POSIX deployments. In the Active Directory world some crucial POSIX information is not available. When this information is required on the POSIX side, it is either synthesized or mapped onto existing objects by FreeIPA and Samba AD.

The Kerberos protocol originally didn’t include any means to securely tie Kerberos principals to identities. For example, one was able to create a Windows machine named root in Active Directory and then use its machine account to log in to a different machine as the root POSIX user. This attack is not possible anymore if all sides (the Kerberos KDCs of the involved realms and the target machine) use authorization data in Kerberos tickets to exchange identity details of the original Kerberos principal’s operating system object. In this case it means that a Kerberos principal root would have its machine account information associated with the Kerberos ticket, and the target machine would be able to inquire and see that the real Kerberos principal is host/root.some.domain and that this principal’s operating system object type is a machine account.

The extensions also give the ability to communicate group membership details and more information about the account. FreeIPA domain controllers already use these details to check and map properties of trusted Active Directory domains’ objects. For native IPA objects we also generate this information. IPA-IPA trust can rely on these details as well.

This observation allowed us to start with an experiment: create a trusted domain object that represents a trusted IPA domain and see what does not work and needs a fix to make IPA-IPA trust a reality. We originally thought to replicate the Active Directory infrastructure on the IPA domain controller side so that the existing ipa trust-add --type=ad .. command can be used to establish trust.

These changes are part of the tree I maintain at my GitHub WIP tree.

Identity information retrieval

A part of Active Directory infrastructure that we implemented first was Global Catalog. Global Catalog is a separate, read-only, LDAP instance that contains a subset of the information about all Active Directory objects in the forest. Since FreeIPA LDAP tree uses different LDAP schema and structure than Active Directory LDAP tree, SSSD’s Active Directory identity provider cannot use normal FreeIPA LDAP tree as “yet another Active Directory domain’s LDAP tree”. We imagined that adding support for the Global Catalog (which SSSD is able to query) would make it possible to query IPA users and groups as “Active Directory” users and groups.

We soon found out two issues with this approach. It is not possible to force SSSD to use the Global Catalog exclusively. SSSD uses the primary LDAP tree for retrieving users’ information and only switches to the Global Catalog for retrieving group information. Adding support for the Global Catalog would not help: the existing Active Directory domain identity provider in SSSD would not be able to work against a trusted IPA domain anyway. Proper support for trusted IPA domains in SSSD is needed.

Here we stumbled upon our next issue. The SSSD implementation of the IPA domain provider includes the ability to handle all trusts to Active Directory domains that an IPA deployment has. Trusted domains (subdomains, in SSSD speak) can, in theory, come from any trust type. Since IPA has only a trust to Active Directory, the SSSD implementation did not have an infrastructure to handle different types of subdomains. Instead, a single subdomain type is assumed: any subdomain discovered is an Active Directory domain. Even if we present IPA-IPA trust as an Active Directory trust, SSSD would still attempt to use the Active Directory-specific LDAP schema and directory information tree arrangement when making LDAP queries. IPA users and groups from trusted domains aren’t found because the LDAP server would not match those LDAP queries to the IPA LDAP tree and schema.

SSSD needed a complete refactoring of the subdomains’ infrastructure to allow more than one subdomain type to be present. This work is still in progress. Justin Stephenson from the SSSD team tracks his progress in this WIP pull request.

Demo

With the refactoring work currently being done by Justin, it is possible to combine two parts: Global Catalog-based FreeIPA changes from my WIP source tree and SSSD changes from Justin’s WIP source tree.

We have these modifications to FreeIPA and SSSD prebuilt in the Fedora COPR repository abbra/wip-ipa-trust. The packages there work in a specific environment where the two FreeIPA deployments are named ipa1.test and ipa2.test, because we so far have no way to differentiate a trusted IPA deployment from a trusted Active Directory deployment; thus SSSD hardcodes the names of the IPA domains to trust.

A trust between ipa1.test and ipa2.test can be established using the standard ipa trust-add command. This approach allowed us to investigate what is working or not after the trust link is created. Both the FreeIPA and SSSD teams can also work on validating that the specific IPA functionality available to handle trusted Active Directory users and groups also works with IPA-IPA trust.
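As a rough illustration of the current experiment (the admin account name is only an example, and the exact workflow will change as the work matures), establishing and then inspecting the trust from the ipa1.test side looks something like this:

$ ipa trust-add ipa2.test --type=ad --admin admin --password
$ ipa trust-show ipa2.test
$ ipa idrange-find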

I recorded a small demo on Fedora 40:

Demo video: https://www.youtube.com/embed/z6o1tj9XUMQ

Two administrators establish trust between their environments and then allow use of resources across the trust. Since the exact details of how trust is established will change, I omitted that step and show pre-created trust entries along with the ID range details. Only the ‘Active Directory trust with POSIX attributes’ ID range type is supported for now. It should not be a problem, though, because IPA deployments do provide POSIX information.

To allow access to the IPA API, an administrator of the trusting domain needs to create ID overrides for users from the trusted domain. Once the overrides are in place, a user from the trusted domain can issue IPA API calls. In the demo we run the ipa ping call and show that the underlying Kerberos service tickets were requested and issued.
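A sketch of those two steps with a hypothetical user alice: the administrator of the trusting domain (ipa2.test) adds an ID override in the default trust view, and the trusted user then authenticates and calls the API:

$ ipa idoverrideuser-add 'Default Trust View' alice@ipa1.test
$ kinit alice@IPA1.TEST
$ ipa ping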

Future work

Now that we validated the approach, the road splits into two:

  • figure out how to get trust established
  • fix remaining bugs to make ‘day two’ operations possible.

Establishing trust is an interesting problem. Or two. For pure IPA-IPA trust we don’t need to provision the same infrastructure as with Active Directory trust: we don’t need to run Samba on each domain controller to be able to create trusted domain objects and respond to DCE RPC requests because we don’t use DCE RPC in SSSD.

On top of that, FreeIPA has an extensive set of passwordless authentication methods, which require the use of a FAST channel when obtaining an initial Kerberos TGT. When a trust to Active Directory is created, we first authenticate to an AD DC to get a Kerberos TGT for the Active Directory administrator. Windows does not have support for passwordless authentication methods through Kerberos except for PKINIT (smartcards), so in most cases we deal with password authentication. If an IPA administrative account is using a passwordless method to authenticate, we first have to create a FAST channel, or the trusted-to-be domain controller will not even expose the supported pre-authentication methods. Any existing Kerberos ticket could be used to create the FAST channel, but it must be from the trusted domain, and we don’t have the trust yet. In our own domain we can use a machine account (on a particular machine) or Anonymous PKINIT to create the FAST channel. For a trusted-to-be domain we have to trust the CA who issued the PKINIT certificates to their KDCs. And the KDCs might be accessible through an HTTPS proxy (the MS-KKDCP protocol). This means some kind of pre-trust exchange is needed to make sure we can authenticate against their KDCs successfully.

Also, we have no idea what exact pre-authentication methods will be available to the user. Practically, we only care about being able to authenticate to the trusted domain’s IPA API and then use that connection to set things up. For this it would be enough to authenticate to the trusted-to-be domain’s IPA API endpoint, save the received cookies and use them. This method wouldn’t support passwordless authentication either, except for OTP. FIDO2 and external IdP (OAuth2) authentication aren’t yet directly supported in the IPA web UI and IPA API endpoints.

With all those complications, we are looking at enabling OAuth2-based authentication and authorization endpoints in IPA first. We can then use them to request a token from a trusted-to-be domain’s OAuth2 endpoint and use it to establish the trust. The remote domain controller will handle the authentication of its users and will be able to supply all required details: trusted CA chains, ID ranges, scopes for authentication/ID mapping, etc.

And the OAuth2 endpoint will allow us to handle our own passwordless authentication methods for the IPA Web UI and for applications on enrolled systems, like Cockpit, which will be able to delegate to IPA for the user login.

IPA-IPA trust progress report

Posted by Alexander Bokovoy on May 31, 2024 09:30 AM

FreeIPA and SSSD teams are working to enable IPA deployments to trust each other. This report outlines the progress we have so far.

Trust across enterprise domains

FreeIPA implements an enterprise domain management: systems enrolled into domain managed centrally and access resources available through the domain controllers. These resources include information about users and groups, machines, Kerberos services and different rules that can bind them. FreeIPA supports a trust to Active Directory forest by posing as a separate Active Directory forest with a single Active Directory domain (a forest root).

The approach helps to integrate with a majority of enterprise deployments. New deployments might not include Active Directory, though. Instead, they tend to be self-sufficient or integrate with externally managed data sources, often without retaining compatibility with traditional POSIX or Active Directory deployments. For example, cloud deployments might rely on OAuth2 identity providers or federate to social services.

FreeIPA can also integrate with OAuth2-based identity providers by bridging authentication over to them. Users would be native IPA users; IPA policies and rules to access IPA resources would apply to them but authentication and permission to issue a Kerberos ticket to the users would be delegated to an external IdP.

One useful approach is to have a single IPA environment that holds all users and groups while individual machines all placed in separate IPA deployments. These deployments would need to trust the environment where users defined. In this document we’d call such deployment an IPA-IPA trust.

IPA-IPA trust would look similar to the existing trust to Active Directory. A core technology is Kerberos: each deployment is a separate Kerberos realm; a trust on Kerberos level means cross-realm ticket granting tickets (TGT) issued by the realms’ KDCs. These cross-realm TGTs allow clients from one realm to request tickets for services located in the other realm.

Requesting Kerberos tickets is a base feature needed to authenticate across the trust. For proper relationship, identities of the authenticated objects (Kerberos principals and services) also need to be resolved. In POSIX environments these identities have to be associated with POSIX users to be able to store data on disk or to run processes with individual login sessions. The process requires not only authentication but also the ability to query information from a trusted domain’s domain controller.

In FreeIPA deployments, user and group information from the domain controllers is retrieved by SSSD. When a trust to an Active Directory forest is created, SSSD on IPA domain controllers is able to use a special entry in the trusted domain (the trusted domain object, TDO) to authenticate against the trusted domain's domain controllers. SSSD knows how the LDAP schema and directory information tree (DIT) on the Active Directory side are organized and can handle user and group object mapping for us.

If SSSD is able to resolve users and groups in the same way as with Active Directory trusts, we can reuse the existing support for trusted Active Directory users and groups in IPA commands and keep the same user experience when adding HBAC or SUDO rules, managing the IPA deployment, and so on. It would also mean we already have a comprehensive test suite to validate IPA-IPA trust functionality.

Kerberos trust infrastructure

We first focused on making the Kerberos infrastructure work when both realms are represented by separate FreeIPA deployments. For the past decade, Microsoft, the Samba Team, the FreeIPA team, and the MIT Kerberos and Heimdal projects have worked together to ensure there is a solid, secure foundation for interoperability between non-POSIX (Active Directory) and POSIX deployments. In the Active Directory world some crucial POSIX information is not available. When this information is required on the POSIX side, it is either synthesized or mapped onto existing objects by FreeIPA and Samba AD.

The Kerberos protocol originally didn't include any means to securely tie Kerberos principals to identities. For example, one could create a Windows machine named root in Active Directory and then use its machine account to log in to a different machine as the root POSIX user. This attack is no longer possible if all sides (the Kerberos KDCs of the involved realms and the target machine) use authorization data in Kerberos tickets to exchange identity details of the original Kerberos principal's operating system object. In that case the Kerberos principal root would have its machine account information associated with the Kerberos ticket, and the target machine would be able to inquire and see that the real Kerberos principal is host/root.some.domain and that this principal's operating system object type is a machine account.

The extensions also give the ability to communicate group membership details and more information about the account. FreeIPA domain controllers already use these details to check and map properties of trusted Active Directory domains' objects. For native IPA objects we also generate this information. IPA-IPA trust can rely on these details as well.

This observation allowed us to start with an experiment: create a trusted domain object that represents a trusted IPA domain and see what does not work and needs a fix to make IPA-IPA trust a reality. We originally thought to replicate the Active Directory infrastructure on the IPA domain controller side so that the existing ipa trust-add --type=ad .. command can be used to establish trust.

These changes are part of the tree I maintain at my github WIP tree.

Identity information retrieval

The part of the Active Directory infrastructure that we implemented first was the Global Catalog. The Global Catalog is a separate, read-only LDAP instance that contains a subset of the information about all Active Directory objects in the forest. Since the FreeIPA LDAP tree uses a different LDAP schema and structure than the Active Directory LDAP tree, SSSD's Active Directory identity provider cannot use the normal FreeIPA LDAP tree as "yet another Active Directory domain's LDAP tree". We imagined that adding support for the Global Catalog (which SSSD is able to query) would make it possible to query IPA users and groups as "Active Directory" users and groups.

We soon found two issues with this approach. It is not possible to force SSSD to use the Global Catalog exclusively: SSSD uses the primary LDAP tree for retrieving user information and only switches to the Global Catalog for retrieving group information. And adding support for the Global Catalog would not help on its own: the existing Active Directory domain identity provider in SSSD would not be able to work against a trusted IPA domain anyway. Proper support for trusted IPA domains in SSSD is needed.

Here we stumbled upon our next issue. The SSSD implementation of the IPA domain provider includes the ability to handle all trusts to Active Directory domains that an IPA deployment has. Trusted domains (subdomains, in SSSD speak) can, in theory, come from any trust type. Since IPA so far only had trust to Active Directory, the SSSD implementation did not have the infrastructure to handle different types of subdomains. Instead, a single subdomain type is assumed: any subdomain discovered is an Active Directory domain. Even if we present an IPA-IPA trust as an Active Directory trust, SSSD would still attempt to use the Active Directory-specific LDAP schema and directory information tree arrangement when making LDAP queries. IPA users and groups from trusted domains aren't found because the LDAP server would not match such an LDAP query to the IPA LDAP tree and schema.

SSSD needed a complete refactoring of the subdomains’ infrastructure to allow more than one subdomain type to be present. This work is still in progress. Justin Stephenson from the SSSD team tracks his progress in this WIP pull request.

Demo

With the refactoring work currently being done by Justin, it is possible to combine two parts: Global Catalog-based FreeIPA changes from my WIP source tree and SSSD changes from Justin’s WIP source tree.

We have these modifications to FreeIPA and SSSD prebuilt in the Fedora COPR repository abbra/wip-ipa-trust. The packages there work in a specific environment where the two FreeIPA deployments are named ipa1.test and ipa2.test, because we so far have no way to differentiate a trusted IPA deployment from a trusted Active Directory deployment; SSSD therefore hardcodes the names of the IPA domains to trust.

A trust between ipa1.test and ipa2.test can be established using the standard ipa trust-add command. This approach allowed us to investigate what is and is not working after the trust link is created. Both the FreeIPA and SSSD teams can also work on validating that the IPA functionality available for handling trusted Active Directory users and groups also works with IPA-IPA trust.
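For reference, the invocation below is only a sketch of how that could look on the ipa1.test side; it mirrors the syntax used for Active Directory trusts today, and the exact options may change once IPA-IPA trust support is finalized:

kinit admin
ipa trust-add ipa2.test --type=ad --admin admin@IPA2.TEST --password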

I recorded a small demo on Fedora 40:

Video: https://www.youtube.com/embed/z6o1tj9XUMQ

Two administrators establish trust between their environments and then allow use of resources across the trust. Since the exact details of how the trust is established will change, I omitted that step and show pre-created trust entries along with the ID range details. Only the 'Active Directory trust with POSIX attributes' ID range type is supported for now. That should not be a problem, though, because IPA deployments do provide POSIX information.

To allow access to the IPA API, an administrator of the trusting domain needs to create ID overrides for users from the trusted domain. Once the overrides are in place, a user from the trusted domain can issue IPA API calls. In the demo we run the ipa ping call and show that the underlying Kerberos service tickets were requested and issued.
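As a rough sketch of those steps (the ID view name and user are illustrative, mirroring how trusted Active Directory users are handled today):

# On the trusting domain (ipa1.test): create an ID override for a trusted user
ipa idoverrideuser-add 'Default Trust View' alice@ipa2.test

# As that trusted user: authenticate and call the IPA API
kinit alice@IPA2.TEST
ipa ping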

Future work

Now that we validated the approach, the road splits into two:

  • figure out how to get trust established
  • fix remaining bugs to make ‘day two’ operations possible.

Establishing trust is an interesting problem. Or two. For pure IPA-IPA trust we don't need to provision the same infrastructure as with an Active Directory trust: we don't need to run Samba on each domain controller to be able to create trusted domain objects and respond to DCE RPC requests, because we don't use DCE RPC in SSSD.

On top of that, FreeIPA has an extensive set of passwordless authentication methods which require use of a FAST channel when obtaining an initial Kerberos TGT. When a trust to Active Directory is created, we first authenticate to an AD DC to get a Kerberos TGT for the Active Directory administrator. Windows does not support passwordless authentication methods through Kerberos except for PKINIT (smartcards), so in most cases we deal with password authentication. If an IPA administrative account is using a passwordless method to authenticate, we first have to create a FAST channel, or the trusted-to-be domain controller will not even expose its supported pre-authentication methods. Any existing Kerberos ticket could be used to create the FAST channel, but it must be from the trusted domain, and we don't have a trust yet. In our own domain we can use a machine account (on a particular machine) or Anonymous PKINIT to create the FAST channel. For a trusted-to-be domain we have to trust the CA that issued the PKINIT certificates to their KDCs. And the KDCs might only be accessible through an HTTPS proxy (the MS-KKDCP protocol). This means some kind of pre-trust exchange is needed to make sure we can authenticate against their KDCs successfully.

Also, we have no idea which pre-authentication methods will be available to the user. Practically, we only care about being able to authenticate to the trusted domain's IPA API and then use that connection to set things up. For this it would be enough to authenticate to the trusted-to-be domain's IPA API endpoint, save the received cookies, and use them. This method wouldn't support passwordless authentication either, except for an OTP method. FIDO2 and external IdP (OAuth2) authentication aren't yet directly supported in the IPA web UI and IPA API endpoints.

With all those complications, we are looking at enabling OAuth2-based authentication and authorization endpoints in IPA first. We can then use them to request a token from a trusted-to-be domain's OAuth2 endpoint and use it to establish the trust. A remote domain controller will handle the authentication of its users and will be able to supply all the required details: trusted CA chains, ID ranges, scopes for authentication/ID mapping, etc.

The OAuth2 endpoint will also allow us to handle our own passwordless authentication methods for the IPA Web UI and for applications on enrolled systems, such as Cockpit, which will be able to delegate user login to IPA.

A great journey towards Fedora CoreOS and bootc

Posted by Fedora Magazine on May 31, 2024 08:00 AM

Some time ago I wrote an article about Fedora Server. At the time, I didn't know Fedora CoreOS, and my use case led me to Fedora Server as a good alternative to run my workload, which is mainly based on containers. Thanks to comments from the community, I learned about Fedora CoreOS as the natural next step, and they were not wrong.

Getting to know Fedora CoreOS has been a good experience. I use Fedora Silverblue on my laptop, so I wasn’t a stranger to atomic (image based) operating systems.

Provisioning with Ignition

The documentation for this project is well organised and simple to follow. The first step was playing with Ignition to provision my node. Something I noticed from the beginning was that Ignition would be used to install Fedora CoreOS for the first time (first boot), but any future changes would have to be applied manually.

Despite the documentation advising that it should be possible to reinstall Fedora CoreOS while keeping the data persistent, I couldn't get that result. This is due to the procedure I have to use on a cloud provider for reinstallations.

Nevertheless, to record post-installation changes, I keep my Ignition file up to date and matching my node. This policy helps me to remember what I have done and it makes it easy to replicate my server in the future.

As a rule of thumb, all system configuration on my node is handled by Ignition. That includes content in /etc and installed (layered) packages. Changes in my home directory are made once the server is up and running, and they are backed up regularly.
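As an illustration of that rule of thumb, here is a minimal Butane sketch (not my actual file; the key and hostname are placeholders) of the kind of configuration kept in sync with the node. Butane converts it to the Ignition JSON that is consumed at first boot:

variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... user@example
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: myserver

Running butane --pretty --strict config.bu > config.ign produces the Ignition file that is passed to the node at provisioning time.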

Layering and Toolbx

As suggested in the documentation, and similar to the criteria I follow on my laptop, I try to keep layering to a minimum, and use Toolbx to install utilities and troubleshooting tools.

Toolbx works in user space and it cannot access the root of the host filesystem, so there are a few limitations on what makes sense to install inside the container. Programs that need to access the host filesystem other than /home, /mnt and a few other specific locations wouldn’t be good candidates to run in Toolbx. The same rule applies for programs that need access to systemd services running on the host.

Ultimately, it depends on each individual case, but I opted to use the following criteria:

  • Toolbx: htop, ncdu, nmap, speedtest-cli, etc.
  • Layered: httpd, cockpit, tailscale, fail2ban, postfix, etc.
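A quick sketch of the Toolbx workflow, using package names from the list above:

toolbox create        # create the default Fedora toolbox container
toolbox enter         # open a shell inside it
sudo dnf install htop ncdu nmap speedtest-cli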

How bootc is rpm-ostree on steroids

Following the announcement of container images at Red Hat summit, I started to look at this technology.

Using existing container-building techniques, bootc allows you to build your own OS. Container images use the OCI specification, and container tools build and transport your containers. But once the container is installed on a node, it becomes a regular OS.

Think of Fedora CoreOS as an opinionated final product, whereas bootc reference images are skeleton bootable containers to build on.

As of today, there are three official container images available:

  • quay.io/fedora/fedora-bootc
  • quay.io/centos-bootc/centos-bootc
  • registry.redhat.io/rhel9/rhel-bootc
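To make this concrete, a minimal Containerfile sketch built on the Fedora image could look like the following; the package list is only an example of the layering described below, and the COPY line assumes an etc/ directory in the build context:

FROM quay.io/fedora/fedora-bootc:40

# Layer system packages into the image
RUN dnf install -y httpd cockpit fail2ban postfix && dnf clean all

# Ship configuration under /etc as part of the image
COPY etc/ /etc/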

Some of the issues I faced with rpm-ostree were resolved for my use case. Unlike Ignition, which applies only at first boot, all changes are centralized in a container image and applied to my node in a continuous integration approach orchestrated by a Containerfile and bootc.

Furthermore, bootc includes the dnf package manager which has several benefits over the rpm-ostree package manager. For example, it is easier to query installed and available packages from existing and added repositories using dnf.

You may be tempted to execute dnf install. It appears to work, but it will error out. How the OS is configured is interesting: there is an overlay mount on / with no available space, so dnf install is tricked into believing that there is no space available.

Finally, with a simple command (bootc usr-overlay) bootc mounts an overlay on /usr, making it possible to use dnf to install packages temporarily. Once you are done testing, this overlay will be dropped at boot, or it can be manually unmounted (umount -l /usr). I recommend removing all packages installed this way before unmounting /usr, so any configuration files installed in /etc will also be removed.
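A quick sketch of that temporary-install flow (the package is just an example):

sudo bootc usr-overlay           # mount a transient overlay on /usr
sudo dnf install -y strace       # temporary install for troubleshooting
# ... use the tool ...
sudo dnf remove -y strace        # remove it so /etc is cleaned up too
sudo umount -l /usr              # or simply reboot to discard the overlay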

Personally, I feel more comfortable installing packages in my container image than I do when layering with rpm-ostree. However, I still follow the same criteria and put most system packages in the container image and the rest in Toolbx. Regarding my system configuration (/usr, /etc content), it is all implemented in the container image.

Unlike Fedora CoreOS, which provides a default user (core), bootc images don't have a default user other than root. A default user can be created as part of the container build, or during the installation.

Similar to Fedora CoreOS, my system is completely reproducible at all times. There is, however, more control, more fine-tuning, and more options available when building a container versus defining Ignition.

Filesystem structure

The filesystem structure follows ostree specifications:

  • /usr directory is read-only, so all changes are solely managed by the container image.
  • /etc is editable, but any change applied in the container will be transferred to the node if the relevant configuration file wasn’t modified locally.

Anything that needs to be added to /var (including /var/home) is done during first boot. Afterwards, /var is untouched.

Auto updates

Fedora CoreOS uses zincati as the agent handling transactional updates and reboots. There are a couple of parameters that control this process. One is wariness which indicates how eager the server is to receive updates. Another is the strategy to initiate a reboot. These parameters are used to control when rpm-ostree downloads image updates and when the system will be rebooted.
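For illustration, these knobs are set with small drop-in files on Fedora CoreOS; the values below are only examples and, to the best of my knowledge, match the documented configuration format:

# /etc/zincati/config.d/51-rollout-wariness.toml
[identity]
rollout_wariness = 0.5

# /etc/zincati/config.d/55-updates-strategy.toml
[updates]
strategy = "periodic"

[updates.periodic]
time_zone = "UTC"

[[updates.periodic.window]]
days = [ "Sat", "Sun" ]
start_time = "23:30"
length_minutes = 60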

In a similar manner, bootc uses the bootc-fetch-apply-updates agent to accomplish the same outcome. This service checks the container repository for updates to the image, downloads them as required, and then reboots the system to apply the changes.

For convenience, it is simple to host a repo on GitHub and use Actions to generate images. In case you are not familiar with GitHub Actions, the code below will do the trick:

The following example assumes your Containerfile is located at: repo/mycontainer/Containerfile.
The action is implemented at: repo/.github/workflows/mycontainer-image.yml.

name: build container -> mycontainer-image
run-name: building container -> mycontainer-image
on:
  workflow_dispatch:              # Allow triggering this action manually
  schedule:
    - cron: '30 0 * * 6'          # Schedule build at 12:30 AM every Saturday
jobs:
  mycontainer-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: run podman build
        run: podman build -t mycontainer-image mycontainer
      - name: push image to ghcr.io
        run: podman push --creds=${{ github.actor }}:${{ secrets.GITHUB_TOKEN }} mycontainer-image ghcr.io/<username>/mycontainer-image

Tweaks

I encountered a few hurdles along the way that I would like to share here in the hope that they will help others achieve a softer landing than I did.

When I created my container image, I configured sshd to not allow password authentication, and I injected a public key as part of the container build. It took me some time to understand why my connection was being rejected by the node.

Finally, I realized that for sshd to read authorized keys, it needs the ssh-key-dir package. So, if you intend to use an SSH key to connect to your node, be sure to install that package as part of your container image. One could argue that it should be part of the base, but presumably there is some reason it is not.

Some packages rely on alternatives (explained here: https://www.redhat.com/sysadmin/alternatives-command). For example, installing Postfix creates /usr/sbin/sendmail as a symlink backed by /usr/sbin/sendmail.postfix, so any application relying on the sendmail command will still work. A similar situation exists for the man command: when the man-db package is installed, /usr/bin/man is created as a symlink whose target is /usr/bin/man.man-db.

In the container image those symlinks are not created, so sendmail and man in these examples don't work. As a temporary solution, I created symlinks to their respective handlers as part of the container image. I have submitted a bug report explaining this issue, and I can see that there is work being done to resolve it.
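A sketch of that workaround in the Containerfile (the link targets follow the examples above; the exact commands are illustrative, not an official fix):

# Work around missing alternatives symlinks until the bug is resolved
RUN ln -s /usr/sbin/sendmail.postfix /usr/sbin/sendmail && \
    ln -s /usr/bin/man.man-db /usr/bin/man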

Final thoughts

Below are the most common methods to install your custom container image:

  • bootc install to-disk is for installing the image on an available (not busy) disk. It doesn't work if the disk holds the root of the OS you booted from. An example invocation follows this list.
  • bootc install to-existing-root can be used to install to the root of the OS you booted from (e.g. Fedora Server). This method won't create the standard bootc partitions. Instead, it will follow the existing disk configuration.
  • bootc-image-builder will create a raw image for fedora-bootc and a raw/iso image for centos-bootc and rhel-bootc.
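As an example of the first method, bootc install is typically run from the image itself via podman. This sketch assumes the image built earlier and a spare disk; adjust the image name and device path, and treat the exact flags as illustrative:

sudo podman run --rm --privileged --pid=host \
    -v /dev:/dev -v /var/lib/containers:/var/lib/containers \
    --security-opt label=type:unconfined_t \
    ghcr.io/<username>/mycontainer-image \
    bootc install to-disk /dev/vdb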

If this article has inspired your curiosity, then here are some links for further reading:

Perform backups with Systemd

Posted by Fabio Alessandro Locati on May 31, 2024 12:00 AM
Many strategies can be employed to build resilience in IT systems. Personally, I think one of the most critical yet overlooked ones - both in personal and corporate settings - is backups. I recently had to back up a folder containing the state of a service running on a Fedora machine. As often happens, an interesting aspect of this service is that the backups are consistent and, therefore, restorable only if the service is stopped while the configuration folder is backed up.

Merging Config Fragments in the Linux Kernel

Posted by Adam Young on May 30, 2024 03:50 PM

If you have a config fragment that you want included in your Linux config file, you can use the make system to add it to the existing config. For example, I have a file called: kernel/configs/ampere-topic-mctp.config that looks like this:

CONFIG_MCTP_TRANSPORT_PCC=m

This tells the make system to change CONFIG_MCTP_TRANSPORT_PCC from the default (=n) to module (=m) when building. To include this config fragment, I append it to the make command used to build the config file. For example:

make olddefconfig  ampere-topic-mctp.config

This can be done with multiple fragments.
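For example, assuming the additional fragments also live in kernel/configs/ (the extra file names here are made up), they can all be appended to the same invocation:

make olddefconfig  ampere-topic-mctp.config another-topic.config third-topic.config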

Fedora London Meetup comes alive in 2024

Posted by Fedora Community Blog on May 30, 2024 08:00 AM

Our London-based Fedora Community members got together in person to get to know the people behind the Fedora Account System and Matrix nicks.

At a Glance

  • What: A revival of Fedora local meetup in London
  • Where: University College London
  • When: 18:00-20:30, 26 Feb 2024

Introduction

  • Ankur Sinha (Neuroscience, Join SIG, Ask Fedora, @ankursinha). Ankur kindly offered to help with the venue at University College London.
  • Christopher Klooz (Ask Fedora, SELinux Confined Users SIG, @py0xc3)
  • Hank Lee (Documentation writer, Music & Audio SIG, @hankuoffroad)
  • Richard Jones (Packager, RISC-V SIG, Red Hat, @rjones)
  • Alessandro (Red Hat, @aleskandro)
  • Anthony Kesterton (Red Hat)
  • Dario Molinari (Red Hat, @dariomolinari). Dario brought us Italian Amaretti Biscuits. Grazie!

How did this London meetup come about?

  • Revive (reimagine) previous UK meetup or user group
  • Have in-person meetup for UK-based Fedora contributors
  • Grow community and retain contributors

RISC-V Showcase

  • Richard Jones gave an interesting talk about RISC-V/FPGA hardware. Thanks for bringing us a wealth of knowledge about electronic engineering and a set of RISC-V test units.

Roundtable discussion

  • Community cohort: Group mentoring
  • Onboarding experience: Consolidation of contributor guides and identifying common process
  • Virtual delivery of training session with recorded tutorial and live chat Q&A
  • Collaboration with student community
  • Upstream working model and collaboration in SELinux Confined Users SIG

Snaps from @ London Meetup

Richard Jones gives a talk about RISC-V past and present.
Introduction to Logic Gates in FPGA (am I right?)

Credit to Dario Molinari

Pub Chat After Meetup

A few of the attendees couldn't resist 'London pub culture'. We didn't plan it, but London magically attracted us to an old English pub, a short walk from University College London. We hope more people can join the pub chat next time.

Related posts

https://discussion.fedoraproject.org/t/organising-monthly-new-contributor-cohorts/107600

https://discussion.fedoraproject.org/t/2nd-local-london-meetup/107164


Photo by Marcin Nowak on Unsplash. Modified by Justin W. Flory.

The post Fedora London Meetup comes alive in 2024 appeared first on Fedora Community Blog.

Where did 5 Million EPEL-7 systems come from starting in March?

Posted by Stephen Smoogen on May 30, 2024 01:09 AM

ADDENDUM (2024-05-30T01:08+00:00): Multiple Amazon engineers reached out after I posted this and there is work on identifying what is causing this issue. Thank you to all the people who are burning the midnight oil on this.

ORIGINAL ARTICLE:

So starting earlier this year, the Fedora mirror manager mailing list started getting reports about heavier usage from various mirrors. Looking at the traffic reported, it seemed to be a large number of EL-7 systems starting to check in a lot more than in the past. At first I thought it was because various domains were beginning to move their operating systems from EL-7 to a newer release using one of the transition tools like Red Hat LEAPP or Alma ELevate. However, the surge didn't seem to die down, and in fact the Fedora Project mirrors have had regular problems with load due to this surge. 

A couple of weeks ago, I finally had time to look at some graphs I had set up when I was in Fedora Infrastructure and saw this:

Cumulative EPEL releases since 2019

 

Basically the load is an additional 5 million systems querying both the Fedora web proxies for mirror data and then mirrors around the world to get further information. Going through the logs, there seems to be a 'gradual' shift of additional servers starting to ask for content when they had not before. Looking at the logs, it is hard to see what the systems asking for this data are. EL-7 uses yum, which doesn't report any user data beyond:

urlgrabber/3.10 yum/3.4.

That could mean the system is Red Hat Enterprise Linux 7, CentOS Linux 7, or even Amazon Linux 2 (which is loosely based on CentOS 7, but with enough changes that using EPEL with it is probably not advised).

Because there wasn't a countme feature or any identifier in the yum code, the older data-analysis program counts each IP address once per day: if it sees an address one time, it counts it once, and if it sees it 1000 times, it still counts it once. This undercounts various cloud and other networks sitting behind a NAT router, so normally maybe only one IP address would show up in a class C (/24) network space. What seemed to change is that where we previously might count only one IP address in such networks, we were now seeing every IP address in a class C network showing up.

Doing some backtracking of the IP addresses to ASNs, I was able to show that the 'top 10' ASNs changed dramatically in March.

January 27, 2024
Total  ASN 
1347016 16509_AMAZON-02,
219728 14618_AMAZON-AES,
53500 396982_GOOGLE-CLOUD-PLATFORM,
11205 8560_IONOS-AS
10403 8987_AMAZON
8463 32244_LIQUIDWEB,
8019 54641_IMH-IAD,
7965 8075_MICROSOFT-CORP-MSN-AS-BLOCK,
7889 398101_GO-DADDY-COM-LLC,
7234 394303_BIGSCOOTS,

February 27, 2024
1871463 16509_AMAZON-02,
219545 14618_AMAZON-AES,
51511 396982_GOOGLE-CLOUD-PLATFORM,
11021 8560_IONOS-AS
9016 8987_AMAZON
8208 32244_LIQUIDWEB,
7885 54641_IMH-IAD,
7768 8075_MICROSOFT-CORP-MSN-AS-BLOCK,
7618 398101_GO-DADDY-COM-LLC,
7383 394303_BIGSCOOTS,

March 27, 2024
2604768 16509_AMAZON-02,
276737 14618_AMAZON-AES,
34674 396982_GOOGLE-CLOUD-PLATFORM,
10211 8560_IONOS-AS
9560 135629_WESTCLOUDDATA
8134 8987_AMAZON
7952 54641_IMH-IAD,
7677 32244_LIQUIDWEB,
7445 394303_BIGSCOOTS,
7250 398101_GO-DADDY-COM-LLC,

April 27, 2024
4247068 16509_AMAZON-02,
1807803 14618_AMAZON-AES,
65274 8987_AMAZON
51668 135629_WESTCLOUDDATA
41190 55960_BJ-GUANGHUAN-AP
9799 396982_GOOGLE-CLOUD-PLATFORM,
7662 54641_IMH-IAD,
7561 394303_BIGSCOOTS,
6613 32244_LIQUIDWEB,
6425 8560_IONOS-AS

May 27, 2024
4186230 16509_AMAZON-02,
1775898 14618_AMAZON-AES,
62698 8987_AMAZON
50895 135629_WESTCLOUDDATA
38521 55960_BJ-GUANGHUAN-AP
9059 396982_GOOGLE-CLOUD-PLATFORM,
7613 394303_BIGSCOOTS,
7531 54641_IMH-IAD,
6307 398101_GO-DADDY-COM-LLC,
6222 32244_LIQUIDWEB,

I am not sure what changed in Amazon in March, but it has had a tremendous impact on parts of Fedora Infrastructure and the volunteer mirror systems which use it.

System insights with command-line tools: lscpu and lsusb

Posted by Fedora Magazine on May 29, 2024 08:00 AM

Introduction

Fedora (and other common Linux setups out there) offers you an array of tools for managing, monitoring, and understanding the system. Among these tools are a series of commands that begin with ls (for “list”).

They provide easy insights into various aspects of the system's hardware and resources. This article series gives you an intro to and an overview of many of them, starting with the simpler ones. This post will cover lscpu and lsusb.

lscpu – Display CPU information

The lscpu command gathers and displays information about the CPU architecture. It is provided by the util-linux package. The command gathers CPU information from multiple sources like /proc/cpuinfo and architecture-specific libraries (e.g. librtas on PowerPC):

$ lscpu

This command outputs information like the number of CPUs, threads per core, cores per socket, and the CPU family and model.

If asked, it outputs detailed CPU information in JSON format. This provides a structured view that is particularly useful for scripting and automation:

$ lscpu --extended --json

Advanced usage example

With the machine readable JSON output, you can extract information using jq (a powerful command-line tool that allows users to parse, filter, and manipulate JSON data efficiently and worth an article of its own). For example, the following command will extract the current MHz for each CPU:

export LANG=en_US.UTF-8
export LC_ALL="en_US.UTF-8"
lscpu --json --extended \
| jq '.cpus[] | {cpu: .cpu, mhz: .mhz}'

Let’s look at the single parts of the command:

  • export LANG=en_US.UTF-8 and export LC_ALL="en_US.UTF-8" are making sure that the output is not using localized numbers. For example, a German language setting can result in broken JSON output because of the use of commas in place of periods as floating point separators.
  • lscpu --json --extended generates the detailed CPU information in JSON format.
  • jq '.cpus[] | {cpu: .cpu, mhz: .mhz}' will iterate over each entry in the cpus array. The {cpu: .cpu, mhz: .mhz} part constructs a new JSON object for each CPU entry showing the CPU number (cpu) and its current frequency in MHz (mhz).

Example output from a laptop operating in performance mode:

$ lscpu --json --extended \
| jq '.cpus[] | {cpu: .cpu, mhz: .mhz}'
{
"cpu": 0,
"mhz": 3700.0171
}
{
"cpu": 1,
"mhz": 3700.2241
}
{
"cpu": 2,
"mhz": 3700.1121
}
{
"cpu": 3,
"mhz": 3884.2539
}

and later in power saver mode:

$ lscpu --json --extended \
| jq '.cpus[] | {cpu: .cpu, mhz: .mhz}'
{
"cpu": 0,
"mhz": 1200.0580
}
{
"cpu": 1,
"mhz": 1200.0070
}
{
"cpu": 2,
"mhz": 1200.5450
}
{
"cpu": 3,
"mhz": 1200.0010
}

lsusb – Display USB Devices Information

The lsusb command displays detailed information about the USB buses in the system and the devices connected to them. It is provided by the usbutils package and helps users and system administrators easily view the configuration and the devices attached to their USB interfaces:

$ lsusb

This produces a list of all USB buses, devices connected to them, and brief information about each device, such as ID and manufacturer. This is particularly useful for a quick check of what devices are connected to the system and if you need the USB device ID for udev rules or the like.
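For instance, the vendor and product IDs reported by lsusb can be dropped straight into a udev rule. This is only a sketch: the IDs are taken from the Fibocom modem shown later in this article, and the mode and group are arbitrary:

# /etc/udev/rules.d/99-example.rules
SUBSYSTEM=="usb", ATTR{idVendor}=="2cb7", ATTR{idProduct}=="0210", MODE="0660", GROUP="dialout"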

Usage example and output

For those needing more detail about their USB devices, lsusb can list much more information:

$ lsusb | grep Fibocom
Bus 001 Device 013: ID 2cb7:0210 Fibocom L830-EB-00 LTE WWAN Modem

$ sudo lsusb -d 2cb7:0210 -v
Bus 001 Device 013: ID 2cb7:0210 Fibocom L830-EB-00 LTE WWAN Modem
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 239 Miscellaneous Device
bDeviceSubClass 2 [unknown]
bDeviceProtocol 1 Interface Association
bMaxPacketSize0 64
idVendor 0x2cb7 Fibocom
idProduct 0x0210 L830-EB-00 LTE WWAN Modem
bcdDevice 3.33
iManufacturer 1 FIBOCOM
iProduct 2 L830-EB-00
iSerial 3 004999010640000
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 0x00a1
bNumInterfaces 4
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xe0
Self Powered
Remote Wakeup
MaxPower 100mA
[ more output omitted for readability ]

Using the -v and -t options will tell lsusb to dump the physical USB device hierarchy as a tree including IDs. The following shows a detailed tree of all USB devices (here using a ThinkPad T480S), their types, speeds, and device classes. This is particularly useful for troubleshooting USB device issues:

$ lsusb -t -v
/:  Bus 001.Port 001: Dev 001, Class=root_hub, Driver=xhci_hcd/12p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
    |__ Port 001: Dev 002, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
        ID 046d:c069 Logitech, Inc. M-U0007 [Corded Mouse M500]
    |__ Port 002: Dev 003, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
        ID 046a:c098 CHERRY 
    |__ Port 002: Dev 003, If 1, Class=Human Interface Device, Driver=usbhid, 1.5M
        ID 046a:c098 CHERRY 
    |__ Port 003: Dev 004, If 0, Class=Chip/SmartCard, Driver=usbfs, 12M
        ID 058f:9540 Alcor Micro Corp. AU9540 Smartcard Reader
    |__ Port 005: Dev 005, If 0, Class=Video, Driver=uvcvideo, 480M
        ID 5986:2123 Bison Electronics Inc. 
    |__ Port 005: Dev 005, If 1, Class=Video, Driver=uvcvideo, 480M
        ID 5986:2123 Bison Electronics Inc. 
    |__ Port 006: Dev 013, If 0, Class=Communications, Driver=cdc_mbim, 480M
        ID 2cb7:0210 Fibocom L830-EB-00 LTE WWAN Modem
    |__ Port 006: Dev 013, If 1, Class=CDC Data, Driver=cdc_mbim, 480M
        ID 2cb7:0210 Fibocom L830-EB-00 LTE WWAN Modem
    |__ Port 006: Dev 013, If 2, Class=Communications, Driver=cdc_acm, 480M
        ID 2cb7:0210 Fibocom L830-EB-00 LTE WWAN Modem
    |__ Port 006: Dev 013, If 3, Class=CDC Data, Driver=cdc_acm, 480M
        ID 2cb7:0210 Fibocom L830-EB-00 LTE WWAN Modem
    |__ Port 007: Dev 007, If 0, Class=Wireless, Driver=btusb, 12M
        ID 8087:0a2b Intel Corp. Bluetooth wireless interface
    |__ Port 007: Dev 007, If 1, Class=Wireless, Driver=btusb, 12M
        ID 8087:0a2b Intel Corp. Bluetooth wireless interface
    |__ Port 008: Dev 008, If 0, Class=Video, Driver=uvcvideo, 480M
        ID 5986:2115 Bison Electronics Inc. 
    |__ Port 008: Dev 008, If 1, Class=Video, Driver=uvcvideo, 480M
        ID 5986:2115 Bison Electronics Inc. 
    |__ Port 009: Dev 009, If 0, Class=Vendor Specific Class, Driver=[none], 12M
        ID 06cb:009a Synaptics, Inc. Metallica MIS Touch Fingerprint Reader
/:  Bus 002.Port 001: Dev 001, Class=root_hub, Driver=xhci_hcd/6p, 5000M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
    |__ Port 003: Dev 002, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
        ID 0bda:0316 Realtek Semiconductor Corp. Card Reader
/:  Bus 003.Port 001: Dev 001, Class=root_hub, Driver=xhci_hcd/2p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
    |__ Port 001: Dev 002, If 0, Class=Hub, Driver=hub/5p, 480M
        ID 0451:8442 Texas Instruments, Inc. 
        |__ Port 001: Dev 003, If 0, Class=Hub, Driver=hub/7p, 480M
            ID 0424:2137 Microchip Technology, Inc. (formerly SMSC) 
        |__ Port 003: Dev 005, If 0, Class=Hub, Driver=hub/3p, 480M
            ID 0bda:5411 Realtek Semiconductor Corp. RTS5411 Hub
            |__ Port 001: Dev 006, If 0, Class=Vendor Specific Class, Driver=r8152, 480M
                ID 0bda:8153 Realtek Semiconductor Corp. RTL8153 Gigabit Ethernet Adapter
        |__ Port 004: Dev 004, If 0, Class=Human Interface Device, Driver=usbhid, 480M
            ID 0451:82ff Texas Instruments, Inc. 
/:  Bus 004.Port 001: Dev 001, Class=root_hub, Driver=xhci_hcd/2p, 10000M
    ID 1d6b:0003 Linux Foundation 3.0 root hub

Conclusion

Even though they are simple, both commands offer insights into the system’s configuration and status. Whether you’re troubleshooting, optimizing, or simply curious, these tools provide valuable data that can help you better understand and manage your Linux environment. See you next time when we will have a look at more useful listing and information command line tools and how to use them.

12/22 Elections for the Council, FESCo, Mindshare, and the EPEL committee open for two more days

Posted by Charles-Antoine Couret on May 28, 2024 08:59 PM

Since the Fedora Project is community driven, part of the membership of the following bodies must be renewed: Council, FESCo, and Mindshare. And it is the contributors who decide. Each candidate naturally has a platform and a track record that they want to put forward during their term to steer the Fedora Project in certain directions. I invite you to study the proposals of the different candidates.

I voted

To vote, you need an active FAS account and to make your choice on the voting site. You have until Friday, December 23 at 1 a.m. French time to do so, so don't wait too long.

In addition, as with the selection of the supplemental wallpapers, you can collect a badge if you click on a link from the interface after taking part in a vote.

I will take this opportunity to summarize the role of each of these committees, both to clarify how decisions are made in the Fedora Project and to illustrate its community nature.

Council

The Council is what you could call the project's high council. It is therefore Fedora's highest decision-making body. The Council defines the long-term goals of the Fedora Project and takes part in organizing the project to achieve them. This happens in particular through discussions that are open and transparent toward the community.

It also manages the financial side. This covers in particular the budgets allocated to organize events, produce goodies, or fund initiatives that help meet those goals. Finally, it is responsible for resolving major interpersonal conflicts within the project, as well as for legal matters related to the Fedora trademark.

The roles within the Council are complex.

Those with full voting rights

First of all there is the FPL (Fedora Project Leader), who leads the Council and is the de facto representative of the project. Their role is to keep the Council's agenda and discussions on track, but also to represent the Fedora Project as a whole. They must also help reach consensus during debates. This role is held by a Red Hat employee and is chosen with the consent of the Council.

There is also the FCAIC (Fedora Community Action and Impact Coordinator), who acts as the link between the community and Red Hat to facilitate and encourage cooperation. As with the FPL, this position is held by a Red Hat employee with the approval of the Council.

There are two seats dedicated to technical representation and to the more marketing / ambassador side of the project. These two seats result from nominations decided within the bodies dedicated to those activities: FESCo and Mindshare. They are community seats, but only those committees decide who fills them.

That leaves two fully open community seats, for which anyone can run or vote. They allow other areas of activity, such as translation or documentation, to be represented, as well as the community voice in the broadest possible sense. It is for one of these seats that voting is open this week!

Those with partial voting rights

A diversity advisor is appointed by the FPL with the support of the Council to promote the inclusion within the project of groups that are most often discriminated against. Their goal is to define programs to address this issue and to resolve related conflicts that may arise.

A Fedora program manager takes care of the schedule for the different Fedora releases. They make sure deadlines are respected and track features and test cycles. They also act as the Council's secretary. This role is held by a Red Hat employee, again with the approval of the Council.

FESCo

FESCo (Fedora Engineering Steering Committee) is a body composed entirely of elected members and fully dedicated to the technical side of the Fedora Project.

It deals in particular with the following topics:

  • New features of the distribution;
  • Sponsors for the packager role (those who can supervise a newcomer);
  • The creation and management of SIGs (Special Interest Groups) to organize teams around specific topics;
  • The packaging procedure for packages.

The chair of this group rotates. The 9 members are elected for one year, with each election renewing half of the body. This time, 4 seats are up for election.

Mindshare

Mindshare is an evolution of FAmSCo (Fedora Ambassadors Steering Committee), which it replaces. It is the equivalent of FESCo for the more human side of the project. While FESCo is mostly concerned with packagers, this committee focuses on ambassadors and new contributors.

Here are examples of the topics it inherited from FAmSCo:

  • Growing the number of ambassadors through mentoring;
  • Encouraging the creation and development of more local communities, such as the French community, for example;
  • Tracking the events in which ambassadors take part;
  • Allocating resources to the various communities or activities, based on need and interest;
  • Handling conflicts between ambassadors.

And its new responsibilities:

  • Communication between teams, in particular between engineering and marketing;
  • Motivating contributors to get involved in different working groups;
  • Welcoming and guiding new contributors, and trying to promote the inclusion of people who are often under-represented in Fedora (women, people who are neither American nor European, students, etc.);
  • Managing the marketing team.

The committee has 9 members: a chair, 2 from the ambassadors, one from design and web, one from documentation, one from marketing, one from CommOps, and the last two elected. It is for one of these last seats that the vote is open.


The EPEL committee

Officially named the EPEL Steering Committee, this committee has existed since 2007 but has been open to candidacies since spring 2024. This is therefore the committee's first election.

The committee's goal is to help define the EPEL project, which provides community packages from Fedora for CentOS or RHEL. This makes it possible to offer a more diverse set of packages that Red Hat does not wish to maintain, or packages in more recent versions, all built on Fedora's infrastructure.

There are 4 seats open in this election out of the 7 seats on the committee. Their term will be two years.

The $TRANSPORT macro of syslog-ng

Posted by Peter Czanik on May 28, 2024 10:58 AM

Do you want to know how your log messages arrived at syslog-ng? The new $TRANSPORT macro provides you with part of the answer. It shows you the protocol variant for network sources, or the kind of local source used.

Read more at https://www.syslog-ng.com/community/b/blog/posts/the-transport-macro-of-syslog-ng
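As a rough, untested sketch of how the macro can be used (the source and path names are illustrative), messages can be split into files per transport:

destination d_by_transport {
    file("/var/log/by-transport/${TRANSPORT}.log" create-dirs(yes));
};

log {
    source(s_network);
    destination(d_by_transport);
};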


Week 21 update

Posted by Ankur Sinha "FranciscoD" on May 28, 2024 09:38 AM

Other stuff

I lost my tabs the other day. I was tinkering with my Qutebrowser configuration, and I opened the browser but saved the session and quit it before all the tabs had loaded. This meant that the next time I opened it, all the tabs were completely blank. Bug? Not really---I needed to either have waited for the session to load, or I should've quit without saving a partially loaded session.

The silver lining is that I've closed 100 tabs with research papers that I'd had open for the past year but not managed to read. So, a clear up of sorts.

Returned Sony earbuds

I'd bought myself a set of WF-C700N ear buds some time ago. I'd seen a few reviews. They won the What Hi-Fi award in 2023, so they seemed a good buy.

They're very good sound wise for their price, but unfortunately, they turned out to be very buggy. The right ear bud would just not turn on a lot of the time. The docs suggested re-initialising them to fix the issue, and while it did fix the issue for a bit, the right earbud would go off again and not turn back on. I did this a few times, but it became increasingly annoying. Heading out on my commute only to find that the right earbud had stopped working again was really not fun. On top of that, I couldn't reliably re-initialise them each time. Sometimes the reinitialising process wouldn't work.

A search showed me that lots of users were experiencing this, with multiple folks on Reddit confirming the issue and suggesting returning the earbuds. I persevered and tried various things on my Android phone too, but in the end nothing really fixed the issue. So, I've returned them too.

I shelled out a bit more and got the top of the line WF-1000XM5 ones now. They're double the price, and have much better sound quality and a few more features. So far, so good 🤞.

GNOME will have two Outreachy interns conducting a series of short user research exercises

Posted by Felipe Borges on May 28, 2024 08:51 AM

We are happy to announce that GNOME is sponsoring two Outreachy internship projects for the May-August 2024 Outreachy internship round, during which the interns will be conducting a series of short user research exercises using a mix of research methods.

Udo Ijibike and Tamnjong Larry Tabeh will be working with mentors Allan Day and Aryan Kaushik.

Stay tuned to Planet GNOME for future updates on the progress of this project!

How to upgrade Fedora Linux 39 to Fedora 40

Posted by Fedora fans on May 28, 2024 06:30 AM
With the final release of Fedora Linux 40 now available, users running older releases are encouraged to upgrade their systems to the latest stable version. In this post we will therefore upgrade Fedora Linux 39 to Fedora 40.

Before upgrading, it is best to back up your data. By now you have probably received a notification in the system's graphical package manager; by following its steps you can upgrade your system graphically.

In addition to the graphical method, it is also possible to upgrade from the command line, which is the approach we will use in this post. To start, make sure your software packages are up to date. To update them, simply run the following command:

# dnf upgrade --refresh

Note that if the kernel was updated, you should reboot the system once.
Now install the DNF plugin:

# dnf install dnf-plugin-system-upgrade

Then run the following command to download the Fedora 40 upgrade packages:

# dnf system-upgrade download --releasever=40

After the upgrade packages have downloaded successfully, it is time to reboot the system so the upgrade process can begin. To do so, simply run the following command to reboot the system:

# dnf system-upgrade reboot

 

The post How to upgrade Fedora Linux 39 to Fedora 40 first appeared on Fedora Fans (طرفداران فدورا).

Malayalam open font design competition 2025 announced

Posted by Rajeesh K Nambiar on May 28, 2024 04:52 AM

Rachana Institute of Typography, in association with KaChaTaThaPa Foundation and Sayahna Foundation, is launching a Malayalam font design competition for students, professionals, and amateur designers.

Selected fonts will be published under Open Font License for free use.

It is not necessary to know details of font technology; skills to design characters would suffice.

Timelines, regulations, prizes and more details are available at the below URLs.

English: https://sayahna.net/fcomp-en
Malayalam: https://sayahna.net/fcomp-ml

Registration

Interested participants may register at https://sayahna.net/fcomp

Last day for registration is 30th June 2024.

Serverless URL redirects using JavaScript on GitHub Pages

Posted by Vedran Miletić on May 28, 2024 12:00 AM

Serverless URL redirects using JavaScript on GitHub Pages

As many readers of this blog are already aware, we make great use of GitHub Pages for hosting this website and several others. In particular, after FIDIT's inf2 server was finally decommissioned, Pages was the obvious choice for replacing the remaining services it offered.

Since the number and variety of applications and services hosted on the inf2 server grew and diminished organically over time, what remained afterward was a collection of complex, but unrelated link hierarchies that had to be redirected to new locations (remember that Cool URIs don't change).

Cockpit 317

Posted by Cockpit Project on May 28, 2024 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 317:

webserver: System user changes

Cockpit has changed how the system user is handled. All supported distributions use the same system user names. We don’t test, support, or recommend running the web server as root, which had previously been the default with an upstream build and install. Hence, the following ./configure options related to static users have been removed:

  • --with-cockpit-user
  • --with-cockpit-group
  • --with-cockpit-ws-instance-user
  • --with-cockpit-ws-instance-group

The cockpit-ws system user is no longer statically created, but created transiently and on-demand via systemd’s DynamicUser feature. If you would like to remove the previously-used static system user:

  1. Upgrade to Cockpit 317 or later
  2. Run systemctl stop cockpit
  3. Run userdel -r cockpit-ws

The cockpit-wsinstance system user is now declared through systemd-sysusers.

Metrics: Grafana setup now prefers Valkey

The Grafana metrics setup now prefers using valkey over redis on Fedora. See the Fedora change proposal for details.

Try it out

Cockpit 317 is available now:

End of support for Fedora Linux 38

Posted by Fedora fans on May 27, 2024 11:31 AM
Please note that support for Fedora Linux 38 ended on May 21, 2024. From that date, no further update or security packages will be published for Fedora Linux 38. Fedora Linux 39, on the other hand, will continue to receive updates until roughly one month after the release of Fedora Linux 41.

Users are advised to use the latest Fedora release, Fedora 40, or to upgrade their systems.

The post End of support for Fedora Linux 38 first appeared on Fedora Fans (طرفداران فدورا).

A cluster within a cluster with vCluster

Posted by Fedora fans on May 27, 2024 06:30 AM

vCluster is an open source tool developed by Loft Labs that makes it possible to create and manage virtual Kubernetes clusters on top of a single physical cluster. It helps development and DevOps teams improve their development, testing, and deployment processes through better isolation and more efficient use of resources. With vCluster, multiple virtual clusters can be created and deleted quickly, which saves costs and resources and greatly helps with managing development and test environments.

In this post we will install vCluster and use it to set up another Kubernetes cluster on top of an existing Kubernetes cluster (acting as the host).

 

Step 1: A Kubernetes cluster

First, we need a Kubernetes cluster that will host the clusters we intend to create with vCluster. To set up this host cluster you can use Minikube or Kind.
From here on we assume that we have access to a Kubernetes cluster via kubectl.

 

Step 2: Install vCluster

In this step we install the vCluster CLI. To install it on Linux, simply run the following command:

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.20.0-beta.1/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

Note that you can download the latest version of vCluster from the project's GitHub Releases page:

https://github.com/loft-sh/vcluster/releases

To verify the installation, you can use the following command:

 

vcluster --version

 

Step 3: Create a Kubernetes cluster with vCluster

Now, to create a cluster with vCluster inside the host Kubernetes cluster, perform the following steps.

 

Create a namespace:

kubectl create namespace myteam

Create the virtual cluster with vCluster:

vcluster create my-vcluster --namespace myteam

 

Step 4: Connect to the virtual cluster

To connect to the virtual cluster (my-vcluster), simply run the following command:

 

vcluster connect my-vcluster --namespace myteam

 

Note that to disconnect from the virtual cluster, you can use the following command:

vcluster disconnect

 

Step 5: Install an application in the virtual cluster

Now, as a test, we will install nginx in this virtual cluster (my-vcluster). To do so, simply run the following commands:

 

kubectl create namespace demo-nginx
kubectl create deployment ngnix-deployment -n demo-nginx --image=nginx -r 2

To verify, run the following command:

kubectl get pods -n demo-nginx

 

We hope you found this post useful.

 

The post A cluster within a cluster with vCluster first appeared on Fedora Fans (طرفداران فدورا).

Week 21 in Packit

Posted by Weekly status of Packit Team on May 27, 2024 12:00 AM

Week 21 (May 21st – May 27th)

  • We have fixed the syncing of ACLs for propose-downstream for CentOS Stream. (packit#2318)

Episode 430 – Frozen kernel security

Posted by Josh Bressers on May 27, 2024 12:00 AM

Josh and Kurt talk about a blog post about frozen kernels being more secure. We cover some of the history and how a frozen kernel works and discuss why they would be less secure. A frozen kernel is from when things worked very differently. What sort of changes will we see in the future?

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_430_Frozen_kernel_security.mp3

Show Notes

Not again Red Hat

Posted by Jens Kuehnel on May 26, 2024 12:23 PM

Update 2024.05.26: It appears ELRepo has found a way to create kmod packages for RHEL 9.4 and RHEL 8.10. I tested mpt3sas, ftsteutates, and wireguard from the elrepo-testing repo and they work flawlessly.

In this video Jeff Geerling announced that "Corporate Open Source is Dead". He has already dropped support from his really good Ansible playbooks. This was because Red Hat only distributes its sources to customers. Another brick in this wall was announced today by the great ELRepo project.

In this blog post it was announced that Red Hat made some changes in the upcoming 8.10 and 9.4 releases of RHEL, and this will break some of the kernel modules that ELRepo created to allow running RHEL with older cards that are no longer officially supported. The fun thing is that the whole driver was not deprecated; only some of the supported PCI IDs were removed.

Especially for home lab users, this created a big problem: aacraid, megaraid_sas, mlx4, and mpt3sas are drivers used in a lot of home labs everywhere.

Again, Red Hat's overall intention is not the problem. If Red Hat broke support for this in RHEL 10, there would be no problem. It would be interesting to know whether this is an unexpected consequence of a patch or a targeted business decision. Yes, I know why the support was dropped by Red Hat, but Red Hat is not only forgetting its roots, it has again kicked non-production users to the curb, just after they dropped CentOS and broke their promise there as well.

At least for my home lab this creates extra work, because my RAID controller is on the deprecated list. AlmaLinux, at least, has undone this patch, so you don't even need ELRepo to support this older hardware.

I was planning to reinstall the host anyway. I only have to decide between Fedora and AlmaLinux. The time to decide is coming earlier than I hoped.

I was interviewed on NPR Planet Money

Posted by Richard W.M. Jones on May 24, 2024 02:45 PM

I was interviewed on NPR Planet Money about my small role in the Jia Tan / xz / ssh backdoor.

NPR journalist Jeff Guo interviewed me for a whole 2 hours, and I was on the program (very edited) for about 4 minutes! Quite an interesting experience though.

Infra and RelEng Update – Week 21 2024

Posted by Fedora Community Blog on May 24, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for the CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work.

We provide you both an infographic and a text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in-depth details, look at the text version.

Week: 20 May – 24 May 2024

[I&R infographic]

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Linux (OL).

Updates

If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on matrix.

The post Infra and RelEng Update – Week 21 2024 appeared first on Fedora Community Blog.

How to move from Redis to Valkey

Posted by Fedora Magazine on May 24, 2024 08:00 AM

The Redis move to a new license means it will no longer be considered open source software. This article will explain how to move from Redis to the alternative, Valkey.

Background

A few weeks ago, the company behind Redis published a blog post announcing that future versions of the Redis project were moving from the BSD 3-Clause license, a well-accepted and understood open source license, to a dual RSALv2 (Redis Source Available License 2.0) / SSPLv1 (Server Side Public License) license. The blog post plainly states “we openly acknowledge that this change means Redis is no longer open source under the OSI definition” (and, indeed, Fedora doesn’t allow either of these licenses).

In response to this change, a number of existing contributors and maintainers of Redis formed Valkey, a new Linux Foundation project. You can think of Valkey as a continuation of open source Redis. It is derived from the same code-base and has many of the same folks making the decisions and contributing. The Valkey project has a technical steering committee that consists of people that work at Alibaba, AWS, Ericsson, Huawei, Google, and Tencent and an even wider array of organizations as contributors.

How do you move?

As a Fedora user, what are you to do if you use Redis and want to move to Valkey? Thankfully, there is the valkey-compat-redis package to handle the transition for you by moving your data and configuration over to Valkey.

In this scenario, you likely already have Redis installed, running, and loaded with data. For the purpose of this post I started with a fresh Fedora 40 installation, added the Redis (7.2.4) package, and started the server.

From here I’m going to take a peek at the current redis server information:

$ redis-cli info server
# Server
redis_version:7.2.4
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:8c22b097984bd350
redis_mode:standalone
...

Note that the server responds with redis_version:7.2.4; remember this for later.

Since this is a fresh install of Redis, I’m going to write some data to a key so it will be visible later in Valkey.

$ redis-cli set test_key "written in redis 7.2.4"

Migration and Persistence

Now is a good time to consider your persistence settings. How you persist your data in Redis (and Valkey) is highly configurable. Valid usage patterns run the gamut from no persistence whatsoever all the way to disk writes on every key change. valkey-compat-redis will cleanly shut down Redis using SIGTERM, but there isn’t any additional magic: if you don’t have persistence set up, or you have the Redis configuration (/etc/redis/redis.conf) option shutdown-on-sigterm set to now or force, you could lose some or all of your data. However, keep in mind that running valkey-compat-redis isn’t functionally different from doing any previous Redis upgrade in this regard.
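If you want to double-check before switching, a quick sketch of our own (not part of the official migration steps) is to inspect the persistence settings and force a snapshot to disk first:

$ redis-cli config get save        # RDB snapshot schedule; an empty value means snapshots are disabled
$ redis-cli config get appendonly  # "yes" means AOF persistence is enabled
$ redis-cli bgsave                 # write an RDB snapshot before migrating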

Installing the compat package

With that out of the way, it’s time to move to Valkey!

Install valkey-compat-redis with the allowerasing flag. You need the allowerasing flag so the package manager can remove Redis and put Valkey in its place. 

$ sudo dnf install valkey-compat-redis --allowerasing

...

Installed:
  valkey-7.2.5-5.fc40.x86_64                               valkey-compat-redis-7.2.5-5.fc40.x86_64
Removed:
  redis-7.2.4-3.fc40.x86_64

Complete!

What did this just do? The package takes your existing configuration files and persistence data and moves them to Valkey. Valkey is able to accept these directly with no changes or translation steps. Valkey is fully compatible with the Redis API, protocol, persistence data, ports, and configuration files. 

This is all possible because the current generally available release of Valkey (7.2.5) is, for practical purposes, a name change from Redis 7.2.4. The code changes in Valkey 7.2.5 were about re-branding. This version intentionally introduced no new features or functionality, to make sure you can move to Valkey easily.

Starting Valkey

Now that you’ve installed Valkey, you’ll need to start it:

$ sudo systemctl start valkey

Then get the info about the server:

$ redis-cli info server
# Server
redis_version:7.2.4
server_name:valkey
valkey_version:7.2.5
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:b0d9f188ef999bd3
redis_mode:standalone
...

A few things to note in this example:

  • Valkey responds with redis_version:7.2.4 as well as valkey_version:7.2.5. Some software has specific checks for versions and could break on an unknown response. Retaining redis_version maintains maximum compatibility while still identifying the server with valkey_version.
  • Valkey has a binary named valkey-cli, with redis-cli provided as a symbolic link to it. This will keep your muscle memory intact until you get used to typing valkey-cli. It does the same for valkey-server and redis-server.

Interacting with Valkey

Now, it’s time to take a look at the data and interact with the server. First get the key that was written in Redis (of course, if you don’t have persistence turned on this won’t work):

$ valkey-cli get test_key
"written in redis 7.2.4"

Just as a proof point, you can write data to the server and read back the key written in Valkey alongside the one written in Redis:

$ valkey-cli set from-new-server "valkey-7.2.5"
OK
$ valkey-cli mget test_key from-new-server
1) "written in redis 7.2.4"
2) "valkey-7.2.5"

This example is pretty simple, but it illustrates a lot. First, because valkey-compat-redis moves your configuration over, your configuration doesn’t need to be the default one: no matter how you have configured Redis 7.2.4, valkey-compat-redis moves simple or complex configurations over and Valkey can read them. This means that the same migration steps work even for Cluster or Sentinel.

Second, since Valkey connects in the same way (using the same protocols and default port) your application should work the same. Passing in the same connection information to your application that once pointed at Redis (and now points to Valkey) will yield the same results.

Other Considerations

All of this being said, there are a few things that can’t be illustrated in the above example. You don’t have to use valkey-compat-redis; instead, you can just install valkey. If you’ve got Redis on a machine and run:

$ sudo dnf install valkey

You’ll have Valkey installed alongside Redis. You can’t successfully start up both processes using the default configuration. However, changing the configuration file at /etc/valkey/valkey.conf to use a different port would allow you to run Valkey and Redis side by side, as sketched below. Indeed, you could even cluster Valkey and Redis together since they speak the same protocol, enabling more complex migrations.
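A minimal sketch of that port change (our example; any otherwise unused port works) in /etc/valkey/valkey.conf would be:

# /etc/valkey/valkey.conf
port 6380    # the default 6379 is already taken by the running Redis instance

$ sudo systemctl start valkey
$ valkey-cli -p 6380 ping
PONG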

The other thing to keep in mind is that Valkey isn’t any more backwards compatible than Redis 7.2.4. Moving from a very old version of Redis to Valkey carries the same breaking changes as moving to Redis 7.2.x. Thankfully, most versions carry minimal breaking changes even between major versions. 

What’s next?

This post has largely focused on what hasn’t changed and why that’s a good thing. The Valkey team intends to add no new features to 7.2.x; only bug fixes, delivered as patch releases, will land there in the future. All new changes are going into 8.x. While the team is busy planning a full slate of new features that increase performance and usability, they don’t plan to break the API. If you’re interested in helping work on the future of Valkey, be sure to check out valkey-io/valkey and read the CONTRIBUTING.md file to learn more.

PHP version 8.2.20RC1 and 8.3.8RC1

Posted by Remi Collet on May 24, 2024 05:42 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests, and also as base packages.

RPMs of PHP version 8.3.8RC1 are available

  • as base packages
    • in the remi-modular-test for Fedora 38-40 and Enterprise Linux ≥ 8
    • in the remi-php83-test repository for Enterprise Linux 7
  • as SCL in remi-test repository

RPMs of PHP version 8.2.20RC1 are available

  • as base packages
    • in the remi-modular-test for Fedora 38-40 and Enterprise Linux ≥ 8
    • in the remi-php82-test repository for Enterprise Linux 7
  • as SCL in remi-test repository

The Fedora 39, 40, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

PHP version 8.1 is now in security mode only, so no more RC will be released.

Installation: follow the wizard instructions.

Announcements:

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Parallel installation of version 8.2 as Software Collection:

yum --enablerepo=remi-test install php82

Update of system version 8.3 (EL-7) :

yum --enablerepo=remi-php83,remi-php83-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.2 (EL-7) :

yum --enablerepo=remi-php82,remi-php82-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module switch-to php:remi-8.2
dnf --enablerepo=remi-modular-test update php\*

Notice:

  • version 8.3.8RC1 is also in Fedora rawhide for QA
  • EL-9 packages are built using RHEL-9.4
  • EL-8 packages are built using RHEL-8.9
  • EL-7 packages are built using RHEL-7.9
  • oci8 extension uses the RPM of the Oracle Instant Client version 21.13 on x86_64 or 19.19 on aarch64
  • intl extension uses libicu 73.2
  • RC version is usually the same as the final version (no change accepted after RC, exception for security fix).
  • versions 8.2.20 and 8.3.8 are planned for June 6th, in 2 weeks.

Software Collections (php82, php83)

Base packages (php)

SSL: How It Works and Why It Matters

Posted by Farhaan Bukhsh on May 23, 2024 02:46 PM
How did it start? I have been curious about what an SSL Certificate is and how it works. This is a newbie's log on the path to understanding a thing or two about how it works. We casually talk about security and how SSL certificates should be used to...

Live Migration: setup

Posted by pjp on May 23, 2024 12:12 PM

While debugging a QEMU live migration related issue, I learned how to set up live migration of a virtual machine (VM) across two host machines. Migration here means moving a VM from one host machine to another. Live migration means that while a VM is running on the source machine, its state is copied to the destination machine; once this copying is done, the VM stops on the source machine and starts running on the destination machine. Applications running inside the VM are supposed to run as before, as if nothing changed.

Migrating a VM from one machine to another would also entail moving its disk image and XML files. But copying (rsync(1)) files across machines could take a long time, depending on file size and network bandwidth.

f39vm.qcow2  107.39G  100%  334.82MB/s  0:05:05
r91vm.qcow2   42.96G  100%  111.88MB/s  0:06:06
r93vm.qcow2   21.48G  100%  242.79MB/s  0:01:24

It is not feasible to stall a VM for such a long time during migration. So migration requires that the source and destination machines are connected by a reliable high-bandwidth network channel and that the VM disk files are accessed on both machines via a shared storage system. Accordingly, the source host machine is turned into an NFS server and the destination machine is made an NFS client:

source)# dnf install nfs-utils
source)# systemctl enable nfs-server
source)# systemctl enable rpcbind
source)# echo '/home/images 10.x.x.0/24(rw)' > /etc/exports
source)# mount -t nfs source.host.machine.com:/home/images/ /mnt/images/

destin)# dnf install nfs-utils
destin)# systemctl enable nfs-server
destin)# systemctl enable rpcbind
destin)# virsh pool-create-as --name=migrate \
--type=netfs --source-host=source.host.machine.com \
--source-path='/home/images' --source-format=nfs \
--target=/mnt/images/

The VM XML configuration file should access the disk image via the /mnt/images path, as shown here:

<disk type='file' device='disk'>
  <source file='/mnt/images/r93vm.qcow2' index='1'/>
</disk>

If the NFS setup is incorrect, the migrate command may throw errors like this:

error: Unsafe migration: Migration without shared storage is unsafe

Once the NFS server and client setup is done, try to start the VM on both machines to confirm that it works, before attempting the migration.

source)# virsh start --console r93vm
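Before wrapping it in a loop, a single manual migration can be tried with the core command that the script below builds on (a sketch; destination.host.machine.com stands in for your actual destination host):

source)# virsh migrate --verbose --live --persistent \
    r93vm qemu+ssh://destination.host.machine.com/system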

Once the VM runs on both machines, we can try the migration between the two machines with the following script:

# cat migrate-test.sh 
#!/bin/sh

if [ $# -lt 2 ]; then
    echo "Usage: $0 <guest> <destination-host>"
    exit 1;
fi

GUEST=$1
DHOST=$2
while true;
do
    ds=$(virsh domstate $GUEST | sed -ne '1p')
    if [ "$ds" == "running" ]; then
        id=$(virsh domid $GUEST | sed -ne '1p')
        echo -n "migrate $GUEST.........$id"
        sleep 3s
        /bin/virsh migrate --verbose --persistent \
            --timeout 3 --timeout-postcopy \
            --live --postcopy  \
            $GUEST  qemu+ssh://$DHOST/system
    fi
    sleep 3s
done
exit 0;

This script live migrates a VM between source and destination machines in an infinite loop using the postcopy method of migration. Next we’ll see more about these migration methods.

New badge: DevConf.cz 2025 Attendee !

Posted by Fedora Badges on May 23, 2024 07:17 AM
DevConf.cz 2025 Attendee: You attended the 2025 iteration of DevConf.cz, a yearly open source conference in Czechia!

New badge: DevConf.cz 2024 Attendee !

Posted by Fedora Badges on May 23, 2024 07:15 AM
DevConf.cz 2024 Attendee: You attended the 2024 iteration of DevConf.cz, a yearly open source conference in Czechia!

New badge: Let's have a party (Fedora 40) !

Posted by Fedora Badges on May 23, 2024 07:09 AM
Let's have a party (Fedora 40): You attended the F40 Online Release Party!

Introducing the WebKit Container SDK

Posted by Patrick Griffis on May 23, 2024 04:00 AM

Developing WebKitGTK and WPE has always had challenges, such as the number of dependencies or its fairly complex C++ codebase, which not all compiler versions handle well. To help with this we’ve made a new SDK to make it easier.

Current Solutions

There have always been multiple ways to build WebKit and its dependencies on your host; however, this was never a great developer experience. Only very specific hosts could be “supported”, you often had to build a large number of dependencies, and the end result wasn’t very reproducible for others.

The current solution used by default is a Flatpak-based one. This was a big improvement for ease of use and excellent for reproducibility, but it introduced many challenges for development work. As it has a strict sandbox and provides read-only runtimes, it was difficult to use complex tooling/IDEs or develop third-party libraries in it.

The new SDK tries to take a middle ground between those two alternatives, isolating itself from the host to be somewhat reproducible, yet being a mutable environment flexible enough for a wide range of tools and workflows.

The WebKit Container SDK

At its core it is an Ubuntu OCI image with all of the dependencies and tooling needed to work on WebKit. On top of this we added some scripts to run and manage these containers with podman and to aid in developing inside the container. Its intention is to be as simple as possible and not change traditional development workflows.

You can find the SDK and follow the quickstart guide on our GitHub: https://github.com/Igalia/webkit-container-sdk

The main requirement is that this only works on Linux with podman 4.0+ installed, for example Ubuntu 23.10+.

In the most simple case, once you clone https://github.com/Igalia/webkit-container-sdk.git, using the SDK can be a few commands:

source /your/path/to/webkit-container-sdk/register-sdk-on-host.sh
wkdev-create --create-home
wkdev-enter

From there you can use WebKit’s build scripts (./Tools/Scripts/build-webkit --gtk) or CMake. As mentioned before it is an Ubuntu installation so you can easily install your favorite tools directly like VSCode. We even provide a wkdev-setup-vscode script to automate that.

Advanced Usage

Disposability

A workflow that some developers may not be familiar with is making use of entirely disposable development environments. Since these are isolated containers, you can easily make two. This allows you to do work in parallel that would otherwise interfere, without worrying about it, and to get back to a known good state easily:

wkdev-create --name=playground1
wkdev-create --name=playground2

podman rm playground1 # You would stop first if running.
wkdev-enter --name=playground2

Working on Dependencies

An important part of WebKit development is working on the dependencies of WebKit rather than WebKit itself, either for debugging or for new features. This can be difficult or error-prone with previous solutions. In order to make this easier we use a project called JHBuild, which isn’t new but works well with containers and is a simple solution for working on our core dependencies.

Here is an example workflow working on GLib:

wkdev-create --name=glib
wkdev-enter --name=glib

# This will clone glib main, build, and install it for us. 
jhbuild build glib

# At this point you could simply test if a bug was fixed in a different version of glib.
# We can also modify and debug glib directly. All of the projects are cloned into ~/checkout.
cd ~/checkout/glib

# Modify the source however you wish then install your new version.
jhbuild make

Remember that containers are isolated from each other, so you can even have two terminals open with different builds of glib. This can also be used to test projects like Epiphany against your build of WebKit if you install it into the JHBUILD_PREFIX.

To Be Continued

In the next blog post I’ll document how to use VSCode inside of the SDK for debugging and development.

Quectel EM05-G (LTE module) with ThinkPad T14 Gen4 on Fedora 39 and 40

Posted by Andreas Haerter on May 23, 2024 01:46 AM

We recently bought a bunch of Lenovo ThinkPad T14 Gen4, model 21HDCTO1WW. They were shipped with a Quectel EM05-G WWAN module. To our surprise, ModemManager did not activate the module right away, even though Fedora Linux support for the hardware is known to be good. It turned out that our hardware revision reports a different USB device ID, 2c7c:0313, than previous versions, which used 2c7c:030a:

Bus 003 Device 002: ID 2c7c:0313 Quectel Wireless Solutions Co., Ltd. Quectel EM05-G

Therefore, the necessary FCC unlock procedure does not get triggered automatically even though an unlock script for the Quectel EM05-G was already added by Leah Oswald. However, the modem works perfectly fine if you unlock it manually after each reboot:

mmcli -L
sudo mbimcli --device-open-proxy --device="/dev/cdc-wdm0" --quectel-set-radio-state=on

We have opened an upstream issue to fix the problem. If you don’t want to wait until a new ModemManager version including the fix arrives on your computer, you can help yourself as follows:

sudo mkdir -p "/etc/ModemManager/fcc-unlock.d/"
sudo chown root:root -R "/etc/ModemManager/"
sudo find "/etc/ModemManager/" -type d -exec chmod 0755 {} +
sudo find "/etc/ModemManager/" -type f -exec chmod 0644 {} +
sudo ln -s -f "/usr/share/ModemManager/fcc-unlock.available.d/2c7c" "/etc/ModemManager/fcc-unlock.d/2c7c:0313"

This creates a symlink to the working FCC unlock script 2c7c for the new USB device ID 2c7c:0313 in your local configuration. Hope that helps.
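One extra note from our side (not part of the original workaround): ModemManager will most likely only pick up the new unlock configuration after a restart of the service or a reboot:

sudo systemctl restart ModemManager
mmcli -L    # the modem should now come up without the manual unlock step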

Please use GPLv3 “or-later” instead of “only”

Posted by Andreas Haerter on May 23, 2024 12:25 AM

It makes sense to prefer copyleft licenses. The most popular copyleft license is probably the GNU General Public License (GPL), with version 3 from 2007 being the latest one. When you use the GPLv3, you have to decide whether to go for “GPL v3.0 or later” or “GPL v3.0 only”. This is because of Clause 14, “Revised Versions of this License”, of the GPLv3.[1]

The clause addresses how future versions of the license will be handled and states that the Free Software Foundation (FSF) may publish new versions of the GPL, which will be similar in spirit to the current version but may include changes to address new legal and technological issues. This also protects Free Software from potential missteps by the FSF itself, as, for example, no one could publish a valid GPLv4 without copyleft. Some argue that the GPLv3 is fundamentally different from the GPLv2, but a detailed examination shows this is not the case: it is indeed similar in spirit. Just read it for yourself. For our part, we therefore strongly recommend choosing the “or later” option for our own projects.

Learn from past mistakes

Using GPL-2.0-only in the past created significant compatibility issues with other licenses, hindering the integration and distribution of combined works. For example, software licensed under GPL-2.0-only is incompatible with the Apache-2.0 license, preventing the combination of many codebases even today. And some projects had to spend a lot of time and work to change licensing to achieve better license compatibility and reduce integration barriers. These issues can lead to fragmentation and reduced flexibility in the open-source ecosystem.

GPLv2 showed that this adaptability might be necessary, as sticking to GPL-2.0-only did not provide any significant benefit and led to compatibility problems. Therefore, it makes sense to adopt the “or later” option whenever possible. This approach not only preserves the spirit of the license but also provides a safeguard against potential future challenges, much like a well-prepared contingency plan.

Conclusion

The “or later” clause of GPL-3.0-or-later is crucial as it allows the evolution of the license to keep up with changing circumstances, ensuring ongoing protection and freedom for software users and developers. This clause is like a safety net that allows us to adapt to future changes in the legal and technological landscape and to enable cooperation from various parties in the future. Use it.
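As a small practical illustration of our own (SPDX identifiers are not discussed above), the choice is usually recorded per file, so “or later” versus “only” is literally a one-line difference:

#!/bin/sh
# SPDX-License-Identifier: GPL-3.0-or-later
# (the stricter alternative would be: SPDX-License-Identifier: GPL-3.0-only)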


  1. It is the same with Clause 9 of GPLv2.

mostly unknown keepalived feature

Posted by Jens Kuehnel on May 22, 2024 08:15 PM

There are lots of introductions to keepalived, like Setting up a Linux cluster with Keepalived: Basic configuration | Enable Sysadmin (redhat.com) or Keepalived and high availability: Advanced topics | Enable Sysadmin (redhat.com).

But I recently learned that keepalived has a cool feature that makes writing keepalived configurations much easier (thanks Spindy). Almost all documentation found on the net shows you that you need two different configuration files. But this is not the case. There is an extension that allows you to roll out the same configuration to all nodes.

Compare the example from the first link above; it uses these two configurations:

server1# cat /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 255
        advert_int 1
        authentication {
              auth_type PASS
              auth_pass 12345
        }
        virtual_ipaddress {
              192.168.122.200/24
        }
}
server2# cat /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 254
        advert_int 1
        authentication {
              auth_type PASS
              auth_pass 12345
        }
        virtual_ipaddress {
              192.168.122.200/24
        }
}

When you look at it, it's almost the same file: only two lines are different, state and priority. Here the @ syntax comes into play. You can roll out this file on both sides and it will work the same way as above:

# cat /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
        @server1 state MASTER
        @server2 state BACKUP
        interface eth0
        virtual_router_id 51
        @server1 priority 255
        @server2 priority 254
        advert_int 1
        authentication {
              auth_type PASS
              auth_pass 12345
        }
        virtual_ipaddress {
              192.168.122.200/24
        }
}

So you can configure lines that are only valid for certain hosts by adding @HOSTNAME in front. More possibilities are explained in the keepalived.conf(5) man page, in the section “Conditional configuration and configuration id”.
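By default the identifier after @ is matched against the node's hostname, which is what makes the example above work when rolled out unchanged. As far as we understand the man page, keepalived can also be started with an explicit configuration id if you prefer not to depend on the real hostnames, for example:

# this node then uses the @server1 lines, regardless of its hostname
keepalived --config-id server1 -f /etc/keepalived/keepalived.conf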

syslog-ng Prometheus exporter

Posted by Peter Czanik on May 22, 2024 11:08 AM

Prometheus is an open-source monitoring system that collects metrics from your hosts and applications, allowing you to visualize and alert on them. The syslog-ng Prometheus exporter allows you to export syslog-ng statistics, so that Prometheus can collect it.

While an implementation in Go has been available for years on GitHub (for more information, see this blog entry), that solution uses the old syslog-ng statistics interface. And while that Go-based implementation still works, syslog-ng 4.1 introduced a new interface that provides not just more information than the previous statistics interface, but does so in a Prometheus-friendly format. The information available through the new interface has been growing ever since.

The syslog-ng Prometheus exporter is implemented in Python. It also uses the new statistics interface, making new fields automatically available when added.
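If memory serves (treat this as an assumption; the linked post is the authoritative source), the same new interface can also be queried directly on the host, which is handy for previewing what the exporter will see:

syslog-ng-ctl stats prometheus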

Read more at https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-prometheus-exporter


Upgrade of Copr servers

Posted by Fedora Infrastructure Status on May 22, 2024 09:00 AM

We're updating copr packages to the new versions which will bring new features and bugfixes.

This outage impacts the copr-frontend and the copr-backend.

Contribute at the Podman 5.1 and Kernel 6.9 Test Week

Posted by Fedora Magazine on May 21, 2024 05:06 PM

Fedora test days are events where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. This is a perfect way to start contributing to Fedora, if you haven’t in the past.

There are two upcoming test periods in the next two weeks covering two topics:

  • Thursday 23 May is to test Podman 5.1
  • Sunday 26 May through Monday 03 June is to test Kernel 6.9

Podman 5.1 Test Day

Podman is a daemon-less, open source, Linux-native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images. It provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine. As part of a recent discussion, the Rawhide Test Day efforts, and the Podman Container Engine Team’s collaborative efforts, we will hold a test day for a minor Podman release.

During this test day, on Thursday 23 May, the focus will be on testing the changes that will be coming in Fedora 41 (Rawhide) as we move ahead with Podman 5.1. This test day is an opportunity for anyone to learn and interact with the Podman Community and container tools in general.

The wiki page helps the testers know and understand the scope of the test day. The Test day app helps the testers submit the results once they have tried the test cases.

Kernel 6.9 Test Week

The kernel team is working on final integration for Linux kernel 6.9. This recently released kernel version will arrive soon in Fedora Linux. As a result, the Fedora Linux kernel and QA teams have organized a test week from Sunday, May 26, 2024 to Monday, June 03, 2024.

The wiki page contains links to the test images you’ll need to participate. The results can be submitted in the test day app.

How do test days work?

A test day/week is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is available on the wiki pages mentioned above. If you’re available on or around the days of the events, please do some testing and report your results. All the test day pages receive some final touches which complete about 24 hrs before the test day begins. We urge you to be patient about resources that are, in most cases, uploaded hours before the test day starts.

Come and test with us to make the upcoming Fedora Linux 41 even better.

Fedora Linux 38 is going away

Posted by Charles-Antoine Couret on May 20, 2024 10:00 PM

Maintenance of Fedora Linux 38 comes to an end this Tuesday, May 21, 2024.

What does this mean?

One month after the release of a Fedora version n, here Fedora Linux 40, version n-2 (thus Fedora Linux 38) is no longer maintained.

This month gives users time to upgrade. As a result, a release is officially maintained for about 13 months on average.

Indeed, the end of life of a release means that it will no longer receive updates and no more bugs will be fixed. For security reasons, with vulnerabilities left unpatched, users of Fedora Linux 38 and earlier are strongly advised to upgrade to Fedora Linux 40 or 39.

What should you do?

If you are affected, you need to upgrade your systems. You can download newer CD or USB images.

It is also possible to upgrade without reinstalling, using DNF or GNOME Software.
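For reference, the DNF route typically looks like this (a sketch, targeting Fedora Linux 40; use 39 if you prefer):

sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=40
sudo dnf system-upgrade reboot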

GNOME Software should also have notified you via a pop-up about the availability of Fedora Linux 40 or 39. Feel free to start the upgrade that way.